EBS Volumes – deleteOnTermination?

When using EC2 instances with EBS-backed storage, whether or not your instances are set up to delete their EBS volumes on termination can be a big deal, especially if you burn AMIs and provision instances over and over. You can easily end up with a pile of unused EBS volumes, paying for storage you never touch.

Audit your systems with a command similar to this one – the last column in the output is whether or not deleteOnTermination is set:

for instanceid in $(ec2-describe-instances | awk '/INSTANCE/ {print $2}')
do
  echo "InstanceID: ${instanceid}"
  ec2-describe-instance-attribute -v -b "${instanceid}" | egrep "BLOCKDEVICE.*false"
done

If you see output like the following:

InstanceID: i-xxxxxxxx            
  BLOCKDEVICE     /dev/sda1        vol-xxxxxxxx    2013-05-24T20:32:05.000Z        false   

…you have instances with volumes that will not delete when the instance is terminated.
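
If you happen to have the newer unified aws CLI installed, a rough equivalent of this audit is the following; the --query expression is my own sketch, not part of the original tooling:

# List each instance with the device, volume ID, and DeleteOnTermination flag
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,BlockDeviceMappings[].[DeviceName,Ebs.VolumeId,Ebs.DeleteOnTermination]]' \
  --output text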

To fix this, run the following command for each instance, and burn another AMI:

ec2-modify-instance-attribute -b '/dev/sda1=vol-xxxxxxxx:true' i-xxxxxxxx
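
If you prefer the newer aws CLI over the old API tools, the equivalent change looks roughly like this (a sketch assuming the same device and volume names as above):

# Set DeleteOnTermination on the root device mapping of one instance
aws ec2 modify-instance-attribute \
  --instance-id i-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":true}}]'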

I made a simple bash script that iterates over all EC2 instances in an account and sets deleteOnTermination on each attached volume that does not already have it. Depending on how many EBS volumes are attached to each instance, it is worth re-running the audit afterwards to confirm every volume picked up the flag.

#!/bin/bash

#
# Audit instances and set all attached volumes to deleteOnTermination
#

for instanceid in $(ec2-describe-instances | awk '/INSTANCE/ {print $2}')
do
  # Split only on newlines so each BLOCKDEVICE line is handled separately;
  # this must be the ANSI-C quoted $'\n', not the two characters '\n'.
  IFS=$'\n'
  result=$(ec2-describe-instance-attribute -v -b "${instanceid}" | egrep "BLOCKDEVICE.*false")
  for line in ${result}
  do
    echo "${line}"
    device=$(echo "${line}" | awk '{print $2}')
    volume=$(echo "${line}" | awk '{print $3}')
    ec2-modify-instance-attribute -b "${device}=${volume}:true" "${instanceid}"
    if [ $? -gt 0 ]
    then
      echo "command failed for ${instanceid}"
    fi
  done
  unset IFS
done

exit 0

Note that for some instances all volumes were set properly, and for others they were not. I did not take the time to troubleshoot this discrepancy or write a more robust loop at this point. Patches welcome.

AWS VPC DB Security Group

The other day I was working with a client on a CloudFormation template that used RDS instances within a VPC. While creating the DB security group resource, I was getting an error like the following:

STACK_EVENT  CloudFormationName  DBSecurityGroupName   
       AWS::RDS::DBSecurityGroup                2012-12-17T22:30:20Z  CREATE_FAILED
       Please see the documentation for authorizing DBSecurityGroup ingress. For VPC,
       EC2SecurityGroupName and EC2SecurityGroupOwner must be omitted.To 
       authorize only the source address of this request (and no other address), pass
       xx.xx.xx.xx/32 as the CIDRIP parameter.

It turns out that beyond the requirement for a DB subnet group (sketched below), I also needed to change the way I create DB security groups within the VPC. I solved the problem by using the CIDRIP parameter with the IP ranges of the two private subnets:

    "DBSecurityGroupName": {
       "Type": "AWS::RDS::DBSecurityGroup",
       "Properties": {
          "EC2VpcId" : { "Ref" : "VpcName" },
          "DBSecurityGroupIngress" : [ { "CIDRIP": "10.1.201.0/24" }, { "CIDRIP": "10.1.301.0/24" } ],
          "GroupDescription": "Application Server Access"
        }   
    },
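
For reference, the DB subnet group mentioned above is its own resource in the template. This is only a sketch; PrivateSubnetA and PrivateSubnetB are placeholder names for whatever subnet resources or parameters your template defines:

    "DBSubnetGroupName": {
       "Type": "AWS::RDS::DBSubnetGroup",
       "Properties": {
          "DBSubnetGroupDescription": "Private subnets for RDS",
          "SubnetIds": [ { "Ref" : "PrivateSubnetA" }, { "Ref" : "PrivateSubnetB" } ]
        }
    },

The RDS instance then references it through its DBSubnetGroupName property.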

The examples on the official docs page did not help with this issue; I copied them and they failed for this particular scenario, so I ended up experimenting until I got it working.

TLS Issue with Amazon OpenLDAP 2.4.23-15

Today I had an issue getting a good TLS connection from an OpenLDAP client to an OpenLDAP server on an EC2 instance using the packages supplied by Amazon.

The problem packages were:

openldap-2.4.23-15.13.amzn1.x86_64
openldap-clients-2.4.23-15.13.amzn1.x86_64

The problem was resolved through updating to version 2.4.23-20 via:

yum -y update openldap-clients
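
Afterwards you can sanity-check which build actually got installed:

rpm -q openldap openldap-clients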

The problem was reproduced with the following ldapsearch command:

# ldapsearch -xZZ -d 4
TLS: did not find any valid CA certificates in /etc/openldap/cacerts
TLS: could not perform TLS system initialization.
TLS: error: could not initialize moznss security context - error -5939:No more entries in the directory
TLS: can't create ssl handle.
ldap_start_tls: Connect error (-11)

AWS Elastic Load Balancing in a Private Subnet

I recently learned a valuable lesson when setting up load balancing with an Elastic Load Balancer in a Virtual Private Cloud that uses public and private subnets and a NAT host.

When creating the ELB, be sure to create it within the public subnets, not the private subnets where the instances that will be attached to the ELB live!

Creating the ELB within the public subnet(s) allows it to route traffic properly through the internet gateway.
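
As a rough sketch of what that looks like with the aws CLI (the names and IDs are placeholders, and a classic ELB is assumed since that is what existed at the time), the load balancer is created with the public subnet IDs:

# Create the ELB in the public subnets, not the private ones
aws elb create-load-balancer \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-cccc3333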

Note that any instance in a private subnet needs a route to the NAT host in the public subnet (which itself has an EIP) for internet access through the internet gateway, and any instance in a public subnet needs an EIP of its own to route through the internet gateway.

This guy said it first.