Get Started with the Boto3 SDK for AWS Infrastructure Automation

Have you ever thought of using Python to build your AWS infrastructure? Because you can! Boto3 is AWS’s powerful Software Development Kit (SDK) for Python. Learning how to use the SDK should be on your priority list.

First things first: AWS is an industry standard and one of the most widely adopted cloud providers today. According to a Cloud Security Alliance (CSA) report, AWS handles 41.5% of the workloads in public cloud platforms.

Python is a popular and easy language to learn. This article will show you why using Python with Boto3 is a straightforward way to get into automating tasks and building infrastructure.

The pink river dolphin, "Boto Rosa", shares nothing with the SDK but its name.

Before you start, this article makes a few assumptions, so check the following before proceeding:

  1. You have an AWS account and have installed the AWS CLI.
  2. You have Python installed and basic Python knowledge.
  3. Your AWS credentials are properly configured in ~/.aws/credentials.
  4. You understand AWS concepts like EC2, IAM, key pairs, and AMIs.

Let’s Get Going

To begin, install Boto3 using pip:

pip install boto3

Now that you have Boto3, you can create a file and import it, so it can be used by your script:

#!/usr/bin/env python3

import boto3

Now, let's do something interesting with this file. Most tutorials teach us to write hello world, let's play with EC2 instances instead. Before we write any code, it's important to make the following decisions:

  1. Select the region we're going to be using — I'm going to use us-east-1. To see a list of available regions for your account, type the following into your console:
aws ec2 describe-regions --all-regions

  2. Choose the size of the instance. I'm going to use a free-tier t2.micro. To see a list of available instance types, check the AWS documentation. Make sure you are free-tier eligible before you begin if you do not want to incur costs.

  3. Choose an image. I've picked the Amazon Linux 2 AMI. For a list of publicly available images, check the AWS Marketplace.

With that information in hand, add the following to your file:

#!/usr/bin/env python3

import boto3

ec2 = boto3.resource('ec2', region_name="us-east-1")

keyfile = open('awskey.pem', 'w')
keypair = ec2.create_key_pair(KeyName='awskey')
KeyPairOut = str(keypair.key_material)
keyfile.write(KeyPairOut)

instance = ec2.create_instances(
    ImageId = 'ami-00dc79254d0461090',
    MinCount = 1,
    MaxCount = 1,
    InstanceType = 't2.micro',
    KeyName = 'awskey'
)
print(instance[0].id)

Save the file; for this example, call it create_instance.py (any name works).

Go to the command line and run the file from the directory where you saved it using the python command:

python3 create_instance.py
After the script runs, it will print an instance id to the console. If you log in to your AWS console, you will see that an instance with that id and the specifications from your file has been created.

How the instance appears in the AWS Console

So, how does this file work? Let's dissect each part:

ec2 = boto3.resource('ec2',region_name="us-east-1")

Here we create a resource that connects via Boto3 to the AWS region of our choice — in this case, us-east-1 (North Virginia).

To be able to connect to instances, we need key pairs. So the next part of the code creates a key pair and saves it to our local machine. This line opens a file called awskey.pem for writing and stores the file handle in the variable keyfile:

keyfile = open('awskey.pem','w')

Next, we use the Boto3 function create_key_pair to generate a key called “awskey” and store it under the variable keypair:

keypair = ec2.create_key_pair(KeyName='awskey')

The problem is that we need to store this key locally, or else we won't be able to connect to the instance. To do that, we capture the key and store it in a file.

KeyPairOut = str(keypair.key_material) # captures the keypair
keyfile.write(KeyPairOut) # stores it into the file defined as keyfile

If we stop running the script at this point, there would be a key pair listed on the AWS console called 'awskey' and a corresponding file called awskey.pem on our local machine.

The next line creates the instance itself. Boto3's create_instances function supports many arguments which are beyond the scope of this how-to, so we're going to focus on the basics. First, we call the function:

instance = ec2.create_instances(

Then, we define the AMI we chose ahead of time:

ImageId = 'ami-00dc79254d0461090',

Next, we set the minimum and maximum number of instances we want:

MinCount = 1,
MaxCount = 1,

Define the instance type:

InstanceType = 't2.micro',

And tell it to use the key we've just created:

KeyName = 'awskey'
) # Don't forget to close the parenthesis!

That will do it. To make sure we know the id of the instance created, we finish by printing the id of the EC2 instance we just made:

print (instance[0].id)

To test the connection to your newly created instance, do as you would with any other instance:

  1. Set the permissions of your key to 400.
  2. Go to the AWS Console and obtain the public IP address of your instance.
  3. SSH to your instance using the key.
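In the terminal, those steps look roughly like this (the IP address is a placeholder; substitute your instance's public IP from the console, and note that ec2-user is the default user on Amazon Linux 2 AMIs):

```shell
# restrict the key's permissions so the SSH client will accept it
chmod 400 awskey.pem

# connect as ec2-user; replace 203.0.113.10 with your instance's public IP
ssh -i awskey.pem ec2-user@203.0.113.10
```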

All it took was a few lines of code. Now, imagine you are an engineer who needs to launch 20 instances with a specific AMI. All you have to do is change this bit of code:

MinCount = 20,
MaxCount = 20,

Now the script will create 20 instances.
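To avoid editing the script by hand each time, the choices we made earlier can be pulled into a small helper that builds the keyword arguments for create_instances. This is a sketch; build_launch_request is a hypothetical name, not part of Boto3:

```python
def build_launch_request(ami, instance_type, key_name, count):
    """Build the keyword arguments for ec2.create_instances().

    build_launch_request is a hypothetical helper, not a Boto3 API.
    """
    return {
        "ImageId": ami,
        "MinCount": count,
        "MaxCount": count,
        "InstanceType": instance_type,
        "KeyName": key_name,
    }

# usage, against a real resource:
# instances = ec2.create_instances(
#     **build_launch_request('ami-00dc79254d0461090', 't2.micro', 'awskey', 20))
```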

Infrastructure-as-code tools like Terraform and CloudFormation are industry standards that use declarative syntax to provision infrastructure. Boto3, on the other hand, shines in scripts that inspect and inventory AWS environments, and it powers dynamic inventories behind the scenes in Ansible.

The full capabilities of Boto3 give engineers many possibilities for automating work. Keep practicing; you will not learn everything in a day, but Boto3 will certainly be useful for many years. The good thing is it's easy to get started and fun to learn, so I hope this article helps you start your journey.
