
Things I don't like about AWS

Last Updated: 2024-06-09

Full disclaimer – I am an Amazon Web Services fanboy. I love their cloud offering, and I proudly hold 3 AWS certifications. Through my day job, I am also getting exposed to Azure. Yes, I know – Azure is a swear word amongst Amazonians, but the reality is that many companies do dabble in multi-cloud strategies. Some cloud providers are better at some things than others, and some features are just nicer than others, so I decided to start putting together a list of the cool (and not so cool) features I have spotted on both platforms.

Having said that – because I love the AWS service, I also feel I must point out where I think they need to improve. Even though Gartner puts them as a leader in the cloud space, there are still some things I think they can do better.

This blog post will be updated from time to time, so do come back to see the updated list. Do you have some items you’d like to add? Post them in the comments.


The summary below lists the high-level items as I discovered them, and whether each one is still an issue.

| Requirement | AWS | Azure |
| --- | --- | --- |
| See all resources on one screen | | |
| Generate Infrastructure-as-Code (IaC) templates | | |
| Administer a database from the portal | | |
| Websites from storage linked to a domain name | | |
| Lambda Python runtime is missing requirements.txt | | |
| CloudFormation cannot load stacks from a URL | | |
| Missing history in resource metadata | | |
| NAT Gateway is expensive | ? | |
| Lightsail is a separate AWS account | ? | |
| boto3 failing for weird reasons | | N/A |

See all resources on one screen

When you log onto Azure, you can see resources from every region, all on one page. This is great when you are playing around in the cloud platform: once you're done, you can simply go and delete it all.

In AWS? No. You have to switch to the region, and then switch to the specific service, to see what is in there. So if you're playing around and learning new services, do remember to clean it all up afterward, or you may end up with bill shock!

Update on 2021.10.01 – AWS now offers the ability to view all resources on one screen for EC2 Instances only.

Generate Infrastructure-as-Code (IaC) templates

Here Azure is also leading. You can build your environment, and right from the console, you can generate an ARM template. This is a great way to develop, package, and then deploy a consistent infrastructure to your production platform.

Sadly, AWS does not offer this. CloudFormation is good for deploying resources, but there is no tool to analyze a cloud account and generate CloudFormation templates from it. This is unfortunate, as Azure makes it very easy with pre-generated templates to help developers adopt the IaC mindset.

Administer a database from the portal

When you're using a PaaS-style database, be it Aurora or RDS, sometimes you need to poke a few SQL commands at the database. Azure offers a SQL Query Analyzer-style interface where you can log onto your SQL database straight from the Azure portal. AWS, however, does not have this. It is always a hassle to spin up a separate EC2 instance, configure security groups, install a web server, and install phpMyAdmin. Surely something as common as administering a SQL server could be a basic service offered by AWS.
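For Aurora Serverless specifically, the RDS Data API gets partway there: you can run SQL over an AWS API call, with no EC2 instance or database driver. A sketch, assuming hypothetical cluster and secret ARNs (substitute your own):

```python
def rows_as_dicts(columns, records):
    # Flatten Data API records ([{'stringValue': 'x'}, ...]) into plain dicts.
    def value(field):
        # Each field dict carries exactly one typed key, e.g. stringValue/longValue.
        return None if field.get("isNull") else next(iter(field.values()))
    return [dict(zip(columns, (value(f) for f in row))) for row in records]

def run_query(sql):
    import boto3
    # The ARNs below are placeholders for illustration only.
    client = boto3.client("rds-data")
    resp = client.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",
        database="mydb",
        sql=sql,
        includeResultMetadata=True,
    )
    columns = [c["name"] for c in resp.get("columnMetadata", [])]
    return rows_as_dicts(columns, resp.get("records", []))
```

It only helps for Aurora Serverless, though; plain RDS instances still leave you with the phpMyAdmin dance above.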

Websites from storage linked to a domain name

Hosting websites from S3 is a great feature. You can load all the HTML, JavaScript, CSS, and anything else your website may require into an S3 bucket, and then turn that S3 bucket into a website. But when you want to attach your domain name to that S3-hosted website, you'll find the only way to achieve this is to attach the S3 bucket to CloudFront, AWS's CDN solution. CloudFront allows you to attach your domain name to it, so while it would've been nice if S3 supported custom domains without the use of CloudFront, you, the customer, will have to cough up additional cash for Amazon to copy your content over to CloudFront and serve it all over the world.

I will add, CloudFront is a great service if you need to serve content all over the world and you don't mind spending a few extra dollars on your hosting. Smaller businesses, however, tend to operate within a single geographical region, so putting CloudFront in front of an S3 bucket may not always make sense. There are ways to achieve caching of content through HTTP headers without the need for a CDN.

  • 2021.06.20 – If you create the S3 bucket with the same name as the domain you'd like to host, you can use S3 hosting by redirecting your domain name with a CNAME to the S3 web URL. See this link for more details.
  • 2021.11.21 – This site is now hosted on CloudFront with S3. See this post for more information.
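Turning a bucket into a website is itself scriptable. A minimal sketch using boto3's put_bucket_website (the document names and helper are my own; the bucket must also allow public reads, which is not shown here):

```python
def website_config(index_doc="index.html", error_doc="error.html"):
    # Build the configuration dict expected by put_bucket_website.
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

def enable_website(bucket):
    import boto3
    # For the CNAME approach above, the bucket name must match the
    # domain you want to serve (e.g. www.example.com).
    boto3.client("s3").put_bucket_website(
        Bucket=bucket, WebsiteConfiguration=website_config()
    )

# Usage (requires AWS credentials):
#   enable_website("www.example.com")
```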

Lambda Python runtime is missing requirements.txt

I do plenty of programming in Python, and I like to schedule my code to run as Lambda functions. Every once in a while, I want to use a Python module that is not installed by default in the Lambda Python runtime. Alas, the option to install modules through a requirements.txt file does not exist. Instead, AWS suggests you download the modules, zip them up, and store the archive in an S3 bucket. That sounds like way too much work. Azure Functions have you covered: just specify the requirements.txt file, and the function will install all the modules for you.

CloudFormation cannot load stacks from a URL

CloudFormation allows you to create infrastructure from code, giving you consistency and control over what gets created right through the entire track, from development, to testing, and ultimately into your production environment. You may want to load code hosted on a repository somewhere, but unless that repository is an S3 bucket, you won't be able to use it. Instead, you have to copy the file from the URL to an S3 bucket first, before it can be consumed by CloudFormation. While I can recognize the security aspect around it, it does feel like a bit of a let-down for something as basic as just reading the stack from a URL. This is how you can do it in Azure.
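You can paper over this from a script: the create_stack API accepts a TemplateBody inline (up to 51,200 bytes), so you can fetch the template from any URL yourself and skip the S3 detour. A sketch, with helper names of my own:

```python
import urllib.request

def fits_inline(template_body, limit=51200):
    # CloudFormation's documented TemplateBody size limit.
    return len(template_body.encode("utf-8")) <= limit

def stack_from_url(stack_name, template_url):
    import boto3
    # Fetch the template ourselves, then hand CloudFormation the body inline.
    body = urllib.request.urlopen(template_url).read().decode("utf-8")
    if not fits_inline(body):
        raise ValueError("Template too large for TemplateBody; stage it in S3 instead.")
    boto3.client("cloudformation").create_stack(
        StackName=stack_name, TemplateBody=body
    )
```

Larger templates still have to be staged in S3 and passed via TemplateURL, so this only softens the limitation.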

Missing history in resource metadata

Tools like AWS Config are great at keeping track of what changed on a resource. AWS CloudTrail is great at keeping track of who made the change. The problem is you have to turn them on, which introduces additional costs. That's all fine. Every once in a while, I come across a resource in a system, and it would be great if I could simply see who created or modified the resource last (and when).

I would love it if AWS captured that detail by default for every resource, allowing me to see when the resource was created and by whom, as well as when it was last modified (and by whom). This should happen by default, regardless of whether CloudTrail or Config is turned on.

Go one step further, and put a link on the console of each service to take me directly to the Config page where I can view the resource changes.
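In the meantime, CloudTrail's lookup_events call can often answer the "who touched this?" question, at least for the last 90 days of management events. A sketch (helper names are mine):

```python
def creation_event(events):
    # Given CloudTrail events for one resource, the earliest event is a
    # reasonable guess at the creator; the latest shows the last change.
    if not events:
        return None
    oldest = min(events, key=lambda e: e["EventTime"])
    return oldest.get("Username"), oldest["EventTime"]

def events_for(resource_name, region="us-east-1"):
    import boto3
    # lookup_events only covers ~90 days of management events per region.
    client = boto3.client("cloudtrail", region_name=region)
    resp = client.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "ResourceName", "AttributeValue": resource_name}
        ]
    )
    return resp["Events"]

# Usage (requires AWS credentials):
#   print(creation_event(events_for("my-bucket")))
```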

NAT Gateway is expensive

Not everything needs a public IP. Sometimes you want to design something a bit more secure, using private subnets. The problem is that the EC2 instances on that subnet may still need internet access to download their patches. Using a NAT Gateway is one solution; the problem is that NAT Gateway is expensive. What's more, AWS does not offer any NAT Gateway allowance under the Free Tier model, so when you're starting out with AWS, don't forget to turn that NAT Gateway off, or you're going to be in trouble.

NAT Gateway is essentially an EC2 instance that is spun up in the background, with a public IP address. So you're paying per hour for that instance just to be there, in addition to the traffic charges.
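The arithmetic is easy to sketch. The rates below are illustrative, roughly the us-east-1 prices at the time of writing (check the current pricing page):

```python
# Illustrative us-east-1 rates - verify against current AWS pricing.
HOURLY_RATE = 0.045   # $ per NAT-gateway hour
PER_GB_RATE = 0.045   # $ per GB processed

def monthly_nat_cost(gb_processed, hours=730):
    # The hourly charge accrues even while the gateway sits idle.
    return hours * HOURLY_RATE + gb_processed * PER_GB_RATE

# An idle gateway already costs about $33/month before any traffic.
idle = monthly_nat_cost(0)
```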

It would be great if AWS offered a "Simple NAT" service and only charged for the data. I would be happy to accept that the outgoing IP address might change with every call. One way AWS could achieve this is by hosting its own NAT gateway in a separate VPC in each region and sharing that VPC with all customers. I could simply peer my VPC to theirs and gain internet access for my instances.

Lightsail is a separate AWS account

When you spin up resources in AWS Lightsail, even though the instances are hosted on the backend AWS EC2 infrastructure, AWS creates a separate AWS account ID through which the Lightsail system is managed. The implication is that while you have an instance in Lightsail, you won't be able to connect to other resources in your primary AWS account. IAM policies, for example, will not allow access from Lightsail into your main account unless you explicitly call it out. I would love to be able to assign an EC2 instance profile to a Lightsail instance and have it inherit a role in my main account.

Boto3 failing for weird reasons

boto3 is the Python library used to interact with AWS. Some of the issues I have experienced with boto3 are not directly related to the library itself, but are due to the way AWS has implemented its APIs.

Here's one that irked me today. I am writing a process that reads the tags from AWS S3 buckets. Simple enough; you'd do it with a piece of code like this:

```python
tags = boto3.client('s3').get_bucket_tagging(Bucket=bucket_name)['TagSet']
```

This works IF the tags are set. If there are no tags, boto3 raises an error (a ClientError with code NoSuchTagSet) that terminates the script. Why raise at all?? Why not just return an empty set, because, well, nothing is set? The same behaviour happens with s3.get_bucket_policy.