Five AWS Lambda Tricks to Save You Money and Improve Performance

Joe Coburn
7 min read · Jul 5, 2021

Hi, I’m Joe. I’m the founder of Remr.io — a lightning-fast key-value storage API designed for developers. I’d love you to try it out, but don’t worry, this isn’t a marketing pitch.

I built this tool to solve my own problems, and I used AWS to do it. Today I’m going to share the tips and tricks I used to get the most out of Lambdas, and show how you can use them too to speed up performance and reduce your AWS bill.

HMS AWS Fargate Container Ship.

But first, who am I? What makes me qualified to spring the AWS traps and reveal the dirty secrets nobody tells you? Well, I’m a former tech writer for MakeUseOf, a published author of a Python book, and I have a BSc in computer science. I currently work as a senior Python developer in the UK, where I provision, manage, monitor, diagnose, and fix services running in AWS, alongside developing and supporting the Python applications that run on them.

I’ve been bitten by AWS more times than I can remember, and have a good handle on what is important to startups, and what can be ignored (for now). Make sure you say hello on Twitter @butteryvideo.

That said, I do feel like Val Valentino, revealing the magic secrets. Let’s hope they don’t kick me out of the club.

All the Free Stuff

It’s worth noting that AWS has a very generous free tier for their Lambdas. In their words:

The AWS Lambda free usage tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.

That’s a lot of Lambdas! It’s a valid strategy to optimize the basics, keep an eye on your bills, and only fix things once they start costing you real money. I’m sure the AWS Solutions Architects are rolling their eyes right now, but in a fast-paced startup, sometimes iterating quickly is more important than doing things “perfectly” from the beginning.
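To put that in perspective: a function configured with 512 MB of memory gets 400,000 GB-seconds ÷ 0.5 GB = 800,000 seconds (roughly 220 hours) of free compute every month, on top of the 1M free requests.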

Lambda Secrets

An AWS Lamb(da) in its natural habitat.

AWS Lambdas are the bread and butter of any serverless workload. Blobs of computing that you can rent by the millisecond, without worrying about software updates, uptime, or kernels (although you can spin up a Lambda container if you want to make things complicated again).

Lambdas are billed by the millisecond, excluding spinup time. Spinup time is the time it takes for AWS to prepare the lambda to run your code — copying the code, finding some spare CPU, and provisioning everything required for the job. You don’t pay for this time, but you do have to wait for it before your logic can execute.

Lambda optimization takes several forms:

  • CPU allocation
  • Reserved concurrency (throttling)
  • Timeout
  • Keep Lambdas warm
  • Optimize your code

Tuning Lambda CPU Allocation

With Lambda’s per-millisecond billing, you only pay for the compute time you use, and the price per millisecond depends on the amount of memory you allocate. So where does CPU come into it?

Whenever you increase the memory available to a Lambda function, you gain a proportional slice of CPU to match. Provisioning extra memory makes each millisecond more expensive, but the function may execute much faster, so it can end up cheaper overall. You also tend to get better network performance; AWS doesn’t publish the exact figures, but “more memory means more throughput” is a reasonable rule of thumb.

You can change the memory allocation by visiting the AWS Console > Lambda > Functions > your_function_name > Configuration > General Configuration > Edit.

Lambda memory configuration.

Experiment with Lambda memory allocation and see what settings provide the best balance between cost and performance for your application.
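If you would rather script this than click through the console, the same setting can be changed with boto3. Here’s a minimal sketch; “my-function” is a placeholder, so swap in your own function name:

    import boto3

    lambda_client = boto3.client("lambda")

    # More memory also buys a bigger slice of CPU (and better networking).
    # MemorySize is in MB, anywhere from 128 to 10,240.
    lambda_client.update_function_configuration(
        FunctionName="my-function",  # placeholder name
        MemorySize=512,
    )

Try a few values against a representative workload and compare the reported duration and cost in your Lambda logs before settling on one.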

Lambda Reserved Concurrency

AWS Lambdas let you cap the maximum number of concurrent Lambda executions. What does this mean in English? It allows you to prevent more than X Lambdas from running at the same time.

This may sound counter-intuitive. After all, unlimited computing power with magic autoscaling is the whole point of serverless, right? Well yes, but only if you want to throw money at AWS! By capping the number of concurrent executions, you can ensure that your Lambdas won’t ever cost more than you’re prepared for, even if something goes wrong and they start spiraling out of control.

You can change this reserved concurrency by visiting the AWS Console > Lambda > Functions > your_function_name > Configuration > Concurrency > Edit.

Lambda reserved concurrency settings.

Select Reserved Concurrency and enter an appropriate amount — two is a good starting point (remember that functions executing in 50–100 milliseconds will complete their work very quickly).
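The same cap can be applied from code. A quick boto3 sketch, again with a placeholder function name:

    import boto3

    lambda_client = boto3.client("lambda")

    # Never allow more than two copies of this function to run at once.
    lambda_client.put_function_concurrency(
        FunctionName="my-function",  # placeholder name
        ReservedConcurrentExecutions=2,
    )

Any invocations beyond the cap are throttled, so pick a number that matches the traffic you actually expect.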

Lambda Timeouts

Each Lambda function lets you set a timeout. This is the maximum time (in minutes and seconds, up to a hard limit of 15 minutes) that your Lambda can execute for. After this, AWS will kill it and stop billing you.

This setting is perfect for keeping costs under control. By default, it is set to three seconds. Make sure you increase this for large workloads! By keeping the timeout dialed down to a sensibly low value, you can be confident a problem or unexpected delay won’t nuke your wallet with unnecessary costs (but remember, it needs to be long enough to let the Lambda finish executing).

Lambda timeout settings.

You can change this timeout by visiting the AWS Console > Lambda > Functions > your_function_name > Configuration > General Configuration > Edit. Enter a suitable timeout in minutes and seconds.
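If you manage your functions from scripts, the timeout can be set with boto3 too (the function name is a placeholder; the value is in seconds):

    import boto3

    lambda_client = boto3.client("lambda")

    # Allow up to 30 seconds before AWS kills the invocation.
    # The maximum allowed value is 900 seconds (15 minutes).
    lambda_client.update_function_configuration(
        FunctionName="my-function",  # placeholder name
        Timeout=30,
    )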

Lamb(da)s Are Best Served Warm (Avoid Cold Starts)

Remember that spinup time I mentioned earlier? Your customers may have to wait for this if your Lambdas are powering something like a web application. Optimizing this can lead to lightning-fast performance — it’s how I got the Remr.io API lambdas down to the 50ms range.

Whenever you run a Lambda from cold, AWS prepares the environment and gets all the code ready to run. This takes time, and the more work there is to do, the longer it takes. That work includes loading your dependencies and running any code outside of the Lambda handler.

A Lambda function keeping warm.

Once completed, however, AWS will keep this Lambda “warm” for a period of time. Like using a “Bain Marie” (warm water bath to you and me) to keep food warm, AWS keeps recently used Lambdas ready in a “warm” state.

You can try this out — run your lambda from “cold”, as in it hasn’t executed recently, or you’ve redeployed it. Take note of the time. Now immediately run it again — notice how much quicker it is the second time? The difference in time here is your cold start time.

Optimizing this cold start time can benefit you, often at little to no cost. AWS doesn’t share how long they keep a recently used Lambda warm for, as it varies depending on demand. It’s not unusual to see ranges from 5 minutes to 45 minutes.

One sneaky trick is to ping your Lambdas to keep them warm. AWS recommends Provisioned Concurrency for this, whereby they keep a number of Lambdas initialized and ready for you, but they send you a bill for the whole time they keep them ready — how rude!

Use a monitoring service such as Uptime Robot to ping your functions as a pseudo integration test. A scheduled CloudWatch Events (EventBridge) rule can do the same job from inside AWS, but where’s the fun in that?
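If you go down the ping route, your handler needs to recognise the warm-up requests and skip the real work. Here’s a rough sketch, assuming the scheduled ping sends a JSON payload containing a “warmer” key (the key name is just a convention, so match it to whatever your pinger actually sends):

    import json

    def handler(event, context):
        # Short-circuit warm-up pings so they don't run real business logic.
        if isinstance(event, dict) and event.get("warmer"):
            return {"statusCode": 200, "body": "warmed"}

        # ...normal request handling goes here...
        return {"statusCode": 200, "body": json.dumps({"ok": True})}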

Optimize Your Code

This final trick may not always be simple. By optimizing your application code, you can boost performance and reduce cold start time. One of the biggest decisions here is choosing the right language: languages such as Ruby or Python are far faster to start up than languages such as Java.

This makes more of a difference than you’d expect — the less work the Lambda has to do, including configuring the runtime environment, the quicker it is ready to execute the code for your customers.

You can also move code outside of the Lambda handler, where it stays warm between invocations. A good trick here is to import your modules or set up access to shared resources such as database connections at module level (being careful not to leak data by leaving behind sensitive or customer-specific objects).
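Here’s what that looks like in practice for a Python Lambda: expensive setup lives at module level, so it only runs on a cold start and is reused while the environment stays warm. The DynamoDB table and environment variable here are made-up examples:

    import os

    import boto3

    # Runs once per cold start, then reused across warm invocations.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))  # placeholder

    def handler(event, context):
        # Only per-request work happens here; the client above is already warm.
        item_id = event.get("id", "unknown")
        response = table.get_item(Key={"id": item_id})
        return response.get("Item", {})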

Lambda Performance Summary

In summary, here are my top tricks to dial in your Lambdas for the perfect cost/performance ratio:

  • Experiment with Lambda memory allocation and see what settings provide the best balance between cost and performance for your application.
  • Reserve concurrency to prevent financial ruin if your application goes rogue.
  • Set the timeout to a suitable level for your workload.
  • Keep Lambdas warm/avoid cold starts.
  • Optimize your code to execute faster/reduce cold start time.

If you haven’t already, make sure you sign up for my lightning-fast key-value storage API Remr.io. The free tier provides 1000 free requests per month with no credit card needed!

I had planned to cover AWS services SQS, CloudWatch, DynamoDB, API Gateway, ECS, and Fargate in this article, but realized it would be longer than War and Peace! Follow for more serverless tricks coming in this series, and let me know what you want to read next.

Make sure to give this post 50 claps to unlock an easter egg. /s


Joe Coburn

Python developer and published author living in the UK