© 2026 Serverless, Inc. All rights reserved.
Updated March 2026

The Ultimate Guide to
AWS Lambda

AWS Lambda is the most widely used serverless compute service. Run code without provisioning servers, pay only for what you use, and scale automatically from zero to tens of thousands of concurrent executions.

Deploy a Lambda Function · Read the Docs

What is AWS Lambda?

AWS Lambda is a serverless, event-driven compute service provided by Amazon Web Services. It lets you run code for virtually any application or backend service without provisioning or managing servers. You upload your code as a Lambda function, and AWS handles everything required to run and scale it with high availability across multiple Availability Zones.

Launched in 2014, AWS Lambda pioneered the Function-as-a-Service (FaaS) model and has since become the backbone of serverless architecture on AWS. It integrates natively with over 200 AWS services, including API Gateway, S3, DynamoDB, SQS, SNS, EventBridge, Kinesis, and Step Functions, making it the default compute layer for event-driven, cloud-native applications.

With Lambda, you write functions in your preferred language, attach event triggers, and deploy. AWS automatically provisions compute capacity, runs your code in isolated Firecracker micro-VMs, and scales from zero to thousands of concurrent executions in seconds. You pay only for the milliseconds your code runs.
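The programming model is small. A minimal Python handler, sketched below, shows the shape (the function and payload names are illustrative; Lambda calls whichever handler you configure):

```python
import json

def handler(event, context):
    """Entry point Lambda invokes for each event.

    `event` carries the trigger payload (e.g. an API Gateway request);
    `context` exposes runtime metadata such as remaining execution time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Wired to API Gateway or a Function URL, the returned dict becomes the HTTP response.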

AWS Lambda Key Features

AWS Lambda has evolved well beyond basic function execution. Here are the capabilities that make it the most powerful serverless compute platform.

Deployment

No Servers to Manage

No operating systems to patch, no instances to provision, no infrastructure to maintain. AWS handles the entire compute lifecycle for you.

Performance

Infinite Scalability

Scales instantly from zero to tens of thousands of concurrent executions. No capacity planning, no auto-scaling groups, no load balancers.

Cost

Pay Per Invocation

Billed per request and per millisecond of compute. No charge when idle. The generous free tier covers most small-to-medium workloads entirely.

Security

Secure by Default

Each function runs in an isolated Firecracker micro-VM on AWS Nitro. IAM integration for fine-grained permissions. Multi-AZ fault tolerance built in.

Performance

SnapStart

Up to 10x faster cold starts for Java, Python, and .NET. Uses cached snapshots of initialized execution environments. Free for Java; Python and .NET incur snapshot caching and restore charges.

Cost

ARM64 / Graviton2

20% lower cost and up to 34% better price-performance. All managed runtimes support both x86_64 and arm64 architectures.

Performance

Response Streaming

Stream responses up to 20 MB for faster time-to-first-byte. Ideal for server-side rendering, large API responses, and real-time data.

Connectivity

Function URLs

Built-in HTTPS endpoints for your Lambda functions without API Gateway. Supports IAM auth, CORS, and streaming. Simpler and free.

Deployment

Container Images

Deploy functions as Docker images up to 10 GB from Amazon ECR. Use your existing container toolchain for ML models, large dependencies.

New

Durable Functions

Stateful, long-running workflows lasting up to 1 year with built-in error handling, retries, and failure recovery. No Step Functions needed.

Monitoring

Built-in Observability

CloudWatch metrics, X-Ray tracing, and Application Signals for APM, all out of the box. Advanced logging with JSON structured logs.

Ecosystem

Layers & Extensions

Share code across functions with Layers. Integrate third-party monitoring, security, and governance tools with Lambda Extensions.

AWS Lambda Use Cases

From simple automations to high-throughput data pipelines, Lambda powers a wide variety of serverless applications.

REST & GraphQL APIs

Build scalable APIs that handle millions of requests. Pair with API Gateway or use Lambda Function URLs for simpler endpoints.

Data Processing & ETL

Process streams from Kinesis and DynamoDB, transform files uploaded to S3, or run event-driven ETL pipelines at any scale.

Scheduled Tasks & Cron

Run recurring jobs using EventBridge Scheduler: cleanup routines, report generation, data syncs, and automated DevOps workflows.

Real-Time File Processing

Resize images, transcode video, parse documents, or generate thumbnails automatically when files are uploaded to S3.

AI & ML Inference

Run ML models for real-time inference. Deploy models as container images (up to 10 GB) with GPU-optimized libraries.

Event-Driven Microservices

Build loosely coupled services communicating through SNS, SQS, and EventBridge. Scale each function independently.
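As a concrete sketch of the real-time file-processing pattern above, an S3-triggered handler receives the bucket and key in each event record; the actual transform (resize, transcode, parse) is left as a comment since it depends on the workload:

```python
from urllib.parse import unquote_plus

def handler(event, context):
    """Invoked by S3 ObjectCreated events; returns the objects it processed."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers keys URL-encoded (spaces arrive as '+', etc.)
        key = unquote_plus(record["s3"]["object"]["key"])
        # Real work goes here: download from S3, transform, upload the result.
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```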

How AWS Lambda Works

Lambda runs your code in response to events. Each invocation follows this lifecycle:

1

Event Trigger

An event source invokes your function: an HTTP request, S3 upload, DynamoDB stream, SQS message, schedule, or any of 200+ AWS integrations.

2

Initialize

Lambda provisions a secure Firecracker micro-VM, loads your code and dependencies, and runs initialization logic outside the handler.

3

Execute

Your handler function runs with the event payload. Lambda allocates CPU proportional to your memory setting (1,769 MB = 1 vCPU).

4

Respond & Reuse

The function returns a response. The execution environment stays warm for subsequent invocations, avoiding cold starts.
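The contrast between steps 2 and 4 explains a common Lambda idiom: put expensive setup at module level so it runs once per environment during Init, while the handler body runs per invocation and reuses that state. A minimal sketch (the TABLE_NAME environment variable is hypothetical):

```python
import os
import time

# Runs once per execution environment, during the Init phase (step 2):
# create SDK clients, load config, and open connections here.
COLD_START_AT = time.time()
CONFIG = {"table": os.environ.get("TABLE_NAME", "my-table")}

_invocations = 0

def handler(event, context):
    """Runs per invocation (step 3); sees state left by earlier warm calls."""
    global _invocations
    _invocations += 1
    return {
        "invocation": _invocations,              # > 1 on a warm environment
        "env_age_s": time.time() - COLD_START_AT,
        "table": CONFIG["table"],
    }
```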

Supported Languages & Runtimes

AWS Lambda provides managed runtimes for the most popular languages. All runtimes support both x86_64 and ARM64 (Graviton2) architectures. Any other language can be used via a custom runtime built on provided.al2023.

Runtime | Supported versions  | Latest identifier
Node.js | 20, 22, 24          | nodejs24.x
Python  | 3.10 – 3.14         | python3.14
Java    | 8, 11, 17, 21, 25   | java25
.NET    | 8, 10               | dotnet10
Ruby    | 3.2, 3.3, 3.4       | ruby3.4
Go      | via provided.al2023 | provided.al2023
Rust    | via provided.al2023 | provided.al2023
Custom  | any language        | provided.al2023

Note: Amazon Linux 2 reaches end-of-life on June 30, 2026. Migrate runtimes built on AL2 (e.g. python3.11, java17) to their AL2023 equivalents.

Why Use AWS Lambda

Pay-Per-Use Pricing

Only pay for compute time consumed, billed per millisecond. No charges when idle. The free tier covers 1M requests and 400K GB-seconds monthly, forever.

Instant Auto-Scaling

Scales from zero to tens of thousands of concurrent executions in seconds. No capacity planning, no auto-scaling groups, no load balancers to configure.

Zero Server Management

No operating systems to patch, no servers to provision, no infrastructure to maintain. AWS handles all compute lifecycle management.

Built-In Security & Fault Tolerance

Runs across multiple Availability Zones with automatic retries. Firecracker micro-VM isolation. IAM integration for fine-grained access control.

Faster Time to Market

Focus on business logic, not infrastructure. Deploy in seconds with the Serverless Framework. Iterate faster with preview deployments and instant rollbacks.

200+ AWS Integrations

Native triggers from API Gateway, S3, DynamoDB, SQS, SNS, EventBridge, Kinesis, CloudWatch, Cognito, IoT, and many more.

Why AWS Lambda is Essential to Serverless Architecture

A complete serverless application typically requires three core components:

Compute Service

AWS Lambda

Database Service

DynamoDB, RDS, Aurora

HTTP Gateway

API Gateway, Function URLs

Lambda fills the compute role and integrates natively with both database and gateway services. Together with API Gateway, DynamoDB, and RDS, it forms the foundation of serverless solutions on AWS. Its support for many languages and runtimes makes it accessible to a wide range of developers.

A distinctive architectural property of Lambda is that many instances of the same function, or of different functions from the same account, can execute concurrently. Whether concurrency spikes at certain hours or stays flat makes no difference to Lambda, and you are charged only for the compute actually consumed, making it ideal for workloads with large gaps between peak and baseline traffic.

AWS Lambda Limits & Quotas

Key limits to keep in mind when designing your Lambda architecture. Some can be increased via AWS support.

Resource                 | Limit                      | Notes
Memory                   | 128 MB – 10,240 MB         | 1 MB increments. 1,769 MB = 1 vCPU.
Timeout                  | 900 seconds (15 min)       | Hard limit. Use Step Functions for longer tasks.
Package size (.zip)      | 50 MB / 250 MB             | Compressed / uncompressed. Upload via S3 for larger.
Container image          | Up to 10 GB                | Uncompressed. Supports Docker images from ECR.
Ephemeral storage (/tmp) | 512 MB – 10,240 MB         | 512 MB included free. Configurable in 1 MB steps.
Concurrency              | 1,000 per region (default) | Can be raised to tens of thousands via quota request.
Payload (sync)           | 6 MB request / response    | Streamed responses up to 20 MB.
Payload (async)          | 256 KB                     | Each additional 64 KB chunk is billed as an extra request.
Layers                   | 5 per function             | Shared code/libraries across functions.
Env variables            | 4 KB total                 | Aggregate size of all environment variables.

AWS Lambda Pricing

Lambda charges based on requests and duration. The free tier is generous and never expires.

Free Tier (Always Free)

1M

requests / month

400K

GB-seconds / month

100 GiB

response streaming / month

Component               | x86 Price             | ARM / Graviton2
Requests                | $0.20 / 1M            | $0.20 / 1M
Duration                | $0.0000166667 / GB-s  | $0.0000133334 / GB-s
Provisioned Concurrency | $0.0000041667 / GB-s  | $0.0000033334 / GB-s
Ephemeral storage       | $0.0000000309 / GB-s above 512 MB (both architectures)

Example: API handling 10M requests/month

10M requests × $0.20/1M = $2.00

Avg 200ms @ 256 MB = 500,000 GB-s × $0.0000166667 = $8.33

Free tier credit (1M req + 400K GB-s): -$6.87

Total: ~$3.46/month (x86) or ~$3.14/month (ARM)

Prices for US East (N. Virginia). See the official AWS Lambda pricing page for current regional pricing and Savings Plan discounts (up to 17% off).
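The worked example above can be checked with a few lines of Python. Rates are taken from the table; the ~1¢ differences from the figures above come from rounding intermediate dollar amounts.

```python
def monthly_cost(requests, avg_ms, memory_mb,
                 duration_rate=0.0000166667,   # x86; ARM is 0.0000133334
                 request_rate=0.20e-6,         # $0.20 per 1M requests
                 free_requests=1_000_000,      # always-free tier
                 free_gb_s=400_000):
    """Estimate monthly Lambda cost after the always-free tier."""
    gb_s = requests * (avg_ms / 1000) * (memory_mb / 1024)
    request_cost = max(requests - free_requests, 0) * request_rate
    duration_cost = max(gb_s - free_gb_s, 0) * duration_rate
    return request_cost + duration_cost

# 10M requests/month, 200 ms average duration at 256 MB:
x86 = monthly_cost(10_000_000, 200, 256)
arm = monthly_cost(10_000_000, 200, 256, duration_rate=0.0000133334)
```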

Cold Starts & Performance

Cold starts are the most discussed Lambda limitation. Here is what actually happens and how to mitigate it.

50–200ms

Node.js / Python cold start

200ms–2s

Java / .NET cold start

~0ms

With SnapStart or Provisioned Concurrency

SnapStart

Available for Java, Python, and .NET. Caches a snapshot of the initialized execution environment for up to 10x faster cold starts. Free for Java; Python and .NET incur snapshot caching and restore charges.

Provisioned Concurrency (paid)

Pre-initializes a specified number of execution environments. Eliminates cold starts completely for latency-critical functions. Additional charge applies.

Architecture Choice

ARM64 (Graviton2) offers comparable or slightly better cold start performance vs x86 at 20% lower cost. Always benchmark for your specific workload.

Deploy AWS Lambda with the Serverless Framework

Three commands to go from zero to a deployed Lambda function on AWS.

1

Install

npm i -g serverless
2

Create a new service

serverless

This will show you several templates. Choose one that fits the language and use-case you want.

Serverless ⚡ Framework
Welcome to Serverless Framework V.4

Create a new project by selecting a Template
to generate scaffolding for a specific use-case.

? Select A Template: ...
> AWS / Node.js / Starter
  AWS / Node.js / HTTP API
  AWS / Node.js / Scheduled Task
  AWS / Node.js / SQS Worker
  AWS / Node.js / Express API
  AWS / Python / Starter
  AWS / Python / HTTP API
  AWS / Python / Scheduled Task
  AWS / Python / Flask API
  (Scroll for more)

After selecting a template, its files will be downloaded and you will be prompted to name your service.

3

Deploy

serverless deploy

The framework packages your code, generates CloudFormation, and deploys everything (Lambda functions, API endpoints, S3 triggers, and IAM roles) in one command.

Running AWS Lambda in Production

After deploying your Lambda functions, you need monitoring, security, and operational tooling to run them reliably.

Monitoring with CloudWatch

CloudWatch provides default metrics immediately after deployment: invocation count, duration, errors, and throttles. Set up CloudWatch Alarms for anomaly detection and use X-Ray for distributed tracing across services.

Secrets & Security

Store API keys and credentials in AWS Secrets Manager or SSM Parameter Store, never in code or environment variables. Use IAM roles with least-privilege policies. The Serverless Framework supports built-in secrets management.
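One pattern worth noting: fetch the secret once and cache it in the execution environment, so warm invocations skip the network call. A sketch using SSM Parameter Store (the parameter name is hypothetical; the optional client argument just makes the function testable without AWS access):

```python
_cache = {}

def get_secret(name, ssm_client=None):
    """Fetch a SecureString from SSM Parameter Store, caching it for the
    lifetime of the execution environment."""
    if name not in _cache:
        if ssm_client is None:
            import boto3  # deferred import: only paid for on a cache miss
            ssm_client = boto3.client("ssm")
        resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
        _cache[name] = resp["Parameter"]["Value"]
    return _cache[name]
```

The function's IAM role needs ssm:GetParameter on that parameter; if the secret rotates, the cache should also be given an expiry.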

Serverless Framework Dashboard

Functions deployed via Serverless Framework get out-of-the-box monitoring, alerts, CI/CD deployments, and provider management with no additional setup. Visit the Dashboard for real-time metrics, deployment history, and team collaboration.

When Lambda May Not Be the Best Fit

Lambda is excellent for event-driven and variable workloads, but there are scenarios where other options may be more cost-effective or practical.

High sustained throughput

For applications with consistently high, steady load, Lambda's per-invocation pricing can exceed the cost of EC2 instances or containers (ECS/Fargate). Run the numbers above a few million daily requests.

Long-running processes

Lambda has a hard 15-minute timeout. Tasks exceeding this, like video transcoding, large ML training, or batch jobs, need EC2, ECS, Step Functions, or Durable Functions.

Specialized hardware needs

Workloads requiring GPUs, high-memory instances, or specific OS configurations are better served by EC2 or SageMaker. Lambda offers up to 10 GB memory but no GPU access.

Ultra-low latency requirements

Cold starts add 50ms–2s of latency. While SnapStart and Provisioned Concurrency mitigate this, applications requiring sub-10ms response times may need always-on compute.

Links & Resources

Documentation

  • Serverless Framework Docs
  • AWS Provider Reference
  • AWS Lambda Documentation
  • AWS Lambda Pricing

Example Projects

  • HTTP Endpoint: Node.js
  • HTTP Endpoint: Python
  • REST API with DynamoDB: Python
  • Browse all examples

AWS Lambda FAQ

Common questions about AWS Lambda, serverless functions, and getting started.

What is AWS Lambda?
AWS Lambda is a serverless compute service from Amazon Web Services that lets you run code without provisioning or managing servers. You pay only for the compute time your code actually uses, and it scales automatically from zero to thousands of concurrent executions.
What is a Lambda function?
A Lambda function is a self-contained unit of code that runs in response to an event. You write a handler function in a supported language, configure a trigger (like an API request or S3 upload), and AWS handles everything else: provisioning, scaling, patching, and monitoring.
What are AWS Lambda cold starts?
A cold start happens when Lambda creates a new execution environment to handle a request. This adds initialization latency, typically 100ms–1s for most runtimes, but it can be several seconds for Java/.NET without optimization. Use SnapStart (Java, Python, .NET) to sharply reduce cold starts, or Provisioned Concurrency to eliminate them for latency-sensitive workloads.
How much does AWS Lambda cost?
AWS Lambda pricing is based on requests ($0.20 per 1M) and duration ($0.0000166667 per GB-second on x86). The free tier includes 1M requests and 400,000 GB-seconds per month, permanently. ARM/Graviton2 is 20% cheaper. Most low-to-moderate workloads stay within the free tier.
What is the maximum execution time for a Lambda function?
AWS Lambda functions have a maximum timeout of 15 minutes (900 seconds). For longer workflows, use AWS Step Functions to orchestrate multiple Lambda functions, or Lambda Durable Functions for stateful workflows lasting up to 1 year.
What languages does AWS Lambda support?
Lambda natively supports Node.js (20, 22, 24), Python (3.10–3.14), Java (8, 11, 17, 21, 25), .NET (8, 10), and Ruby (3.2–3.4). Go, Rust, C++, and any other language can be used via the provided.al2023 custom runtime. All runtimes support both x86_64 and ARM64 architectures.
Can I use Docker containers with AWS Lambda?
Yes. Lambda supports container images up to 10 GB deployed via Amazon ECR. You can use your existing container toolchain and base images. This is ideal for functions with large dependencies like ML models or scientific computing libraries.
What is the difference between AWS Lambda and EC2?
EC2 gives you full virtual servers that run continuously and require manual scaling and patching. Lambda runs individual functions on-demand, scales automatically, and charges per-use. Lambda is ideal for event-driven, variable workloads. EC2 is better for long-running processes or workloads that need specific OS-level control.
Is AWS Lambda open source?
No. AWS Lambda is a proprietary service available only within AWS. However, the Serverless Framework, the most popular tool for building on Lambda, is open source and available on GitHub.
Can AWS Lambda call other Lambda functions?
Yes. You can invoke one Lambda from another using the AWS SDK, send messages through Amazon SNS or SQS to trigger a second function, or use AWS Step Functions to orchestrate workflows across multiple functions.
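The SDK route in the answer above is a thin wrapper around the Invoke API; a hedged sketch (function names illustrative, the optional client argument is only for testability):

```python
import json

def invoke_function(name, payload, client=None, asynchronously=False):
    """Invoke another Lambda by name. 'Event' fires-and-forgets;
    'RequestResponse' waits for and returns the result."""
    if client is None:
        import boto3
        client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName=name,
        InvocationType="Event" if asynchronously else "RequestResponse",
        Payload=json.dumps(payload).encode(),
    )
    if asynchronously:
        return None  # 202 Accepted; no payload comes back
    return json.load(resp["Payload"])
```

For anything beyond a simple call chain, prefer SQS/SNS or Step Functions: direct synchronous chaining bills both functions while the caller waits.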

Start Building with AWS Lambda

Deploy your first Lambda function in minutes with the Serverless Framework. No infrastructure to manage, no servers to provision.

Get Started FreeView Documentation