Amazon DynamoDB is a fully managed NoSQL key-value database from AWS. It delivers single-digit millisecond performance at any scale, with automatic scaling, built-in replication, and zero server management.
DynamoDB replaces traditional database infrastructure with a fully managed service that handles scaling, replication, and backups automatically.
Store and retrieve JSON objects using simple key-value access patterns. Support for partition keys, sort keys, and secondary indexes for flexible querying.
Consistent, low-latency reads and writes at any scale. DynamoDB Accelerator (DAX) adds an in-memory cache layer for microsecond response times on hot data.
Capture a time-ordered feed of item-level changes. Trigger Lambda functions on every insert, update, or delete for real-time automation and event-driven architectures.
Multi-region, multi-master replication with minimal configuration. Serve reads and writes from the closest region for lower latency and higher availability.
ACID transactions across multiple items and tables. Choose between eventually consistent reads (default) and strongly consistent reads per query.
On-demand mode scales instantly with no capacity planning. Provisioned mode supports auto-scaling policies that adjust read and write capacity based on load.
DynamoDB distributes your data across many machines automatically, providing high availability and horizontal scaling without any server management.
You write items (JSON objects) to a table. Each item must include a primary key: either a single partition key, or a partition key combined with a sort key.
DynamoDB hashes the partition key to spread data across many storage nodes. This ensures even distribution and consistent performance as your dataset grows.
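DynamoDB's internal hash function is not public, but the idea can be sketched in a few lines. The example below is an illustrative model only (it uses MD5 and a fixed partition count, both assumptions): hashing the partition key spreads even sequential IDs roughly evenly across partitions.

```python
import hashlib

def assign_partition(partition_key: str, num_partitions: int) -> int:
    """Illustrative only: bucket a partition key onto one of N storage
    partitions by hashing it. DynamoDB's real hash is not public."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Sequential keys land on different partitions instead of piling
# up on one node.
buckets: dict[int, int] = {}
for user_id in (f"user-{n}" for n in range(10_000)):
    p = assign_partition(user_id, 8)
    buckets[p] = buckets.get(p, 0) + 1

print(sorted(buckets.items()))  # roughly 1,250 keys per partition
```

This is why a high-cardinality partition key matters: a key with few distinct values hashes to few partitions and concentrates traffic.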
Retrieve items by primary key, scan with filters, or query secondary indexes. DynamoDB returns results in single-digit milliseconds regardless of table size.
DynamoDB integrates tightly with the broader AWS ecosystem:

- **AWS Lambda**: trigger functions on every table change via DynamoDB Streams. The most common pattern for serverless event-driven applications.
- **Amazon S3**: export table data to S3 for analytics, or store references to S3 objects in DynamoDB items for large binary data.
- **Amazon Cognito**: authenticate and authorize access to individual DynamoDB records using Cognito identity pools.
- **Amazon Redshift**: export DynamoDB data to Redshift for large-scale analytics and business intelligence workloads.
- **AWS AppSync**: use DynamoDB as a data source for GraphQL APIs with built-in resolvers and real-time subscriptions.
- **Amazon EventBridge**: pipe DynamoDB stream events into EventBridge for cross-service orchestration and event routing.
The Serverless Framework makes it straightforward to provision DynamoDB tables alongside your Lambda functions. Define tables in the resources section of your serverless.yml and wire up DynamoDB Streams as Lambda event sources:
```yaml
service: users-api

provider:
  name: aws
  runtime: nodejs22.x

functions:
  getUser:
    handler: handler.getUser
    events:
      - httpApi:
          path: /users/{id}
          method: get
  onUserChange:
    handler: handler.onUserChange
    events:
      - stream:
          type: dynamodb
          arn:
            Fn::GetAtt: [UsersTable, StreamArn]

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: usersTable
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES
```

Running `serverless deploy` creates the DynamoDB table, Lambda functions, IAM roles, and all necessary permissions in a single CloudFormation stack. The framework also supports provisioned capacity, Global Tables, and secondary indexes through standard CloudFormation syntax.
DynamoDB eliminates all operational database tasks. No servers to provision, no kernel patches to apply, no storage volumes to manage. AWS handles hardware provisioning, software patching, cluster scaling, and backups. Teams spend their time building features instead of maintaining infrastructure.
In on-demand mode, DynamoDB scales read and write throughput instantly with no capacity planning. Tables handle sudden traffic spikes without throttling or manual intervention. Provisioned mode supports auto-scaling policies that adjust capacity based on utilization thresholds (20-90%), offering cost optimization for predictable workloads.
DynamoDB Global Tables provide multi-master, multi-region replication with minimal configuration. Reads and writes are served from the closest region, reducing latency for globally distributed users. Replication happens automatically with no ongoing maintenance required.
DynamoDB Streams capture every change to your table in real time. Combined with Lambda, this enables powerful automation: sending transactional emails, syncing data to search indexes, updating derived tables, running analytics pipelines, and implementing activity counters, all triggered automatically on every data change.
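A minimal sketch of what such a stream-triggered handler can look like (the welcome-email call is a hypothetical placeholder, but the event shape, `Records`, `eventName`, and the attribute-value format of `NewImage`, follows DynamoDB's documented stream record structure):

```python
def on_user_change(event, context):
    """Sketch of a Lambda handler wired to a DynamoDB stream.
    Stream records carry values in DynamoDB's attribute-value
    format, e.g. {"S": "alice"} for a string."""
    processed = []
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only react to newly created items
        new_image = record["dynamodb"]["NewImage"]
        email = new_image["email"]["S"]
        # send_welcome_email(email)  # hypothetical downstream action
        processed.append(email)
    return {"welcomed": processed}

# Example stream event with one INSERT and one MODIFY record:
sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"id": {"S": "1"}, "email": {"S": "a@example.com"}}}},
        {"eventName": "MODIFY",
         "dynamodb": {"NewImage": {"id": {"S": "2"}, "email": {"S": "b@example.com"}}}},
    ]
}
print(on_user_change(sample_event, None))  # only the INSERT is handled
```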
DynamoDB is an excellent fit for many serverless workloads, but these constraints are worth understanding before you commit.
DynamoDB is proprietary to AWS with no open-source equivalent. Migrating to another database requires significant effort, and features like DynamoDB Streams have no direct counterpart outside AWS.
DynamoDB cannot join data across tables. If a single operation requires data from multiple tables, you must make separate requests and combine results in application code. This is slower and more expensive than a relational database for join-heavy workloads.
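Concretely, the "join in application code" pattern looks like this sketch. The in-memory dicts stand in for two DynamoDB tables; a real application would issue a `GetItem` against one and a `Query` (or `BatchGetItem`) against the other, then merge the results by hand:

```python
# In-memory stand-ins for two DynamoDB tables.
users = {"u1": {"id": "u1", "name": "Alice"}}
orders_by_user = {"u1": [{"orderId": "o1", "total": 30},
                         {"orderId": "o2", "total": 12}]}

def get_user_with_orders(user_id: str) -> dict:
    """Emulate a relational join in application code: one request
    per table, merged by hand."""
    user = users[user_id]                      # request 1: users table
    orders = orders_by_user.get(user_id, [])   # request 2: orders table
    return {**user, "orders": orders}

print(get_user_with_orders("u1"))
```

Single-table design (storing users and their orders under one partition key) can avoid the second request entirely, at the cost of a more rigid data model.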
On-demand pricing is convenient for variable traffic, but costs increase quickly as average throughput grows. For steady, high-volume workloads, provisioned capacity with auto-scaling or reserved capacity is significantly cheaper.
Individual items cannot exceed 400 KB. For images, documents, or other large objects, store the data in S3 and keep a reference in DynamoDB.
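The S3-offload pattern can be sketched as follows. The size check here is approximate (DynamoDB counts the UTF-8 length of attribute names and values, not JSON bytes), and the field name `payload` and the bucket in `payloadRef` are illustrative assumptions:

```python
import json

MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's hard per-item limit

def prepare_item(item: dict) -> dict:
    """Sketch: if a serialized item would exceed 400 KB, keep only a
    reference in DynamoDB and store the payload in S3. The actual
    S3 upload is omitted; only the item reshaping is shown."""
    if len(json.dumps(item).encode("utf-8")) <= MAX_ITEM_BYTES:
        return item
    # Offload the oversized field and keep a pointer instead.
    slim = {k: v for k, v in item.items() if k != "payload"}
    slim["payloadRef"] = f"s3://my-bucket/{item['id']}"  # assumed bucket
    return slim

small = prepare_item({"id": "a", "payload": "x" * 10})
large = prepare_item({"id": "b", "payload": "x" * 500_000})
print("payloadRef" in small, "payloadRef" in large)  # False True
```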
DynamoDB has no built-in caching for hot-key access patterns. DynamoDB Accelerator (DAX) provides microsecond caching but is a separate, proprietary service with additional costs starting around $0.04/hour per node.
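For read-heavy hot keys, a lightweight application-side cache is one alternative to DAX. The sketch below uses a dict with a TTL; `fetch_from_dynamodb` is a hypothetical stand-in for a real `GetItem` call, instrumented so the cache's effect is visible:

```python
import time

_cache: dict = {}
TTL_SECONDS = 30

def fetch_from_dynamodb(key: str) -> dict:
    # Hypothetical stand-in for a real GetItem call; counts
    # invocations so cache hits are observable.
    fetch_from_dynamodb.calls += 1
    return {"id": key, "fetched_at": time.time()}
fetch_from_dynamodb.calls = 0

def cached_get(key: str) -> dict:
    """Read-through cache: serve hot keys from memory, fall back
    to the table when the entry is missing or expired."""
    entry = _cache.get(key)
    if entry and time.time() - entry["at"] < TTL_SECONDS:
        return entry["item"]
    item = fetch_from_dynamodb(key)
    _cache[key] = {"item": item, "at": time.time()}
    return item

cached_get("hot-key")
cached_get("hot-key")  # served from memory; no second table read
print(fetch_from_dynamodb.calls)  # 1
```

Unlike DAX, this caches per Lambda execution environment, so hit rates depend on how long environments stay warm.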
DynamoDB offers two capacity modes: on-demand (pay per request) and provisioned (pay per hour for reserved throughput). Both include a generous free tier.
- 25 GB of data storage
- 25 read capacity units (RCU)
- 25 write capacity units (WCU)
Plus 2.5M DynamoDB Streams read requests and 25 replication write units for Global Tables in two regions.
**On-demand pricing**

| Resource | Price (us-east-1) |
|---|---|
| Write request units | $1.25 / 1M writes |
| Read request units | $0.25 / 1M reads |
| Data storage | $0.25 / GB-month (25 GB free) |
| Continuous backups (PITR) | $0.20 / GB-month |
| On-demand backups | $0.10 / GB-month |
| Global Tables replicated writes | $1.875 / 1M writes |
| DynamoDB Streams reads | $0.02 / 100K reads (2.5M free) |
**Provisioned pricing**

| Resource | Price (us-east-1) |
|---|---|
| Write capacity unit | $0.00065 / hour per WCU |
| Read capacity unit | $0.00013 / hour per RCU |
| Data storage | $0.25 / GB-month (25 GB free) |
| Global Tables replicated WCU | $0.000975 / hour per WCU |
- 10M reads x $0.25/1M = $2.50/month
- 5M writes x $1.25/1M = $6.25/month
- 105 GB storage (25 GB free, 80 GB billed) = $20/month
- Continuous backups (PITR, 105 GB) = $21/month
- Total: ~$50/month (on-demand, single region)

Adding Global Tables roughly doubles the bill to ~$100/month once replicated writes, storage, and backups in a second region are counted. A minimal three-node DAX cluster adds roughly $90/month on top.
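The on-demand arithmetic above can be reproduced with a small helper. The rates are the us-east-1 figures quoted in the pricing table; treat them as a snapshot and check the official pricing page for current numbers:

```python
def on_demand_monthly_cost(reads: int, writes: int, storage_gb: float,
                           pitr: bool = True) -> float:
    """Rough monthly on-demand estimate using the us-east-1 rates
    quoted above; not a substitute for the official pricing page."""
    read_cost = reads / 1_000_000 * 0.25     # $0.25 per 1M read requests
    write_cost = writes / 1_000_000 * 1.25   # $1.25 per 1M write requests
    billable_gb = max(storage_gb - 25, 0)    # first 25 GB are free
    storage_cost = billable_gb * 0.25        # $0.25 per GB-month
    backup_cost = storage_gb * 0.20 if pitr else 0.0  # PITR per GB-month
    return read_cost + write_cost + storage_cost + backup_cost

# 10M reads, 5M writes, 105 GB with continuous backups:
print(round(on_demand_monthly_cost(10_000_000, 5_000_000, 105), 2))  # 49.75
```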
See the official DynamoDB pricing page for current regional rates and reserved capacity discounts.
Use DynamoDB when your access patterns are well-defined, each operation works with self-contained items, you need single-digit millisecond latency at any scale, or you want a fully managed database that pairs naturally with Lambda and the broader serverless ecosystem.
Consider alternatives when your workload relies heavily on relational joins across tables (use Amazon RDS), you need full SQL query capabilities, or your data model is highly relational. For graph-shaped data with complex relationship traversals, consider Amazon Neptune. If you want an open-source NoSQL alternative with less vendor lock-in, evaluate MongoDB or Apache Cassandra.
DynamoDB is a strong default for serverless workloads on AWS, but other databases may be a better fit depending on your query patterns, hosting requirements, or data model.
Best for workloads that require complex queries, JOINs across tables, and relational data modeling. Aurora offers MySQL and PostgreSQL compatibility with automatic scaling up to 128 TB.
A flexible document database with a rich query language and compound indexes. MongoDB Atlas offers managed hosting on AWS, GCP, and Azure, making it a strong choice for multi-cloud deployments.
A distributed wide-column database designed for multi-datacenter replication with tunable consistency. Cassandra is cloud-agnostic and open source, avoiding vendor lock-in.
A wide-column store optimized for analytics, time-series data, and large-scale IoT workloads. Bigtable handles petabytes of data with low latency but offers limited query flexibility compared to DynamoDB.
A managed graph database for workloads that involve complex relationship traversals, such as social networks, recommendation engines, and fraud detection.
An in-memory data store delivering sub-millisecond latency. Ideal for caching, session storage, leaderboards, and real-time analytics where persistence is secondary to speed.
How DynamoDB compares to other popular database options across key dimensions.
| Aspect | DynamoDB | RDS |
|---|---|---|
| Type | NoSQL key-value/document | Relational (SQL) |
| Schema | Schema-less | Fixed schema |
| Scaling | Automatic, horizontal | Manual, vertical (read replicas for reads) |
| Pricing | Per request or provisioned capacity | Per instance hour |
| Joins | Not supported | Full SQL JOIN support |
| Transactions | Supported (up to 100 items) | Full ACID transactions |
| Best for | High-scale, low-latency key-value lookups | Complex queries, relational data, reporting |
| Aspect | DynamoDB | MongoDB |
|---|---|---|
| Type | Managed NoSQL | Self-hosted or Atlas (managed) |
| Query language | PartiQL (SQL-compatible) or API | MongoDB Query Language (MQL) |
| Secondary indexes | GSI and LSI (limited) | Flexible compound indexes |
| Schema | Schema-less | Schema-less with optional validation |
| Hosting | AWS only | Any cloud or self-hosted |
| Pricing | Per request or provisioned | Instance-based (Atlas) or self-managed |
| Best for | Serverless apps on AWS | Flexible queries, multi-cloud |
| Aspect | DynamoDB | Cassandra |
|---|---|---|
| Management | Fully managed | Self-managed or DataStax Astra |
| Query language | PartiQL or API | CQL (Cassandra Query Language) |
| Replication | Managed Global Tables | Configurable replication factor |
| Vendor lock-in | AWS only | Cloud-agnostic |
| Pricing | Per request or provisioned | Infrastructure cost |
| Best for | AWS serverless, zero-ops | Multi-datacenter, tunable consistency |
| Aspect | DynamoDB | Cloud Bigtable |
|---|---|---|
| Type | Key-value and document | Wide-column store |
| Query flexibility | Secondary indexes, PartiQL | Row key scanning only |
| Triggers | DynamoDB Streams + Lambda | No native triggers |
| Pricing | Per request or provisioned | Per node hour ($0.65/hr) |
| Best for | Serverless apps, varied access patterns | Analytics, time-series, large-scale IoT |
Key service limits to keep in mind when designing your DynamoDB data model. Some limits are adjustable through AWS support.
| Resource | Limit |
|---|---|
| Item size | 400 KB max |
| Partition key length | 2,048 bytes max |
| Sort key length | 1,024 bytes max |
| Global secondary indexes (GSI) per table | 20 (adjustable) |
| Local secondary indexes (LSI) per table | 5 (cannot be changed after creation) |
| BatchWriteItem | 25 items per request |
| BatchGetItem | 100 items per request |
| TransactWriteItems / TransactGetItems | 100 items per transaction |
| Query or Scan result size | 1 MB per response (paginate for more) |
| Table size | Unlimited |
| Tables per account per region | 2,500 (adjustable) |
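The 1 MB response cap means Query and Scan callers must paginate by following `LastEvaluatedKey`. The loop below shows the pattern against `fake_scan`, a stand-in that mimics the real response shape with tiny three-item pages instead of 1 MB ones:

```python
def fake_scan(exclusive_start_key=None, page_size=3):
    """Stand-in for a DynamoDB Scan: returns at most page_size items
    plus a LastEvaluatedKey when more remain, mimicking the real
    response shape (real pages are capped at 1 MB)."""
    data = [{"id": str(n)} for n in range(8)]
    start = int(exclusive_start_key["id"]) + 1 if exclusive_start_key else 0
    page = data[start:start + page_size]
    response = {"Items": page}
    if start + page_size < len(data):
        response["LastEvaluatedKey"] = page[-1]
    return response

def scan_all():
    """Follow LastEvaluatedKey until the table is exhausted."""
    items, start_key = [], None
    while True:
        resp = fake_scan(exclusive_start_key=start_key)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if start_key is None:
            break
    return items

print(len(scan_all()))  # 8
```

The same loop works with a real client by swapping `fake_scan` for a Scan or Query call and passing the key through `ExclusiveStartKey`.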
Common questions about Amazon DynamoDB.
Deploy a DynamoDB table with Lambda functions in minutes using the Serverless Framework.