Cloudflare D1 — SQLite at the edge.
D1 is SQLite running at Cloudflare's edge. It supports standard SQL, automatic backups, and read replicas that route queries to the closest edge location (paid tier).
Free tier: 5GB storage, 5M row reads/day, 100k row writes/day. Paid tier (Workers Paid): 50GB storage, 25B row reads/month, 50M row writes/month + read replicas.
```toml
# wrangler.toml
[[d1_databases]]
binding = "DB"        # exposed to the Worker as env.DB
database_name = "my-app-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```
```sh
wrangler d1 create my-app-db   # creates the DB, prints its database_id
wrangler d1 execute my-app-db --local --file=migrations/0001_initial.sql    # local dev copy
wrangler d1 execute my-app-db --remote --file=migrations/0001_initial.sql   # real database
```
```
migrations/
  0001_initial.sql
  0002_add_indexes.sql
  0003_sessions.sql
```
Never modify existing migrations. Always add a new numbered file.
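Wrangler also ships built-in commands for this numbered-migrations workflow; a sketch against the `my-app-db` database from above (check `wrangler d1 migrations --help` for your version's exact flags):

```sh
# Create the next numbered file under migrations/
wrangler d1 migrations create my-app-db add_indexes

# Show which migrations have not been applied yet, then apply them
wrangler d1 migrations list my-app-db --local
wrangler d1 migrations apply my-app-db --local     # dev database
wrangler d1 migrations apply my-app-db --remote    # production database
```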
```sql
-- migrations/0001_initial.sql
CREATE TABLE IF NOT EXISTS users (
  id TEXT PRIMARY KEY,                              -- use UUID or nanoid
  email TEXT NOT NULL UNIQUE,
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);

CREATE TABLE IF NOT EXISTS sessions (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  data TEXT NOT NULL DEFAULT '{}',                  -- JSON blob
  expires_at INTEGER NOT NULL,
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);
```
```sql
-- migrations/0002_add_indexes.sql
-- Always index foreign keys and columns used in WHERE clauses
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_expires ON sessions(expires_at);
```
```ts
// Single row (null if no match)
const user = await env.DB
  .prepare('SELECT * FROM users WHERE email = ?')
  .bind(email)
  .first<User>();

// Multiple rows
const { results } = await env.DB
  .prepare('SELECT * FROM sessions WHERE user_id = ? AND expires_at > ?')
  .bind(userId, Math.floor(Date.now() / 1000))
  .all<Session>();

// Insert / update (meta includes last_row_id, rows_written, etc.)
const { success, meta } = await env.DB
  .prepare('INSERT INTO users (id, email) VALUES (?, ?)')
  .bind(crypto.randomUUID(), email)
  .run();

// Raw: rows as arrays of values, without column names (leaner than .all())
const rows = await env.DB
  .prepare('SELECT * FROM users')
  .raw();
```
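The `User` and `Session` type parameters in the queries above are not provided by D1; they are app-defined row shapes, checked at compile time only. A minimal sketch matching the migrations:

```ts
// Row shapes for the tables created in the migrations above.
// D1 returns plain objects, so these interfaces carry no runtime cost.
interface User {
  id: string;
  email: string;
  created_at: number; // unix seconds
}

interface Session {
  id: string;
  user_id: string;
  data: string; // JSON blob; parse with JSON.parse
  expires_at: number; // unix seconds
  created_at: number;
}
```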
```ts
// All statements execute atomically — all succeed or all fail
const results = await env.DB.batch([
  env.DB.prepare('INSERT INTO users (id, email) VALUES (?, ?)').bind(id, email),
  env.DB.prepare('INSERT INTO sessions (id, user_id, expires_at) VALUES (?, ?, ?)')
    .bind(sessionId, id, expiry),
]);
```
Use batch for any multi-step write — it's the D1 equivalent of a transaction.
```sql
-- Store JSON in a TEXT column (assumes a `preferences` TEXT column on
-- users; it is not part of the migrations above)
UPDATE users SET preferences = ? WHERE id = ?;

-- Query a JSON field with SQLite's JSON functions
SELECT json_extract(preferences, '$.theme') AS theme FROM users WHERE id = ?;
```
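On the Worker side this pattern is just `JSON.stringify` when binding and `JSON.parse` when reading back. A sketch with hypothetical helper names (`encodePrefs`/`decodePrefs` are not D1 APIs), tolerant of rows where the column is still NULL or holds malformed data:

```ts
// Serialize preferences before binding them into the UPDATE above.
function encodePrefs(prefs: Record<string, unknown>): string {
  return JSON.stringify(prefs);
}

// Parse the TEXT value read back; fall back to {} for NULL or bad JSON.
function decodePrefs(raw: string | null): Record<string, unknown> {
  if (raw === null) return {};
  try {
    return JSON.parse(raw) as Record<string, unknown>;
  } catch {
    return {};
  }
}
```

Usage: `.bind(encodePrefs(prefs), userId)` on the write path, `decodePrefs(...)` over the TEXT value on the read path.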
```sh
wrangler d1 execute my-app-db --local \
  --command="INSERT INTO users VALUES ('1', 'admin@example.com', unixepoch())"

# Or seed from a file
wrangler d1 execute my-app-db --local --file=seed.sql
```
When read replication is enabled on a database (paid tier), D1 serves reads from the closest Cloudflare location that has a replica; writes always go to the primary. Replica routing goes through the Sessions API, which also keeps the queries in a session sequentially consistent:

```ts
// Start a session; 'first-unconstrained' lets the first query hit any
// replica. Reads route to the nearest replica, writes go to the primary.
const session = env.DB.withSession('first-unconstrained');
const row = await session
  .prepare('SELECT * FROM users WHERE id = ?')
  .bind(id)
  .first();
```
Gotchas:

- Timestamps: `unixepoch()` (seconds), not ISO strings. Easier to compare and index.
- IDs: generate with `crypto.randomUUID()` in the Worker and pass it in.
- `.first()` returns `null` when nothing matches: not an error — check for null before using the result.
- `batch()` is atomic within one call, but not across multiple Worker requests.
- Local dev: `wrangler dev` creates a `.wrangler/state/v3/d1/` file. Run with `--remote` to test against the real DB.

| Use case | D1 | KV | R2 |
|---|---|---|---|
| Structured relational data | ✓ | ✗ | ✗ |
| User records, sessions, logs | ✓ | Sessions only | ✗ |
| Key-value lookups (<128KB) | ✗ | ✓ | ✗ |
| Config, feature flags | ✗ | ✓ | ✗ |
| Files, images, documents | ✗ | ✗ | ✓ |
| Large blobs (>1MB) | ✗ | ✗ | ✓ |
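The timestamp convention from the gotchas can be captured in a one-line helper (name hypothetical): `expires_at` holds unix seconds, so a liveness check is a single integer comparison, mirroring the `expires_at > ?` predicate in the session query above.

```ts
// True while the session's expires_at (unix seconds) is in the future.
function isSessionLive(expiresAt: number, nowMs: number = Date.now()): boolean {
  return expiresAt > Math.floor(nowMs / 1000);
}
```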