tech/cloudflare/d1

D1

Cloudflare D1 — SQLite at the edge.

production Cloudflare Workers, Wrangler CLI
improves: tech/cloudflare

Cloudflare D1

D1 is SQLite running at Cloudflare's edge. It supports standard SQL, automatic backups, and read replicas that route queries to the closest edge location (paid tier).

Free tier: 5GB storage, 5M row reads/day, 100k row writes/day. Paid tier (Workers Paid): 50GB storage, 25B row reads/month, 50M row writes/month + read replicas.

wrangler.toml binding

[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
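The binding surfaces on the Worker's env object. A minimal sketch of the typed handler, using a structural stand-in for the D1Database type that @cloudflare/workers-types normally provides:

```typescript
// Structural stand-in for D1Database (normally imported from
// @cloudflare/workers-types); just enough to type-check this sketch.
interface D1Database {
  prepare(query: string): { bind(...values: unknown[]): unknown };
}

// The property name must match binding = "DB" in wrangler.toml.
interface Env {
  DB: D1Database;
}

const worker = {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // env.DB is the database declared under [[d1_databases]]
    return new Response('ok');
  },
};

export default worker;
```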

Create and migrate

wrangler d1 create my-app-db                     # creates DB, prints database_id
wrangler d1 execute my-app-db --local  --file=migrations/0001_initial.sql
wrangler d1 execute my-app-db --remote --file=migrations/0001_initial.sql

Migration convention

migrations/
  0001_initial.sql
  0002_add_indexes.sql
  0003_sessions.sql

Never modify existing migrations. Always add a new numbered file.
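The zero-padded prefix is what keeps this ordering reliable: a plain lexicographic sort of the filenames matches numeric order. A tiny illustrative helper (not part of Wrangler):

```typescript
// Hypothetical helper: order migration files for sequential application.
// Zero-padded prefixes make lexicographic order equal numeric order.
function orderMigrations(files: string[]): string[] {
  return [...files].sort();
}

const ordered = orderMigrations([
  '0003_sessions.sql',
  '0001_initial.sql',
  '0002_add_indexes.sql',
]);
// ordered[0] is '0001_initial.sql'
```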

Schema patterns

-- migrations/0001_initial.sql
CREATE TABLE IF NOT EXISTS users (
  id        TEXT PRIMARY KEY,              -- use UUID or nanoid
  email     TEXT NOT NULL UNIQUE,
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);

CREATE TABLE IF NOT EXISTS sessions (
  id         TEXT PRIMARY KEY,
  user_id    TEXT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  data       TEXT NOT NULL DEFAULT '{}',  -- JSON blob
  expires_at INTEGER NOT NULL,
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);

-- migrations/0002_add_indexes.sql
-- Always index foreign keys and columns used in WHERE clauses
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_expires ON sessions(expires_at);
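The expires_at index above also supports periodic cleanup. A hedged sketch of an expiry sweep (the helper names are illustrative; db stands for the env.DB binding from wrangler.toml):

```typescript
// Unix-seconds cutoff matching the INTEGER expires_at column.
function nowUnixSeconds(now: number = Date.now()): number {
  return Math.floor(now / 1000);
}

// Sketch: delete expired sessions. The WHERE clause is served by
// idx_sessions_expires; `db` is the D1 binding (env.DB).
async function sweepExpiredSessions(db: {
  prepare(q: string): { bind(...v: unknown[]): { run(): Promise<unknown> } };
}): Promise<void> {
  await db
    .prepare('DELETE FROM sessions WHERE expires_at <= ?')
    .bind(nowUnixSeconds())
    .run();
}
```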

Querying in Workers

// Single row
const user = await env.DB
  .prepare('SELECT * FROM users WHERE email = ?')
  .bind(email)
  .first<User>();

// Multiple rows
const { results } = await env.DB
  .prepare('SELECT * FROM sessions WHERE user_id = ? AND expires_at > ?')
  .bind(userId, Math.floor(Date.now() / 1000))
  .all<Session>();

// Insert / update
const { success } = await env.DB
  .prepare('INSERT INTO users (id, email) VALUES (?, ?)')
  .bind(crypto.randomUUID(), email)
  .run();

// Raw — rows as arrays of values, without column names.
// Use .all() when you also need metadata (meta.duration, meta.rows_read, ...)
const rows = await env.DB
  .prepare('SELECT * FROM users')
  .raw();

Batch operations (atomic)

// All statements execute atomically — all succeed or all fail
const results = await env.DB.batch([
  env.DB.prepare('INSERT INTO users (id, email) VALUES (?, ?)').bind(id, email),
  env.DB.prepare('INSERT INTO sessions (id, user_id, expires_at) VALUES (?, ?, ?)').bind(sessionId, id, expiry),
]);

Use batch for any multi-step write — it's the D1 equivalent of a transaction.
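batch() resolves to one result per statement, in order. A sketch of checking them, using a structural stand-in for D1's per-statement result shape:

```typescript
// Structural stand-in for D1's per-statement result.
interface D1Result<T = unknown> {
  success: boolean;
  results: T[];
}

// True only if every statement in the batch reported success.
function allSucceeded(results: D1Result[]): boolean {
  return results.every((r) => r.success);
}
```

In practice a failing statement rejects the whole batch() promise and the batch is rolled back, so wrapping the call in try/catch is the usual pattern.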

JSON columns

-- Store JSON in a TEXT column (assumes a preferences TEXT column on users,
-- not shown in the schema above)
UPDATE users SET preferences = ? WHERE id = ?
-- Query a JSON field with SQLite's JSON functions
SELECT json_extract(preferences, '$.theme') AS theme FROM users WHERE id = ?
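On the Workers side, serialize before bind() and parse after reading. Small illustrative helpers (the preferences column and its shape are assumptions, as above):

```typescript
// Assumed shape of the JSON stored in the preferences column.
interface Preferences {
  theme?: string;
  [key: string]: unknown;
}

// Serialize for binding into the TEXT column.
function toJsonColumn(prefs: Preferences): string {
  return JSON.stringify(prefs);
}

// Parse a value read back from the column; tolerate SQL NULL.
function fromJsonColumn(raw: string | null): Preferences {
  return raw === null ? {} : (JSON.parse(raw) as Preferences);
}
```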

Seeding data

wrangler d1 execute my-app-db --local --command="INSERT INTO users VALUES ('1', 'admin@example.com', unixepoch())"

# Or seed from a file
wrangler d1 execute my-app-db --local --file=seed.sql

Read replicas (paid)

When read replication is enabled on the paid tier, D1 can serve read queries from replicas close to the request; writes always go to the primary. Reads opt in through the Sessions API.

// Opt in via the Sessions API; reads within a session are
// sequentially consistent
const session = env.DB.withSession('first-unconstrained');
const row = await session
  .prepare('SELECT * FROM users WHERE id = ?')
  .bind(id)
  .first();

Common Gotchas

- No interactive transactions: BEGIN/COMMIT are not supported from Workers; use batch() for atomic multi-statement writes.
- bind() throws on undefined values; pass null instead.
- first() returns null (not an error) when no row matches.
- All writes go through a single primary, so write-heavy workloads serialize there.
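One pitfall worth a guard: D1's bind() rejects undefined values. A tiny illustrative helper maps missing values to null, which becomes SQL NULL:

```typescript
// Illustrative guard: map undefined to null before binding,
// since D1's bind() accepts null but throws on undefined.
function toBindable<T>(value: T | undefined): T | null {
  return value === undefined ? null : value;
}

// Usage sketch (env.DB assumed):
// await env.DB.prepare('UPDATE users SET email = ? WHERE id = ?')
//   .bind(toBindable(maybeEmail), id)
//   .run();
```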

Choosing D1 vs KV vs R2

Use case                       D1     KV              R2
Structured relational data     yes    -               -
User records, sessions, logs   yes    sessions only   -
Key-value lookups (<128KB)     -      yes             -
Config, feature flags          -      yes             -
Files, images, documents       -      -               yes
Large blobs (>1MB)             -      -               yes

See Also