# Why I Use SQLite Savepoints for Offline Workout Logging

*Atomic multi-table writes in React Native without a heavy event system*
## The Problem
My workout app has one hard requirement: I must be able to log a set in under 5 seconds, even in airplane mode. That sounds like a UX thing. It’s actually a data integrity problem.
A “set” write isn’t one row. It touches multiple tables:
- `sets` (the actual log)
- `exercise_stats` (rolling bests / volume)
- `workout_sessions` (last activity time, totals)
- `pending_ops` (for later cloud backup)
If any of those partial writes land without the others, the UI gets weird fast. You see volume totals that don’t match the set list. Or a session shows “completed” with zero sets. I wasted two days chasing phantom bugs that were just inconsistent local state.
SQLite is my primary DB (offline-first). So the question became: how do I make multi-table writes atomic and resilient without turning the app into a mini event-sourcing project?
## Options I Considered
I tried three approaches before settling.
| Approach | Pros | Cons | Best For |
| --- | --- | --- | --- |
| 1) One big `BEGIN`/`COMMIT` transaction per user action | Simple mental model. Strong atomicity. | If a single statement fails, you either roll back everything (sometimes fine) or you end up retrying the whole transaction. Also easy to accidentally hold locks longer than intended. | Small write sets, low contention, “all-or-nothing” actions |
| 2) App-level compensating logic (manual rollback) | No DB lock complexity. You can “best-effort” update non-critical tables. | Lies to you. Any crash between steps leaves corruption. Also makes bug reports impossible, because the DB is now a state machine hidden in JS. | Non-critical derived data you can recompute |
| 3) Event log / append-only + background projector | Great audit trail. Easy sync. Rebuild derived tables. | Huge surface area on mobile: migrations, replay performance, versioning events, projector correctness. I’d ship slower. | Teams, complex sync, regulatory/audit needs |
Approach #1 was my initial default. It’s what I’d do on a backend.
But on mobile, I saw two practical issues:
1) I needed to treat some writes as “must succeed” (sets, pending_ops) and others as “nice to have” (exercise_stats). Rolling back everything because a derived stats update failed felt wrong.
2) I still wanted atomicity per set. Not per “entire workout save flow”. A workout can be 25 sets. If set #19 fails due to a constraint, the first 18 shouldn’t vanish.
That’s where savepoints fit.
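For context: a savepoint is a named, nestable marker inside a transaction. `ROLLBACK TO` undoes work back to the marker without ending the transaction, and `RELEASE` discards the marker. A minimal illustration of the semantics, using a throwaway table `t` of my own (not from the app schema):

```sql
CREATE TABLE t (v TEXT);

BEGIN;
INSERT INTO t VALUES ('critical');  -- before the savepoint
SAVEPOINT sp;
INSERT INTO t VALUES ('derived');   -- after the savepoint
ROLLBACK TO sp;                     -- undoes only 'derived'; transaction stays open
RELEASE sp;                         -- discards the marker
COMMIT;

SELECT v FROM t;                    -- returns only 'critical'
```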
## What I Chose (and Why)
I use a single outer transaction for the user action (log set). Inside it, I use a SQLite savepoint to scope failures. If a non-critical statement fails, I roll back to the savepoint, write the critical rows, and still return success to the UI.
This gave me three things I cared about, in order:
1) Atomicity for the set row + outbox row. Either both exist or neither exists.
2) Failure isolation. A bug in derived stats doesn’t corrupt the primary log.
3) Predictable performance. One transaction, a few statements. No background “repair” job.
What I gave up:
- More SQL ceremony. Savepoints add boilerplate.
- You still need to serialize writes (or you’ll hit `SQLITE_BUSY`). I run all writes through one queue.
### Schema constraints that made this work
I tightened constraints so failures happen early and loudly.
```sql
-- SQLite
PRAGMA foreign_keys = ON;

CREATE TABLE IF NOT EXISTS workout_sessions (
  id TEXT PRIMARY KEY,
  started_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  total_sets INTEGER NOT NULL DEFAULT 0
);

CREATE TABLE IF NOT EXISTS sets (
  id TEXT PRIMARY KEY,
  session_id TEXT NOT NULL,
  exercise_id TEXT NOT NULL,
  reps INTEGER NOT NULL CHECK (reps >= 0),
  weight_kg REAL NOT NULL CHECK (weight_kg >= 0),
  created_at INTEGER NOT NULL,
  FOREIGN KEY(session_id) REFERENCES workout_sessions(id) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS pending_ops (
  id TEXT PRIMARY KEY,
  entity TEXT NOT NULL,
  entity_id TEXT NOT NULL,
  op TEXT NOT NULL,
  payload_json TEXT NOT NULL,
  created_at INTEGER NOT NULL
);

CREATE TABLE IF NOT EXISTS exercise_stats (
  exercise_id TEXT PRIMARY KEY,
  best_weight_kg REAL NOT NULL DEFAULT 0,
  best_reps INTEGER NOT NULL DEFAULT 0,
  updated_at INTEGER NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_sets_session_created_at
  ON sets(session_id, created_at);
```
The big decision: exercise_stats is derived. It’s convenient, not sacred. If it fails to update, I can recompute later from sets.
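To make “recompute later” concrete, here is one way the rebuild could look as a single query over `sets`, assuming the schema above (the `WHERE true` is needed because SQLite’s upsert grammar requires a WHERE clause when the insert source is a SELECT; the timestamp expression is illustrative):

```sql
-- Rebuild exercise_stats from the primary log.
INSERT INTO exercise_stats (exercise_id, best_weight_kg, best_reps, updated_at)
SELECT
  exercise_id,
  MAX(weight_kg),
  MAX(reps),
  strftime('%s', 'now') * 1000   -- epoch ms, matching created_at's unit
FROM sets
WHERE true                        -- disambiguates SELECT + ON CONFLICT parsing
GROUP BY exercise_id
ON CONFLICT(exercise_id) DO UPDATE SET
  best_weight_kg = excluded.best_weight_kg,
  best_reps      = excluded.best_reps,
  updated_at     = excluded.updated_at;
```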
## Implementation: savepoint-scoped write
I’m on Expo + React Native. For SQLite I use expo-sqlite (the async API). The code below is real code I run.
```typescript
import * as SQLite from 'expo-sqlite';

const db = await SQLite.openDatabaseAsync('gym.db');
await db.execAsync('PRAGMA foreign_keys = ON;');

type LogSetInput = {
  setId: string;
  sessionId: string;
  exerciseId: string;
  reps: number;
  weightKg: number;
  now: number; // epoch ms
};

export async function logSet(input: LogSetInput) {
  const { setId, sessionId, exerciseId, reps, weightKg, now } = input;

  await db.execAsync('BEGIN IMMEDIATE;');
  try {
    // Savepoint isolates failures within this user action.
    await db.execAsync('SAVEPOINT sp_log_set;');

    // Critical write: the set itself.
    await db.runAsync(
      `INSERT INTO sets (id, session_id, exercise_id, reps, weight_kg, created_at)
       VALUES (?, ?, ?, ?, ?, ?);`,
      [setId, sessionId, exerciseId, reps, weightKg, now]
    );

    // Critical write: outbox row for later cloud backup.
    const payload = JSON.stringify({
      id: setId,
      sessionId,
      exerciseId,
      reps,
      weightKg,
      createdAt: now,
    });
    await db.runAsync(
      `INSERT INTO pending_ops (id, entity, entity_id, op, payload_json, created_at)
       VALUES (?, 'set', ?, 'upsert', ?, ?);`,
      [`op_${setId}`, setId, payload, now]
    );

    // Non-critical derived updates.
    // If this fails, I still want the set saved.
    try {
      await db.runAsync(
        `INSERT INTO exercise_stats (exercise_id, best_weight_kg, best_reps, updated_at)
         VALUES (?, ?, ?, ?)
         ON CONFLICT(exercise_id) DO UPDATE SET
           best_weight_kg = MAX(best_weight_kg, excluded.best_weight_kg),
           best_reps = MAX(best_reps, excluded.best_reps),
           updated_at = excluded.updated_at;`,
        [exerciseId, weightKg, reps, now]
      );
    } catch (e) {
      // Roll back to the savepoint. Note this also undoes the critical
      // inserts above (they came after the savepoint), which is why they
      // are re-applied below.
      await db.execAsync('ROLLBACK TO sp_log_set;');
      // Re-apply the critical statements after rollback-to-savepoint.
      // This looks redundant, but it keeps the rule:
      // derived failure never blocks set logging.
      await db.runAsync(
        `INSERT INTO sets (id, session_id, exercise_id, reps, weight_kg, created_at)
         VALUES (?, ?, ?, ?, ?, ?);`,
        [setId, sessionId, exerciseId, reps, weightKg, now]
      );
      await db.runAsync(
        `INSERT INTO pending_ops (id, entity, entity_id, op, payload_json, created_at)
         VALUES (?, 'set', ?, 'upsert', ?, ?);`,
        [`op_${setId}`, setId, payload, now]
      );
    }

    // Session update is small, but I still treat it as important for UX.
    await db.runAsync(
      `UPDATE workout_sessions
       SET updated_at = ?, total_sets = total_sets + 1
       WHERE id = ?;`,
      [now, sessionId]
    );

    await db.execAsync('RELEASE sp_log_set;');
    await db.execAsync('COMMIT;');
  } catch (e) {
    await db.execAsync('ROLLBACK;');
    throw e;
  }
}
```
That “re-apply critical statements after rollback-to-savepoint” looks silly. It is.
But it’s explicit, and it stopped a class of bugs where a derived-table constraint failure blocked set logging. In my case, I had a bad migration that briefly made `exercise_stats.updated_at` nullable in dev, then non-null in prod. The stats insert started throwing `SQLITE_CONSTRAINT: NOT NULL constraint failed: exercise_stats.updated_at`. Without savepoints, that would’ve rolled back the entire user action.
If you hate the re-apply pattern, the cleaner variant is: put derived work in a separate savepoint after the critical statements. Same idea, less duplication.
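A sketch of that ordering as a bare statement sequence (error handling elided; `:name` placeholders stand in for bound parameters):

```sql
BEGIN IMMEDIATE;

-- Critical writes live directly in the outer transaction.
INSERT INTO sets (id, session_id, exercise_id, reps, weight_kg, created_at)
VALUES (:id, :session_id, :exercise_id, :reps, :weight_kg, :now);

INSERT INTO pending_ops (id, entity, entity_id, op, payload_json, created_at)
VALUES (:op_id, 'set', :id, 'upsert', :payload, :now);

-- Only derived work sits inside the savepoint.
SAVEPOINT sp_derived;
INSERT INTO exercise_stats (exercise_id, best_weight_kg, best_reps, updated_at)
VALUES (:exercise_id, :weight_kg, :reps, :now)
ON CONFLICT(exercise_id) DO UPDATE SET
  best_weight_kg = MAX(best_weight_kg, excluded.best_weight_kg),
  best_reps = MAX(best_reps, excluded.best_reps),
  updated_at = excluded.updated_at;
-- On failure here: ROLLBACK TO sp_derived; the critical rows above are untouched.
RELEASE sp_derived;

UPDATE workout_sessions SET updated_at = :now, total_sets = total_sets + 1
WHERE id = :session_id;

COMMIT;
```

Because the critical inserts happen before the savepoint, `ROLLBACK TO sp_derived` can only ever discard derived work, and the re-apply disappears.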
### Serialize writes (or you’ll meet `SQLITE_BUSY`)
Expo SQLite is single-connection per db handle. Still, it’s easy to trigger concurrent writes from UI taps + background sync.
I use a tiny in-process queue.
```typescript
type Task<T> = () => Promise<T>;

class SqliteWriteQueue {
  private last: Promise<unknown> = Promise.resolve();

  enqueue<T>(task: Task<T>): Promise<T> {
    const run = this.last.then(task, task);
    this.last = run.then(
      () => undefined,
      () => undefined
    );
    return run;
  }
}

export const writeQueue = new SqliteWriteQueue();

// Usage:
// await writeQueue.enqueue(() => logSet(input));
```
No magic. Just preventing overlapping BEGIN IMMEDIATE calls.
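To sanity-check the serialization property outside the app, here is a standalone sketch that runs on plain Node (the queue is re-declared for self-containment; the task names and delays are made up):

```typescript
type Task<T> = () => Promise<T>;

class SqliteWriteQueue {
  private last: Promise<unknown> = Promise.resolve();
  enqueue<T>(task: Task<T>): Promise<T> {
    const run = this.last.then(task, task);
    this.last = run.then(() => undefined, () => undefined);
    return run;
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Even though task 'a' is the slowest, 'b' cannot start until 'a' settles,
// because each enqueued task chains on the previous one.
async function demoOrder(): Promise<string[]> {
  const q = new SqliteWriteQueue();
  const finished: string[] = [];
  await Promise.all([
    q.enqueue(async () => { await sleep(30); finished.push('a'); }),
    q.enqueue(async () => { await sleep(10); finished.push('b'); }),
    q.enqueue(async () => { finished.push('c'); }),
  ]);
  return finished; // ['a', 'b', 'c'] — serialized despite the delays
}

demoOrder().then((order) => console.log(order.join(',')));
```

A rejected task doesn’t wedge the queue either: `this.last.then(task, task)` runs the next task whether the previous one resolved or rejected.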
## How It Worked in Production
This was one of those changes where the win wasn’t “faster.” It was “less broken.”
Before savepoints, I saw two failure modes:
- Partial state: `sets` inserted, `pending_ops` missing. Sync never picked it up. That’s silent data loss.
- Over-rollback: a constraint error in `exercise_stats` rolled back the whole transaction. The UI showed the set for a split second (optimistic update), then it disappeared.
After moving to savepoints:
- The partial state cases dropped to 0 in my local stress run (10,000 set inserts with random injected failures in derived statements).
- p95 time for `logSet()` (including queue wait) went from 86ms to 41ms on my Pixel 7. Not because savepoints are faster, but because I stopped retrying whole transactions after non-critical failures.
- The most common production error changed from `SQLITE_CONSTRAINT` to `SQLITE_BUSY` (self-inflicted). The queue fixed that.
The surprising part: savepoints made my debugging simpler. When a derived update fails now, it fails loudly but doesn’t corrupt the primary log. I can ship a hotfix for stats without worrying about lost sets.
## When This Doesn’t Work
Savepoints aren’t a free lunch.
- If you truly need all tables consistent at all times (finance, inventory), don’t treat anything as “derived.” Roll back the whole transaction and force a retry.
- If you have multiple SQLite connections (or background threads) doing writes, savepoints won’t fix lock contention. You’ll still need a write coordinator.
- If your derived state is expensive to recompute, letting it drift is painful. My `exercise_stats` rebuild is a single query over `sets`. If yours needs complex business rules, you’ll feel it.
In those cases I’d either go back to strict single-transaction semantics or bite the bullet and implement an event log + projector.
## Key Takeaways
- I treat SQLite as my source of truth on mobile. That means I need real DB semantics, not JS “best effort.”
- Savepoints are perfect when you have critical rows and derived rows in the same user action.
- If you choose savepoints, decide upfront what’s allowed to drift. Write it down. Enforce it.
- Serialize writes. `SQLITE_BUSY` isn’t theoretical; it’ll happen the first time you add background sync.
- Measure the right thing: I cared more about “no corrupted state” than raw throughput.
## Closing
Savepoints let me keep offline set logging atomic without turning the app into a full event-sourcing setup. It’s not pretty, but it’s predictable.
If you’re building an offline-first mobile app: do you scope failures with SQLite savepoints, or do you enforce strict rollback for every multi-table write? And what data did you decide was allowed to drift?