
SQLITE IN DESKTOP APPS: THE DATABASE THAT NEEDS NO SERVER

TAGS: DATABASES / PYTHON / DESKTOP APPS READ_TIME: 12 MIN

Every desktop app needs to persist data. The web developer's reflex is to spin up PostgreSQL or MySQL. For a single-user desktop app, that is overkill by multiple orders of magnitude: it requires a running server, user configuration, connection management, and complexity that adds nothing in a single-user context. SQLite is the correct answer — one file on disk, zero configuration, zero server, zero maintenance.

PROJECT_STATUS: STABLE

Stack: Python · SQLite · JSON
Reference project: FocOs — data persistence
Goal: Robust persistence without a server — one file on disk, years of development without touching the schema

01. THE PROBLEM IT SOLVES

PostgreSQL and MySQL are designed for concurrent multi-user access, high availability, and distributed workloads. In a personal desktop app, none of those properties apply — yet you drag along the entire infrastructure. SQLite solves exactly the problem you actually have: reliable persistence for one user, no external process, no configuration, and the end user never knows a database exists.

// Non-technical explanation

Imagine you need to save a shopping list. You have two options: hire a professional chef with an industrial kitchen to keep it, or simply write it in a notebook. PostgreSQL is the chef. SQLite is the notebook. For a shopping list, the notebook is the right answer — and you can carry it in your pocket.

02. FOCOS SCHEMA IN SQLITE

FocOs uses four core tables. The two PRAGMAs at initialization are non-negotiable: WAL journal mode dramatically improves write performance, and foreign_keys=ON enables the referential integrity that SQLite leaves disabled by default. One subtlety: journal_mode=WAL persists in the database file itself, but foreign_keys is a per-connection setting — every new connection must turn it on again.

# main.py — database initialization

import sqlite3
from pathlib import Path

DB_PATH = Path('data/focos.db')

def init_db():
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA journal_mode=WAL") # Faster writes
    conn.execute("PRAGMA foreign_keys=ON") # Referential integrity

    conn.executescript("""
        CREATE TABLE IF NOT EXISTS projects (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL UNIQUE,
            status TEXT DEFAULT 'active',
            meta TEXT DEFAULT '{}'
        );

        CREATE TABLE IF NOT EXISTS tasks (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            project_id INTEGER REFERENCES projects(id),
            title TEXT NOT NULL,
            status TEXT DEFAULT 'pending',
            priority INTEGER DEFAULT 1,
            created_at TEXT DEFAULT (datetime('now'))
        );

        CREATE TABLE IF NOT EXISTS workspaces (
            ws_id INTEGER PRIMARY KEY,
            project_id INTEGER REFERENCES projects(id),
            layout TEXT DEFAULT 'default',
            state TEXT DEFAULT '{}'
        );

        CREATE TABLE IF NOT EXISTS sessions (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            project_id INTEGER REFERENCES projects(id),
            date TEXT DEFAULT (date('now')),
            duration INTEGER DEFAULT 0,
            notes TEXT DEFAULT ''
        );

        INSERT OR IGNORE INTO workspaces(ws_id) VALUES (1),(2),(3);
    """)
    conn.commit()
    conn.close()
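Because foreign_keys resets on every new connection, it is easy to centralize the PRAGMA in one place. A minimal sketch — the `get_conn` helper is hypothetical, not part of FocOs:

```python
import sqlite3
from pathlib import Path

DB_PATH = Path('data/focos.db')

def get_conn() -> sqlite3.Connection:
    # journal_mode=WAL persists in the database file, but
    # foreign_keys must be re-enabled on every connection
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA foreign_keys=ON")
    return conn
```

Every CRUD function then calls get_conn() instead of sqlite3.connect(DB_PATH), and referential integrity is guaranteed everywhere.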

03. FULL CRUD — NO ORM, NO MAGIC

The pattern is consistent across all operations: open connection, execute, commit if writing, close in the finally block. No ORM, no unnecessary abstraction — the SQL is visible, debuggable, and predictable. Soft delete instead of real DELETE preserves history and avoids irreversible mistakes.

# CREATE
def create_project(self, name: str, status: str = 'active') -> dict:
    conn = sqlite3.connect(DB_PATH)
    try:
        cur = conn.execute(
            "INSERT INTO projects(name, status) VALUES(?,?)",
            (name, status)
        )
        conn.commit()
        return { 'ok': True, 'id': cur.lastrowid }
    except sqlite3.IntegrityError:
        return { 'ok': False, 'error': f"Project '{name}' already exists" }
    finally:
        conn.close()

import json # the meta column holds serialized JSON

# READ
def get_projects(self) -> list:
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT id, name, status, meta FROM projects ORDER BY id DESC"
        ).fetchall()
    finally:
        conn.close()
    return [
        { 'id': r[0], 'name': r[1], 'status': r[2],
          'meta': json.loads(r[3] or '{}') }
        for r in rows
    ]

# UPDATE
def update_project_status(self, project_id: int, status: str) -> dict:
    conn = sqlite3.connect(DB_PATH)
    try:
        conn.execute(
            "UPDATE projects SET status=? WHERE id=?",
            (status, project_id)
        )
        conn.commit()
    finally:
        conn.close()
    return { 'ok': True }

# DELETE (soft delete — change status, never drop rows)
def archive_project(self, project_id: int) -> dict:
    return self.update_project_status(project_id, 'archived')
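If the positional indexing in get_projects (r[0], r[1], ...) gets unwieldy as tables grow, the standard library's sqlite3.Row row factory gives name-based access with no ORM. A minimal sketch using an in-memory database for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')   # in-memory DB for illustration
conn.row_factory = sqlite3.Row       # rows become addressable by column name
conn.execute("CREATE TABLE projects(id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO projects(name) VALUES('FocOs')")
row = conn.execute("SELECT id, name FROM projects").fetchone()
print(row['name'])  # FocOs — by name, not by position
conn.close()
```

The SQL stays visible and debuggable; only the row-to-dict mapping gets less fragile.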

04. JSON INSIDE SQLITE — THE META FIELD

SQLite has no native JSON column type — the meta field is stored as TEXT and serialized/deserialized in application code. The advantage is that meta can grow indefinitely without altering the schema. The rule is clear: data that requires querying goes in dedicated columns; flexible, evolving data goes in meta.

import json

# Store complex metadata in a single field
meta = {
    'type': 'software',
    'stack': ['Python', 'JavaScript'],
    'philosophy': 'Focus is the primary function',
    'completion': 85,
}

conn.execute(
    "UPDATE projects SET meta=? WHERE id=?",
    (json.dumps(meta, ensure_ascii=False), project_id)
)

# Read and deserialize
row = conn.execute(
    "SELECT meta FROM projects WHERE id=?", (project_id,)
).fetchone()
meta = json.loads(row[0]) if row else {}
print(meta['completion']) # 85

Data type                             | Where it goes     | Reason
ID, status, dates, foreign keys       | Dedicated column  | Needed in WHERE, ORDER BY or JOIN clauses.
Stack, philosophy, completion, config | meta field (JSON) | Read-only access. Can grow without migration.
Rapidly evolving data                 | meta field (JSON) | Add fields without ALTER TABLE or migration scripts.
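The "read-only" rule has an escape hatch worth knowing: SQLite ships JSON functions (built in by default since SQLite 3.38, a compile-time option before that), so an occasional query inside meta is possible without schema changes. A sketch with an in-memory database:

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')   # in-memory DB for illustration
conn.execute("CREATE TABLE projects(id INTEGER PRIMARY KEY, meta TEXT)")
conn.execute(
    "INSERT INTO projects(meta) VALUES(?)",
    (json.dumps({'completion': 85, 'type': 'software'}),)
)
# json_extract reads a value out of the TEXT column directly in SQL
row = conn.execute(
    "SELECT json_extract(meta, '$.completion') FROM projects"
).fetchone()
print(row[0])  # 85
conn.close()
```

If a meta field ends up in WHERE clauses regularly, that is the signal to promote it to a dedicated column.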

05. AUTOMATIC BACKUP — ONE LINE

SQLite is a file. Backing it up is copying that file. No dumps, no exports, no external tooling. One caveat with WAL mode: recent writes may still live in the companion -wal file, so checkpoint first to fold them into the main database before copying. The strategy in FocOs is to call backup_db() at the start of every work session — if something gets corrupted, that day's backup is immediately available.

import shutil
import sqlite3
from datetime import datetime

def backup_db(self) -> dict:
    # Fold pending WAL writes into the main file before copying
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
    conn.close()
    date = datetime.now().strftime('%Y-%m-%d')
    backup = DB_PATH.parent / f'focos_backup_{date}.db'
    shutil.copy2(DB_PATH, backup)
    return { 'ok': True, 'path': str(backup) }

# Call at the start of every session:
# backup_db() — if corruption happens, the day's backup is there
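One file per day accumulates over months. A small retention helper can cap the pile — prune_backups is a hypothetical addition, not part of FocOs, and it relies on the ISO date in the filename making a plain sort chronological:

```python
from pathlib import Path

def prune_backups(directory: Path, keep: int = 7) -> list:
    # ISO dates (focos_backup_YYYY-MM-DD.db) sort chronologically,
    # so the oldest backups come first in a plain sort
    backups = sorted(directory.glob('focos_backup_*.db'))
    stale = backups[:-keep] if keep else backups
    for path in stale:
        path.unlink()
    return [p.name for p in stale]   # names of the deleted files
```

Calling prune_backups(DB_PATH.parent) right after backup_db() keeps the last week of daily backups and nothing more.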

-- CONCLUSION

SQLite with the meta JSON pattern eliminates the need for schema migrations in fast-evolving projects. Structured fields go in dedicated columns for fast querying. Flexible data goes in the meta field without migration. For a single-user desktop app, this approach holds up for years of development without ever touching the database structure.
