
CONTEXT AS A SUPERPOWER: AUTOMATIC STATE INJECTION IN EVERY LLM MESSAGE

TAGS: LLMs / PRODUCTIVITY / PROMPT ENGINEERING / PYTHON READ_TIME: 12 MIN

The difference between a generic LLM and an assistant that genuinely knows your work is not the model. It is the context you feed it. But writing context manually in every message is slow, inconsistent, and easy to forget. FocOs solves this by automatically injecting the active project context into every message sent to the LLM — without the user writing anything extra, without omissions, without cross-session inconsistencies.

PROJECT_STATUS: STABLE

Stack: Python · JavaScript · any LLM
Reference project: FocOs — context system
Goal: Transform a generic LLM into an assistant that knows exactly where you are and what you are building

01. THE PROBLEM IT SOLVES

Without automatic context, the LLM does not know which project you are in, what task you are working on, what decisions have been made, or what the current system state is. The user ends up repeating context in every message — consuming tokens, time, and attention. Or worse: they skip it, and the LLM produces generic responses that do not apply to the specific problem.

// Non-technical explanation

Imagine you hire an expert consultant. Every time you call them, they have total amnesia — they remember nothing from the last call. You have two options: explain everything from scratch every time (exhausting), or hand them a one-page brief at the start of each call that gets them up to speed in 30 seconds. FocOs generates that brief automatically and injects it into every message before it reaches the LLM.

02. THE STAMP — THE CONTEXT SEAL

FocOs builds a compressed single-line identifier called the Stamp that is prepended to every message. In 80-120 characters it communicates everything the LLM needs to respond with precision.

# Stamp format:
[F|HH:MM TZ|Day|DX/1096|Project|Task|PX|WS]

# Fields:
# F = User identifier (Frank)
# HH:MM TZ = Local time with timezone
# Day = Abbreviated weekday
# DX/1096 = KayrOs day (how much of the horizon has elapsed)
# Project = Active project name
# Task = Active task or cell ID
# PX = Pomodoro number in current session
# WS = Active workspace (WS1, WS2, WS3)

# Real examples:
[F|10:30 COT|Thu|D4/1096|TeliOs|TELIO-U003|P1|WS1]
[F|14:45 COT|Tue|D18/1096|FocOs|FOCOS-B012|P3|WS2]
[F|09:00 COT|Mon|D1/1096|Novel-Cycles|CHAP-07|P1|WS1]
[F|22:30 COT|Fri|D365/1096|InfoGraTech|POST-14|P2|WS3]
Field     Example     Purpose
F         F           User identifier. Anchors the assistant's persona to a specific operator.
HH:MM TZ  10:30 COT   Local time with timezone. Enables time-sensitive responses.
DX/1096   D4/1096     KayrOs horizon day. Positions the LLM within the project lifecycle.
Project   TeliOs      Active project name. Defines the response domain.
PX        P1          Session pomodoro count. Signals operator cognitive load level.
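Because the format is fixed and pipe-delimited, the stamp is also machine-readable. A minimal parsing sketch (parse_stamp is a hypothetical helper for illustration, not part of FocOs):

```python
def parse_stamp(stamp: str) -> dict:
    """Split a stamp like '[F|10:30 COT|Thu|D4/1096|TeliOs|TELIO-U003|P1|WS1]'
    into named fields. Raises ValueError on malformed input."""
    parts = stamp.strip('[]').split('|')
    if len(parts) != 8:
        raise ValueError(f'expected 8 fields, got {len(parts)}')
    # 'D4/1096' -> elapsed day 4 of a 1096-day horizon
    day_elapsed, day_total = parts[3].lstrip('D').split('/')
    return {
        'user': parts[0],
        'time': parts[1],
        'weekday': parts[2],
        'day': int(day_elapsed),
        'horizon': int(day_total),
        'project': parts[4],
        'task': parts[5],
        'pomodoro': parts[6],
        'workspace': parts[7],
    }
```

The same fixed field order that makes the stamp cheap for the LLM to read makes it trivial to validate before injection.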

03. BUILDING THE STAMP IN PYTHON

The get_context() function in FocOs builds the stamp and the complete system prompt injected into every LLM call. It queries the local database to retrieve the real state of the active workspace — never a cached assumption.

# main.py — FocOsAPI
# (assumes module-level: import sqlite3, json — and DB_PATH set to the FocOs database file)

def get_context(self) -> dict:
    '''
    Builds the full context for the active project.
    Automatically injected into every LLM call.
    '''
    import datetime
    conn = sqlite3.connect(DB_PATH)

    # Active workspace data
    ws = conn.execute(
        'SELECT project_id FROM workspaces WHERE ws_id=1'
    ).fetchone()
    project_id = ws[0] if ws else None

    # Project data
    project = None
    if project_id:
        row = conn.execute(
            'SELECT name, status, meta FROM projects WHERE id=?',
            (project_id,)
        ).fetchone()
        if row:
            project = {
                'name': row[0],
                'status': row[1],
                'meta': json.loads(row[2] or '{}'),
            }

    # Active task
    task = conn.execute(
        'SELECT title FROM tasks WHERE status="active" LIMIT 1'
    ).fetchone()
    conn.close()

    # Build stamp
    now = datetime.datetime.now()
    hora = now.strftime('%H:%M')
    dia = now.strftime('%a')[:3].capitalize()
    project_name = project['name'] if project else 'No project'
    task_name = task[0][:20] if task else 'No task'
    day_num = self._get_kayros_day()

    # P1/WS1 are hardcoded here for brevity; the full build reads the
    # pomodoro counter and active workspace the same way as the task above.
    stamp = f'[F|{hora} COT|{dia}|D{day_num}/1096|{project_name}|{task_name}|P1|WS1]'

    # Full system prompt with context
    meta = project['meta'] if project else {}
    system = f'''You are Chronos — Frank's development assistant.
Active context: {stamp}

PROJECT: {project_name}
STATUS: {project["status"] if project else "unknown"}
ACTIVE TASK: {task_name}
TYPE: {meta.get("tipo", "general")}
STACK: {meta.get("stack", "undefined")}
PHILOSOPHY: {meta.get("filosofia", "")}

Always respond in the user's language.
Prioritize practical solutions over theory.
If you detect an error, flag it before answering.
Maximum 3 options when alternatives exist.
'''

    return {
        'stamp': stamp,
        'system': system,
        'project': project_name,
        'task': task_name,
        'day_num': day_num,
    }

04. INJECTING CONTEXT INTO EVERY MESSAGE

send_to_llm() calls get_context() automatically. The user never writes context manually — it arrives at the LLM on every message without any intervention.

def send_to_llm(self, message: str, history: list = None) -> dict:
    cfg = self.get_llm_config()
    provider = cfg.get('provider', 'gemini')
    model = cfg.get('model', 'gemini-2.0-flash')

    # AUTOMATIC CONTEXT — always present
    ctx = self.get_context()
    system = ctx['system']
    stamp = ctx['stamp']

    # The message reaching the LLM includes the stamp
    # e.g.: '[F|10:30 COT|Thu|D4/1096|TeliOs|TELIO-U003|P1|WS1]\n\nhow do I implement the pub/sub bus?'
    full_message = f'{stamp}\n\n{message}'

    messages = []
    if isinstance(history, list):
        for h in history:
            if isinstance(h, dict):
                messages.append({
                    'role': h.get('role', 'user'),
                    'content': h.get('content', '')
                })
    messages.append({'role': 'user', 'content': full_message})

    # Dispatch to provider with contextualized system prompt
    return self._dispatch_llm(provider, model, system, messages, cfg)
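_dispatch_llm() is provider-specific and not shown here. As a hedged illustration of what it works with — assuming an OpenAI-style chat schema — the payload it would send can be assembled like this (build_chat_payload is a hypothetical helper, not the actual FocOs method):

```python
def build_chat_payload(model: str, system: str, messages: list) -> dict:
    """Assemble an OpenAI-style chat payload: the contextualized system
    prompt goes first, then history, then the stamped user message."""
    return {
        'model': model,
        'messages': [{'role': 'system', 'content': system}] + messages,
    }
```

_dispatch_llm() would then POST this JSON to the provider's chat endpoint and return the parsed response. The key property is that the system prompt and the stamp travel with every single call.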

05. CONTEXT FOR NON-TECHNICAL PROJECTS

The stamp is not exclusive to software development. The same mechanism adapts to any project type — the system prompt changes according to the domain, but the compressed structure stays identical.

# WRITER — working on a novel
[F|09:15 COT|Tue|D22/1096|Novel-Cycles|CHAP-07|P1|WS1]
# Project: Novel — Cycles | Chapter: 7 | Character: Elena
# Tone: melancholic, introspective | POV shifts to first person

# MUSIC PRODUCER — working on an album
[F|23:00 COT|Fri|D45/1096|Album-Roots|TRACK-03|P2|WS1]
# Project: Album — Roots | Track: 03 | BPM: 92 | Key: Am
# Section: bridge | DAW: Ableton | Reference: Nils Frahm

# RESEARCHER
[F|11:30 COT|Wed|D8/1096|Thesis-Ecosystems|CHAP-04|P3|WS2]
# Project: Thesis | Chapter: 4 — Methodology
# Active hypothesis: H3 | Pending source: IEEE 2024
Project type          Stamp example                      Key context injected
Software development  D4/1096|TeliOs|TELIO-U003          Stack, architecture, active task state.
Writing / novel       D22/1096|Novel-Cycles|CHAP-07      Chapter, character, tone, last narrative decision.
Music production      D45/1096|Album-Roots|TRACK-03      BPM, key, active section, sonic reference.
Research / thesis     D8/1096|Thesis-Ecosystems|CHAP-04  Chapter, active hypothesis, pending sources.
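One simple way to vary the system prompt by domain while keeping the stamp structure identical is a lookup on the project's meta type. A sketch — the type values and extra prompt lines below are illustrative, not a fixed FocOs schema:

```python
# Domain-specific system-prompt extras, keyed by the project's meta 'tipo'.
# These keys and strings are illustrative assumptions.
DOMAIN_PROMPTS = {
    'software': 'Consider the declared stack and architecture before proposing code.',
    'writing':  'Track chapter, POV, and tone; flag continuity breaks.',
    'music':    'Respect the declared BPM, key, and sonic references.',
    'research': 'Tie every suggestion to the active hypothesis and pending sources.',
}

def domain_context(meta: dict) -> str:
    """Pick the extra system-prompt line for the project's domain."""
    return DOMAIN_PROMPTS.get(meta.get('tipo', ''),
                              'Prioritize practical, concrete output.')
```

The stamp stays an 8-field line in every domain; only this appended line of the system prompt changes.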

06. DISPLAYING THE STAMP IN THE FOCOS UI

The JavaScript bridge exposes the context to the LLM panel in real time. The user can see at any moment which stamp is being injected and confirm the assistant has the correct state before sending any message.

// bridge.js — Display stamp in the LLM panel

async function llmInit() {
  try {
    const api = await FocOs.ready()
    const ctx = await api.get_context()

    // Show stamp in LLM panel header
    const stampEl = document.getElementById('llm-stamp')
    if (stampEl && ctx.stamp) {
      stampEl.textContent = ctx.stamp
      stampEl.title = 'Active context injected in every message'
    }
    const projEl = document.getElementById('llm-project')
    if (projEl) projEl.textContent = ctx.project
  } catch(e) {
    console.log('llmInit:', e.message)
  }
}

async function llmSend(message) {
  const history = llmGetHistory()
  // send_to_llm() injects context automatically — user does nothing
  const result = await FocOs.sendToLLM(message, history)
  if (result.ok) {
    llmAppendMessage('assistant', result.response)
    llmSaveHistory('user', message)
    llmSaveHistory('assistant', result.response)
  } else {
    llmShowError(result.error)
  }
}

-- CONCLUSION

Automatic context injection turns a generic LLM into an assistant that knows exactly where you are and what you are building — without the user writing anything extra. The 80-120 character stamp communicates the complete work session state in ultra-compressed format. The same mechanism works for any project type: code, writing, music, research. And because the context is built from FocOs's local database, it always reflects the real state — never a fiction.

> SYSTEM_READY > NODE_ONLINE

< session_end // node: exit >
> INFOGRATECH_CORE_SHELL X
$