Configuration

Complete reference for all Signet configuration options. For initial setup, see Quickstart. For the Daemon runtime, see Architecture.

Configuration Files

All files live in your active Signet workspace.

  • Default workspace: ~/.agents/
  • Persisted workspace setting: ~/.config/signet/workspace.json
  • Override for a single process: SIGNET_PATH=/some/path

File          Purpose
agent.yaml    Main configuration and manifest
AGENTS.md     Agent identity and instructions (synced to harnesses)
SOUL.md       Personality and tone
MEMORY.md     Working memory summary (auto-generated)
IDENTITY.md   Optional identity metadata (name, creature, vibe)
USER.md       Optional user preferences and profile

The loader checks agent.yaml, AGENT.yaml, and config.yaml in that order, using the first file it finds. All sections are optional; omitting a section falls back to the documented defaults.

Workspace selection and persistence

Use the CLI to inspect or change the default workspace path:

signet workspace status
signet workspace set ~/.openclaw/workspace

signet workspace set is idempotent. It safely migrates files, stores the new default workspace in ~/.config/signet/workspace.json, and updates detected OpenClaw-family configs to keep agents.defaults.workspace aligned.

Resolution order for the effective workspace is:

  1. --path CLI option
  2. SIGNET_PATH environment variable
  3. Stored CLI workspace setting (~/.config/signet/workspace.json)
  4. Default ~/.agents/
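The resolution order above can be sketched as a small precedence chain. This is an illustrative Python sketch of the documented behavior, not Signet's actual code; the function name and the assumption that workspace.json holds a "workspace" key are mine:

```python
import json
import os
from pathlib import Path

def resolve_workspace(cli_path=None, env=os.environ,
                      settings_file=Path.home() / ".config/signet/workspace.json"):
    """Illustrative sketch of the documented workspace resolution order."""
    # 1. --path CLI option wins outright
    if cli_path:
        return Path(cli_path)
    # 2. SIGNET_PATH environment variable
    if env.get("SIGNET_PATH"):
        return Path(env["SIGNET_PATH"])
    # 3. Stored CLI workspace setting
    if settings_file.exists():
        stored = json.loads(settings_file.read_text()).get("workspace")
        if stored:
            return Path(stored)
    # 4. Default workspace
    return Path.home() / ".agents"
```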

agent.yaml

The primary configuration file. Created by signet setup and editable via signet configure or the dashboard’s config editor.

version: 1
schema: signet/v1

agent:
  name: "My Agent"
  description: "Personal AI assistant"
  created: "2025-02-17T00:00:00Z"
  updated: "2025-02-17T00:00:00Z"

owner:
  address: "0x..."
  localId: "user123"
  ens: "user.eth"
  name: "User Name"

harnesses:
  - forge
  - claude-code
  - openclaw
  - opencode

embedding:
  provider: ollama
  model: nomic-embed-text
  dimensions: 768
  base_url: http://localhost:11434

search:
  alpha: 0.7
  top_k: 20
  min_score: 0.3

memory:
  database: memory/memories.db
  session_budget: 2000
  decay_rate: 0.95
  synthesis:
    harness: openclaw
    model: sonnet
    schedule: daily
    max_tokens: 4000
  pipelineV2:
    enabled: true
    shadowMode: false
    extraction:
      provider: ollama
      model: qwen3:4b
    synthesis:
      enabled: true
      provider: ollama
      model: qwen3:4b
    graph:
      enabled: true
    autonomous:
      enabled: true
      maintenanceMode: execute

hooks:
  sessionStart:
    recallLimit: 10
    includeIdentity: true
    includeRecentContext: true
    recencyBias: 0.7
  preCompaction:
    includeRecentMemories: true
    memoryLimit: 5

auth:
  mode: local
  defaultTokenTtlSeconds: 604800
  sessionTokenTtlSeconds: 86400

trust:
  verification: none

agent

Core agent identity metadata.

Field         Type     Required   Description
name          string   yes        Agent display name
description   string   no         Short description
created       string   yes        ISO 8601 creation timestamp
updated       string   yes        ISO 8601 last update timestamp

owner

Optional owner identification. Reserved for future cryptographic identity verification.

Field     Type     Description
address   string   Cryptographic identity address or external identity ID, reserved for future use
localId   string   Local user identifier
ens       string   Optional ENS or human-friendly identity alias
name      string   Human-readable name

harnesses

List of AI platforms to integrate with. Valid values: forge, claude-code, opencode, openclaw, and codex. Support for cursor, windsurf, chatgpt, and gemini is planned.

embedding

Vector embedding configuration for semantic memory search.

Field        Type     Default                    Description
provider     string   "ollama"                   "ollama" or "openai"
model        string   "nomic-embed-text"         Embedding model name
dimensions   number   768                        Output vector dimensions
base_url     string   "http://localhost:11434"   Ollama API base URL
api_key      string   (none)                     API key or $secret:NAME reference

Recommended Ollama models:

Model               Dimensions   Notes
nomic-embed-text    768          Default; good quality/speed balance
all-minilm          384          Faster, smaller vectors
mxbai-embed-large   1024         Better quality, more resource usage

Recommended OpenAI models:

Model                    Dimensions   Notes
text-embedding-3-small   1536         Cost-effective
text-embedding-3-large   3072         Highest quality

Rather than putting an API key in plain text, store it with signet secret put OPENAI_API_KEY and reference it as:

api_key: $secret:OPENAI_API_KEY
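For example, a full OpenAI embedding block that references the stored secret might look like this (model and dimensions taken from the table above):

```yaml
embedding:
  provider: openai
  model: text-embedding-3-small
  dimensions: 1536
  api_key: $secret:OPENAI_API_KEY
```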

search

Hybrid search tuning. Controls the blend between semantic (vector) and keyword (BM25) retrieval.

Field       Type     Default   Description
alpha       number   0.7       Vector weight 0-1. Higher = more semantic.
top_k       number   20        Candidate count fetched from each source
min_score   number   0.3       Minimum combined score to return a result

At alpha: 0.9 results are heavily semantic, suitable for conceptual queries. At alpha: 0.3 results skew toward keyword matching, better for exact-phrase lookups. The default of 0.7 works well generally.
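The exact combination function is internal to the daemon, but conceptually alpha acts as a convex blend weight. This sketch assumes both scores are normalized to 0-1; the function name is illustrative:

```python
def hybrid_score(vector_score, keyword_score, alpha=0.7):
    """Illustrative convex blend of semantic and BM25 scores.

    Signet's real combination function is internal; this only shows
    how alpha shifts weight between the two signals. Both inputs are
    assumed normalized to the 0-1 range.
    """
    return alpha * vector_score + (1 - alpha) * keyword_score

# A strong keyword match paired with a weak semantic match:
semantic_heavy = hybrid_score(0.2, 0.9, alpha=0.9)  # 0.27, filtered out by min_score 0.3
keyword_heavy = hybrid_score(0.2, 0.9, alpha=0.3)   # 0.69, comfortably returned
```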

memory

Memory system settings.

Field            Type     Default                Description
database         string   "memory/memories.db"   SQLite path (relative to the active workspace)
session_budget   number   2000                   Character limit for session context injection
decay_rate       number   0.95                   Daily importance decay factor for non-pinned memories

Non-pinned memories lose importance over time using the formula:

importance(t) = base_importance × decay_rate^days_since_access

Accessing a memory resets the decay timer.
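The decay formula is straightforward to compute. A minimal sketch (the helper name is illustrative, not part of Signet):

```python
def current_importance(base_importance, decay_rate=0.95, days_since_access=0, pinned=False):
    """Sketch of the documented decay formula; pinned memories never decay."""
    if pinned:
        return base_importance
    return base_importance * decay_rate ** days_since_access

# After two weeks without access, a 0.8-importance memory fades to roughly 0.39:
faded = current_importance(0.8, days_since_access=14)
```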

memory.synthesis

Configuration for periodic MEMORY.md regeneration. The synthesis process reads all memories and asks a model to write a coherent summary.

Field        Type     Default      Description
harness      string   "openclaw"   Which harness runs synthesis (forge, openclaw, claude-code, codex, opencode)
model        string   "sonnet"     Model identifier
schedule     string   "daily"      "daily", "weekly", or "on-demand"
max_tokens   number   4000         Max output tokens

Pipeline V2 Config

The V2 memory pipeline lives at packages/daemon/src/pipeline/. It runs LLM-based fact extraction against incoming conversation text, then decides whether to write new memories, update existing ones, or skip. Config lives under memory.pipelineV2 in agent.yaml.

The config uses a nested structure with grouped sub-objects. Legacy flat keys (e.g. extractionModel, workerPollMs) are still supported for backward compatibility, but nested keys take precedence when both are present.

Enable the pipeline:

memory:
  pipelineV2:
    enabled: true
    shadowMode: true        # extract without writing — safe first step
    extraction:
      provider: ollama
      model: qwen3:4b

Control flags

These top-level boolean fields gate major pipeline behaviors.

Field                          Default   Description
enabled                        true      Master switch. Pipeline does nothing when false.
shadowMode                     false     Extract facts but skip writes. Useful for evaluation.
mutationsFrozen                false     Allow reads; block all writes. Overrides shadowMode.
semanticContradictionEnabled   false     Enable LLM-based semantic contradiction detection for UPDATE/DELETE proposals.
telemetryEnabled               false     Enable anonymous telemetry reporting.

The relationship between shadowMode and mutationsFrozen matters: shadowMode suppresses writes from the normal extraction path only; mutationsFrozen is a harder freeze that blocks all write paths including repairs and graph updates.
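The gating described above can be summarized as a small predicate. This is an illustrative sketch of the documented semantics, not the daemon's actual checks:

```python
def writes_allowed(path, enabled=True, shadow_mode=False, mutations_frozen=False):
    """Illustrative write gate for pipeline V2 paths.

    path is one of "extraction", "repair", or "graph". mutationsFrozen
    blocks every write path; shadowMode only suppresses the normal
    extraction path.
    """
    if not enabled or mutations_frozen:
        return False
    if shadow_mode and path == "extraction":
        return False
    return True
```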

Extraction (extraction)

Controls the LLM-based extraction stage. Supports multiple providers.

Field           Default      Range            Description
provider        "ollama"                      "none", "ollama", "claude-code", "opencode", "codex", "anthropic", "openrouter", or "command"
model           "qwen3:4b"                    Model name for the configured provider
timeout         90000        5000-300000 ms   Extraction call timeout
minConfidence   0.7          0.0-1.0          Confidence threshold; facts below this are dropped
command         (none)                        Command provider config (bin, args[], optional cwd, optional env); required when provider: "command"

For cost and safety, the intended extraction setups are:

  • claude-code on a Haiku model
  • codex on a GPT Mini model
  • local ollama with at least qwen3:4b

Set provider: none to disable extraction entirely, which is the recommended default for VPS installs that should not make background LLM calls.

Remote API extraction can run up significant fees quickly because the pipeline makes LLM calls continuously in the background. Use anthropic, openrouter, or remote OpenCode routes only when you explicitly want that billing behavior.

When using ollama, the model must be available locally. When using claude-code, the Claude Code CLI must be on PATH. codex uses the Codex CLI as the extraction provider. Lower minConfidence to capture more facts at the cost of noise; raise it to write only high-confidence facts.

For provider: command, the summary worker executes memory.pipelineV2.extraction.command in the summary job queue control-plane path. The transcript is written to a temporary file and its path is substituted into command arguments:

  • $TRANSCRIPT (alias $TRANSCRIPT_PATH) — temp transcript file path
  • $SESSION_KEY — session key (or empty string)
  • $PROJECT — project path (or empty string)
  • $AGENT_ID — agent id for the queued job
  • $SIGNET_PATH — active Signet workspace path

For safety, user-derived tokens ($SESSION_KEY, $PROJECT, $TRANSCRIPT) are intended for args/env substitution. Keep bin and cwd fixed (or use trusted $SIGNET_PATH / $AGENT_ID), so command path resolution is not driven by transcript/session metadata.

The command’s stdout/stderr are not used as extraction output. The external command is responsible for writing memories to Signet state (for example, writing rows to memories.db).

After command extraction succeeds, synthesis-provider hooks can still run (summary generation for continuity/predictor + DAG + synthesis trigger), but summary markdown writes and insertSummaryFacts are skipped in command mode to avoid duplicate memory writes. The external command remains the source of truth for fact persistence.

Example:

memory:
  pipelineV2:
    extraction:
      provider: command
      command:
        bin: node
        args:
          - ./scripts/custom-extractor.mjs
          - --transcript
          - $TRANSCRIPT
          - --session
          - $SESSION_KEY
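As a sketch of what such a command might do (written in Python rather than Node, with the LLM call replaced by a placeholder, and assuming the memories columns shown in the Database Schema section):

```python
import sqlite3
import uuid
from datetime import datetime, timezone

def store_fact(db_path, content, source="custom-extractor"):
    """Insert one row into the memories table. Column names follow the
    Database Schema section; other columns are left to their defaults."""
    now = datetime.now(timezone.utc).isoformat()
    con = sqlite3.connect(db_path)
    con.execute(
        "INSERT INTO memories (id, content, type, source, importance, created_at, updated_at) "
        "VALUES (?, ?, 'fact', ?, 0.5, ?, ?)",
        (str(uuid.uuid4()), content, source, now, now),
    )
    con.commit()
    con.close()

def extract(transcript_path, db_path):
    """Placeholder extraction: store the first 500 characters of the
    transcript as a single fact. A real extractor would call an LLM
    and store each returned fact as its own row."""
    with open(transcript_path, encoding="utf-8") as f:
        text = f.read()
    store_fact(db_path, text[:500])
```

Wired up via provider: command, $TRANSCRIPT would be passed to extract() and the workspace's memory/memories.db path derived from $SIGNET_PATH.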

Session synthesis (synthesis)

Controls the provider the summary worker uses for session summaries. Once explicitly configured, it is independent of the fact-extraction settings.

If the synthesis block is omitted entirely, Signet falls back to the resolved extraction provider, model, endpoint, and timeout. Exception: when extraction.provider: command, synthesis falls back to synthesis defaults (ollama + default synthesis model/timeout) instead.

Field      Default                               Range            Description
enabled    true                                                   Enable background session summary generation
provider   inherited from extraction if omitted                   "none", "ollama", "claude-code", "codex", "opencode", "anthropic", or "openrouter"
model      inherited from extraction if omitted                   Model name for the configured provider
endpoint   inherited from extraction if omitted                   Optional base URL override for Ollama, OpenCode, or OpenRouter
timeout    inherited from extraction if omitted   5000-300000 ms   Summary generation timeout

Set provider: none or enabled: false to disable background session summary synthesis entirely.

synthesis.provider: command is invalid and rejected during config load.

Worker (worker)

The pipeline processes jobs through a queue with lease-based concurrency control.

Field            Default   Range             Description
pollMs           2000      100-60000 ms      How often the worker polls for pending jobs
maxRetries       3         1-10              Max retry attempts before a job goes to dead-letter
leaseTimeoutMs   300000    10000-600000 ms   Time before an uncompleted job lease expires

A job that exceeds maxRetries moves to dead-letter status and is eventually purged by the retention worker.
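The retry-to-dead-letter transition can be sketched as follows. The field names are illustrative, not Signet's actual job schema, and this assumes a failure dead-letters the job once maxRetries retries are exhausted:

```python
def job_after_failure(job, max_retries=3):
    """Illustrative sketch of the documented retry semantics: a failed
    job returns to the pending queue until maxRetries retries are used
    up, then moves to dead-letter status for the retention worker to
    eventually purge. Its lease is released either way."""
    attempts = job.get("attempts", 0) + 1
    status = "pending" if attempts <= max_retries else "dead_letter"
    return {**job, "attempts": attempts, "status": status, "lease_expires": None}
```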

Knowledge Graph (graph)

When graph.enabled: true, the pipeline builds entity-relationship links from extracted facts and uses them to boost search relevance.

Field            Default   Range        Description
enabled          true                   Enable knowledge graph building and querying
boostWeight      0.15      0.0-1.0      Weight applied to graph-neighbor score boost
boostTimeoutMs   500       50-5000 ms   Timeout for graph lookup during search

Hints (hints)

Prospective indexing generates hypothetical future queries at write time. These “hints” are indexed in FTS5 so memories match by anticipated cue, not just stored content. For example, a memory about “switched from PostgreSQL to SQLite” might generate hints like “database migration”, “why SQLite”, and “storage engine decision” — queries the user is likely to ask later.

Field       Default   Range            Description
enabled     true                       Enable prospective indexing
max         5         1-20             Maximum hints generated per memory
timeout     30000     5000-120000 ms   Hint generation LLM timeout
maxTokens   256       32-1024          Max tokens for hint generation
poll        5000      1000-60000 ms    Job polling interval

memory:
  pipelineV2:
    hints:
      enabled: true
      max: 5
      timeout: 30000
      maxTokens: 256
      poll: 5000

Traversal (traversal)

Graph traversal controls how the knowledge graph is walked during retrieval. When primary: true, graph traversal produces the base candidate pool and flat search fills gaps. When primary: false, traditional hybrid search runs first with graph boost as supplementary.

Field                    Default   Range        Description
enabled                  true                   Enable graph traversal
primary                  true                   Use traversal as primary retrieval strategy
maxAspectsPerEntity      10        1-50         Max aspects to collect per entity
maxAttributesPerAspect   20        1-100        Max attributes per aspect
maxDependencyHops        10        1-50         Max hops for dependency walking
minDependencyStrength    0.3       0.0-1.0      Minimum edge strength to follow
maxBranching             4         1-20         Max branching factor during traversal
maxTraversalPaths        50        1-500        Max paths to explore
minConfidence            0.5       0.0-1.0      Minimum confidence for results
timeoutMs                500       50-5000 ms   Traversal timeout
boostWeight              0.2       0.0-1.0      Weight for traversal boost in hybrid search
constraintBudgetChars    1000      100-10000    Character budget for constraint injection

memory:
  pipelineV2:
    traversal:
      enabled: true
      primary: true
      maxAspectsPerEntity: 10
      maxAttributesPerAspect: 20
      maxDependencyHops: 10
      minDependencyStrength: 0.3
      maxBranching: 4
      maxTraversalPaths: 50
      minConfidence: 0.5
      timeoutMs: 500
      boostWeight: 0.2
      constraintBudgetChars: 1000

The primary flag determines the retrieval strategy. In primary mode, entities are extracted from the query, the graph is walked to collect related memories, and flat hybrid search only runs to fill remaining slots. In supplementary mode (primary: false), the standard hybrid search runs first and traversal results are blended in using boostWeight. Primary mode is faster for entity-dense queries; supplementary mode is more conservative and better for freeform text.

Reranker (reranker)

An optional reranking pass that runs after initial retrieval. An embedding-based reranker is built in (uses cached vectors, no extra LLM calls). Custom cross-encoder providers can also be used.

Field       Default   Range          Description
enabled     true                     Enable the reranking pass
model       ""                       Reranker model name (empty uses the embedding-based reranker)
topN        20        1-100          Number of candidates to pass to the reranker
timeoutMs   2000      100-30000 ms   Timeout for the reranking call

Autonomous (autonomous)

Controls autonomous maintenance, repair, and mutation behavior.

Field                   Default     Description
enabled                 true        Allow autonomous pipeline operations (maintenance, repair).
frozen                  false       Block autonomous writes; autonomous reads still allowed.
allowUpdateDelete       true        Permit the pipeline to update or delete existing memories.
maintenanceIntervalMs   1800000     How often maintenance runs (30 min). Range: 60s-24h.
maintenanceMode         "execute"   "observe" logs issues; "execute" attempts repairs.

In "observe" mode the worker emits structured log events but makes no changes. When frozen is true, the maintenance interval never starts, though the worker’s tick() method remains callable for on-demand inspection.

Repair budgets (repair)

Repair sub-workers limit how aggressively they re-embed, re-queue, or deduplicate items to avoid overloading providers.

Field                    Default   Range     Description
reembedCooldownMs        300000    10s-1h    Min time between re-embed batches
reembedHourlyBudget      10        1-1000    Max re-embed operations per hour
requeueCooldownMs        60000     5s-1h     Min time between re-queue batches
requeueHourlyBudget      50        1-1000    Max re-queue operations per hour
dedupCooldownMs          600000    10s-1h    Min time between dedup batches
dedupHourlyBudget        3         1-100     Max dedup operations per hour
dedupSemanticThreshold   0.92      0.0-1.0   Cosine similarity threshold for semantic dedup
dedupBatchSize           100       10-1000   Max candidates evaluated per dedup batch

Document ingest (documents)

Controls chunking for ingesting large documents into the memory store.

Field              Default    Range         Description
workerIntervalMs   10000      1s-300s       Poll interval for pending document jobs
chunkSize          2000       200-50000     Target chunk size in characters
chunkOverlap       200        0-10000       Overlap between adjacent chunks (chars)
maxContentBytes    10485760   1 KB-100 MB   Max document size accepted

Chunk overlap ensures context is not lost at chunk boundaries. A value of 10-15% of chunkSize is a reasonable starting point.
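A fixed-size chunker with overlap looks roughly like this. It is a sketch only; Signet's actual splitter may prefer boundaries such as sentences or paragraphs:

```python
def chunk_text(text, chunk_size=2000, chunk_overlap=200):
    """Illustrative fixed-size chunker: each chunk starts
    (chunk_size - chunk_overlap) characters after the previous one,
    so adjacent chunks share chunk_overlap characters of context."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```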

Guardrails (guardrails)

Content size limits applied during extraction and recall to prevent oversized content from degrading pipeline performance.

Field                 Default   Range       Description
maxContentChars       500       50-100000   Max characters stored per memory
chunkTargetChars      300       50-50000    Target chunk size for content splitting
recallTruncateChars   500       50-100000   Max characters returned per memory in recall results

These limits are enforced at the pipeline level. Content exceeding maxContentChars is truncated before storage. Recall results are truncated at recallTruncateChars to keep session context budgets predictable.

Continuity (continuity)

Session checkpoint configuration for continuity recovery. Checkpoints capture periodic snapshots of session state (focus, prompts, memory activity) to aid recovery after context compaction or session restart.

Field                      Default   Range       Description
enabled                    true                  Master switch for session checkpoints
promptInterval             10        1-1000      Prompts between periodic checkpoints
timeIntervalMs             900000    60s-1h      Time between periodic checkpoints (15 min default)
maxCheckpointsPerSession   50        1-500       Per-session checkpoint cap (oldest pruned)
retentionDays              7         1-90        Days before old checkpoints are hard-deleted
recoveryBudgetChars        2000      200-10000   Max characters for recovery digest

Checkpoints are triggered by five events: periodic, pre_compaction, session_end, agent, and explicit. Secrets are redacted before storage.

Telemetry (telemetry)

Anonymous usage telemetry. Only active when telemetryEnabled: true. Events are batched and flushed periodically.

Field             Default   Range      Description
posthogHost       ""                   PostHog instance URL (empty disables)
posthogApiKey     ""                   PostHog project API key
flushIntervalMs   60000     5s-10min   Time between event flushes
flushBatchSize    50        1-500      Max events per flush batch
retentionDays     90        1-365      Days before local telemetry data is purged

Embedding tracker (embeddingTracker)

Background polling loop that detects stale or missing embeddings and refreshes them in small batches. Runs alongside the extraction pipeline.

Field       Default   Range    Description
enabled     true               Master switch
pollMs      5000      1s-60s   Polling interval between refresh cycles
batchSize   8         1-20     Max embeddings refreshed per cycle

The tracker detects embeddings that are missing, have a stale content hash, or were produced by a different model than the currently configured one. It uses setTimeout chains for natural backpressure.

Auth Config

Auth configuration lives under the auth key in agent.yaml. Signet uses short-lived signed tokens for dashboard and API access.

auth:
  mode: local
  defaultTokenTtlSeconds: 604800    # 7 days
  sessionTokenTtlSeconds: 86400     # 24 hours
  rateLimits:
    forget:
      windowMs: 60000
      max: 30
    modify:
      windowMs: 60000
      max: 60

Field                    Default   Description
mode                     "local"   Auth mode: "local", "team", or "hybrid"
defaultTokenTtlSeconds   604800    API token lifetime (7 days)
sessionTokenTtlSeconds   86400     Session token lifetime (24 hours)

In "local" mode the token secret is generated automatically and stored at $SIGNET_WORKSPACE/.daemon/auth-secret. In "team" and "hybrid" modes, the daemon validates HMAC-signed bearer tokens with role and scope claims.
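Signet's exact token format is internal, but conceptually an HMAC-signed bearer token with role and scope claims works like this sketch. The payload layout and claim names ("role", "scope", "exp") are assumptions made for illustration:

```python
import base64
import hashlib
import hmac
import json
import time

def sign_token(secret, claims, ttl_seconds=86400):
    """Conceptual HMAC bearer token: base64url payload, a dot, then a
    base64url HMAC-SHA256 signature. Not Signet's actual wire format."""
    claims = {**claims, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return payload.decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(secret, token):
    """Returns the claims if the signature matches and the token is
    unexpired, else None."""
    payload_b64, sig_b64 = token.rsplit(".", 1)
    expected = hmac.new(secret, payload_b64.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims if claims["exp"] > time.time() else None
```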

Rate limits

Rate limits are sliding-window counters that reset on daemon restart. Each key controls a category of potentially destructive operations.

Operation     Default window   Default max   Description
forget        60 s             30            Soft-delete a memory
modify        60 s             60            Update memory content
batchForget   60 s             5             Bulk soft-delete
forceDelete   60 s             3             Hard-delete (bypasses tombstone)
admin         60 s             10            Admin API operations

Override any limit under auth.rateLimits.<operation>:

auth:
  rateLimits:
    forceDelete:
      windowMs: 60000
      max: 1

Retention Config

The retention worker runs on a fixed interval and purges data that has exceeded its retention window. It is not directly configurable in agent.yaml; the defaults below are compiled in and apply unconditionally when the pipeline is running.

Field                     Default       Description
intervalMs                21600000      Sweep frequency (6 hours)
tombstoneRetentionMs      2592000000    Soft-deleted memories kept for 30 days before hard purge
historyRetentionMs        15552000000   Memory history events kept for 180 days
completedJobRetentionMs   1209600000    Completed pipeline jobs kept for 14 days
deadJobRetentionMs        2592000000    Dead-letter jobs kept for 30 days
batchLimit                500           Max rows purged per step per sweep (backpressure)

The retention worker also cleans up graph links and embeddings that belong to purged tombstones, and orphans entity nodes with no remaining mentions. The batchLimit prevents a single sweep from locking the database for too long under high load.

Soft-deleted memories remain recoverable via POST /api/memory/:id/recover until their tombstone window expires.

Hooks Config

Controls what Signet injects during harness lifecycle events. See Hooks for full details.

hooks:
  sessionStart:
    recallLimit: 10
    includeIdentity: true
    includeRecentContext: true
    recencyBias: 0.7
  preCompaction:
    includeRecentMemories: true
    memoryLimit: 5
    summaryGuidelines: "Focus on technical decisions."

hooks.sessionStart controls what is injected at the start of a new harness session:

Field                  Default   Description
recallLimit            10        Number of memories to inject
includeIdentity        true      Include agent name and description
includeRecentContext   true      Include MEMORY.md content
recencyBias            0.7       Weight toward recent vs. important memories (0-1)

hooks.preCompaction controls what is included when the harness triggers a pre-compaction summary:

Field                   Default    Description
includeRecentMemories   true       Include recent memories in the prompt
memoryLimit             5          How many recent memories to include
summaryGuidelines       built-in   Custom instructions for the session summary

Environment Variables

Environment variables take precedence over agent.yaml for runtime overrides. They are useful in containerized or CI environments where editing the config file is impractical.

Variable             Default                          Description
SIGNET_PATH          (unset)                          Runtime override for the agents directory
SIGNET_PORT          3850                             Daemon HTTP port
SIGNET_HOST          127.0.0.1                        Daemon host for local calls and default bind address
SIGNET_BIND          SIGNET_HOST                      Explicit bind address override (0.0.0.0, etc.)
SIGNET_LOG_FILE      (unset)                          Optional explicit daemon log file path
SIGNET_LOG_DIR       $SIGNET_WORKSPACE/.daemon/logs   Optional daemon log directory override
SIGNET_SQLITE_PATH   (unset)                          macOS: explicit SQLite dylib override used before Bun opens the database
OPENAI_API_KEY       (unset)                          OpenAI key when the embedding provider is openai

SIGNET_PATH changes where Signet reads and writes all agent data for that process, including the config file itself. Use this for temporary overrides in CI or isolated local testing.

On macOS, SIGNET_SQLITE_PATH can point at a libsqlite3.dylib build that supports loadExtension(). If it is set, Signet treats it as an authoritative override and refuses fallback if the file is missing. If it is unset, Signet checks $SIGNET_WORKSPACE/libsqlite3.dylib, where $SIGNET_WORKSPACE resolves from SIGNET_PATH, then ~/.config/signet/workspace.json, then the default ~/.agents, before trying standard Homebrew SQLite locations and finally falling back to Apple’s system SQLite.

AGENTS.md

The main agent identity file. Synced to all configured harnesses on change (2-second debounce). Write it in plain markdown — there is no required structure, but a typical layout looks like this:

# Agent Name

Short introduction paragraph.

## Personality

Communication style, tone, and approach.

## Instructions

Specific behaviors, preferences, and task guidance.

## Rules

Hard rules the agent must follow.

## Context

Background about the user and their work.

When AGENTS.md changes, the daemon writes updated copies to:

  • ~/.claude/CLAUDE.md (if ~/.claude/ exists)
  • ~/.config/opencode/AGENTS.md (if ~/.config/opencode/ exists)

Each copy is prefixed with a generated header identifying the source file and timestamp, and includes a warning not to edit the copy directly.

SOUL.md

Optional personality file for deeper character definition. Loaded by harnesses that support separate personality and instruction files.

# Soul

## Voice
How the agent speaks and writes.

## Values
What the agent prioritizes.

## Quirks
Unique personality characteristics.

MEMORY.md

Auto-generated working memory summary. Updated by the synthesis system. Do not edit by hand — changes will be overwritten on the next synthesis run. Loaded at session start when hooks.sessionStart.includeRecentContext is true.

Database Schema

The SQLite database at memory/memories.db contains three main tables.

memories

Column            Type      Description
id                TEXT      Primary key (UUID)
content           TEXT      Memory content
type              TEXT      fact, preference, decision, daily-log, episodic, procedural, semantic, system
source            TEXT      Source system or harness
importance        REAL      0-1 score, decays over time
tags              TEXT      Comma-separated tags
who               TEXT      Source harness name
pinned            INTEGER   1 if critical/pinned (never decays)
is_deleted        INTEGER   1 if soft-deleted (tombstone)
deleted_at        TEXT      ISO timestamp of soft-delete
created_at        TEXT      ISO timestamp
updated_at        TEXT      ISO timestamp
last_accessed     TEXT      Last access timestamp
access_count      INTEGER   Number of times recalled
confidence        REAL      Extraction confidence (0-1)
version           INTEGER   Optimistic concurrency version
manual_override   INTEGER   1 if user has manually edited

embeddings

Column         Type      Description
id             TEXT      Primary key (UUID)
content_hash   TEXT      SHA-256 hash of embedded text
vector         BLOB      Float32 array (raw bytes)
dimensions     INTEGER   Vector size (e.g. 768)
source_type    TEXT      memory, conversation, etc.
source_id      TEXT      Reference to parent memory UUID
chunk_text     TEXT      The text that was embedded
created_at     TEXT      ISO timestamp

memories_fts

FTS5 virtual table for keyword search. Indexes content and tags from the memories table. An after-delete trigger keeps the FTS index in sync when tombstones are hard-purged.

Harness-Specific Configuration

Claude Code

Location: ~/.claude/

settings.json installs hooks that fire at session lifecycle events:

{
  "hooks": {
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "python3 $SIGNET_WORKSPACE/memory/scripts/memory.py load --mode session-start",
        "timeout": 3000
      }]
    }],
    "UserPromptSubmit": [{
      "hooks": [{
        "type": "command",
        "command": "python3 $SIGNET_WORKSPACE/memory/scripts/memory.py load --mode prompt",
        "timeout": 2000
      }]
    }],
    "SessionEnd": [{
      "hooks": [{
        "type": "command",
        "command": "python3 $SIGNET_WORKSPACE/memory/scripts/memory.py save --mode auto",
        "timeout": 10000
      }]
    }]
  }
}

OpenCode

Location: ~/.config/opencode/plugins/

signet.mjs is a bundled OpenCode plugin installed by @signet/connector-opencode that exposes /remember and /recall as native tools within the harness.

Note: Legacy memory.mjs installations are automatically migrated to ~/.config/opencode/plugins/signet.mjs on reconnect.

OpenClaw

Location: $SIGNET_WORKSPACE/hooks/agent-memory/ (hook directory)

Also configures the OpenClaw workspace in ~/.openclaw/openclaw.json (and compatible clawdbot / moltbot config locations):

{
  "agents": {
    "defaults": {
      "workspace": "$SIGNET_WORKSPACE"
    }
  }
}

See HARNESSES.md for the full OpenClaw adapter docs.

Git Integration

If your Signet workspace is a git repository, the daemon auto-commits file changes with a 5-second debounce after the last detected change. Commit messages use the format YYYY-MM-DDTHH-MM-SS_auto_<filename>. The setup wizard offers to initialize git on first run and creates a backup commit before making any changes.
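The commit message format can be reproduced with strftime. The helper name below is illustrative, not part of Signet:

```python
from datetime import datetime

def auto_commit_message(filename, when=None):
    """Builds the documented auto-commit message format:
    YYYY-MM-DDTHH-MM-SS_auto_<filename>."""
    when = when or datetime.now()
    return when.strftime("%Y-%m-%dT%H-%M-%S") + "_auto_" + filename
```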

Recommended .gitignore for your workspace:

.daemon/
.secrets/
__pycache__/
*.pyc
*.log