Examples & Patterns
Real-world usage patterns for building AI agents with persistent memory.
Personal AI Assistant
Use Case
A personal AI that remembers user preferences, work context, and ongoing projects across conversations.
Session Setup
// Every new conversation starts with this
memory_bootstrap(ai_name="PersonalAssistant", ai_platform="Claude")
memory_init_session(
conversation_id="uuid-generated-by-client",
title="Morning Planning Session",
ai_name="PersonalAssistant",
ai_platform="Claude"
)
Storing Preferences
// User says: "I prefer morning meetings before 10am"
memory_store(
observation="User prefers meetings scheduled before 10am",
confidence=0.95,
domain="preferences",
evidence=["Direct statement from user"]
)
// User says: "I'm working on the Q1 budget report this week"
memory_store(
observation="User currently working on Q1 budget report",
confidence=0.9,
domain="active_work",
evidence=["User mentioned current focus"]
)
Retrieving Context
// Next conversation: "What's on my agenda?"
// Agent searches memory
memory_search(query="current work agenda", limit=5)
// Returns: Q1 budget report, meeting preferences, etc.
Research Team Collaboration
Use Case
A team AI that tracks papers, experiments, hypotheses, and findings across multiple researchers.
Storing Research Papers
// Team member uploads paper to Google Drive
memory_store_document(
title="Attention Is All You Need",
doc_type="paper",
url="https://drive.google.com/...",
content_summary="Introduced the Transformer architecture for NLP",
key_concepts=["Transformers", "attention mechanism", "self-attention"],
publication_date="2017-06-12"
)
Tracking Experiments
// Researcher logs experiment
memory_store(
observation="Experiment 47: Increased learning rate to 0.001, achieved 94.2% accuracy",
confidence=0.95,
domain="experiments",
evidence=["Training logs", "Validation metrics"]
)
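Before promoting individual observations into a pattern, it helps to gate on evidence volume. A minimal client-side sketch of such a gate — the helper and its threshold are assumptions for illustration, not part of the memory API:

```python
# Hypothetical client-side gate: only synthesize a pattern once enough
# distinct observations support it. MIN_EVIDENCE is an assumed threshold.
MIN_EVIDENCE = 4

def ready_for_pattern(observation_ids):
    """True once enough distinct observations back the candidate pattern."""
    return len(set(observation_ids)) >= MIN_EVIDENCE
```

With this threshold, the four observation IDs cited in the pattern call below would pass the gate, while two duplicated log entries would not.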
// Later, pattern emerges
memory_update_pattern(
category="research_findings",
pattern_name="learning-rate-sweet-spot",
pattern_text="Learning rates between 0.0008-0.0012 consistently yield best results",
confidence=0.85,
evidence_observation_ids=[47, 52, 61, 73]
)
Building Knowledge Graph
// Create concepts
memory_store_concept(
name="Transformer",
concept_type="framework",
description="Neural network architecture based on attention mechanisms"
)
memory_store_concept(
name="BERT",
concept_type="framework",
description="Bidirectional Encoder Representations from Transformers"
)
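Conceptually, these concept and relationship calls build a directed, weighted graph. A minimal in-memory illustration of that idea — the data structure here is purely illustrative, not how the server stores concepts:

```python
from collections import defaultdict

# Illustrative model of the concept graph: nodes are concept names,
# edges carry a relationship type and a weight.
edges = defaultdict(list)

def add_relationship(src, dst, rel_type, weight):
    edges[src].append((dst, rel_type, weight))

def related(concept, rel_type=None):
    """Neighbors of a concept, optionally filtered by relationship type."""
    return [dst for dst, r, _ in edges[concept] if rel_type in (None, r)]

add_relationship("BERT", "Transformer", "implements", 0.95)
```

Filtering by `rel_type` mirrors how `memory_related_concepts` narrows results to one relationship kind.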
// Connect them
memory_add_concept_relationship(
from_concept="BERT",
to_concept="Transformer",
rel_type="implements",
weight=0.95,
description="BERT is built on Transformer architecture"
)
Querying Related Work
// Researcher asks: "What frameworks are related to Transformers?"
memory_related_concepts(
concept_name="Transformer",
rel_type="implements" // Only show implementations
)
// Returns: BERT, GPT, T5, etc.
Customer Support Agent
Use Case
Support AI that remembers customer history, previous issues, and resolution patterns.
Storing Support Tickets
// Customer reports issue
memory_store(
observation="Customer ID 12345 reported login failure with 2FA",
confidence=0.95,
domain="support_tickets",
evidence=["Customer chat transcript", "System logs"]
)
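Resolution patterns usually begin with noticing similar tickets. A naive keyword-overlap matcher, sketched client-side (purely illustrative — the real service would use semantic search for this):

```python
def similar_tickets(new_issue, past_issues, min_overlap=2):
    """Surface past tickets sharing at least min_overlap keywords."""
    new_words = set(new_issue.lower().split())
    return [t for t in past_issues
            if len(new_words & set(t.lower().split())) >= min_overlap]
```

Once enough similar tickets cluster together, they become candidates for a `memory_update_pattern` call.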
// Issue resolved
memory_store(
observation="Login issue resolved by clearing browser cache and re-enabling 2FA",
confidence=0.9,
domain="resolutions",
evidence=["Support transcript", "Customer confirmation"]
)
Pattern Recognition
// After multiple similar tickets
memory_update_pattern(
category="common_issues",
pattern_name="2fa-cache-conflict",
pattern_text="2FA login failures often resolve with browser cache clear",
confidence=0.85,
evidence_observation_ids=[12345, 12389, 12401, 12456]
)
Context-Aware Support
// Customer returns with new issue
memory_recall(
domain="support_tickets",
min_confidence=0.7
// Agent scopes results to this customer's ticket history
)
// Agent can say: "I see you had a 2FA issue last month. Is this related?"
Code Assistant
Use Case
Development AI that tracks project architecture, coding patterns, and technical decisions.
Tracking Architecture
// Store project concept
memory_store_concept(
name="LegiVellum",
concept_type="project",
description="AI infrastructure stack with gate primitives",
domain="software",
status="active",
metadata={"tech_stack": ["Python", "FastAPI", "PostgreSQL"]}
)
// Store components
memory_store_concept(name="MemoryGate", concept_type="component", ...)
memory_store_concept(name="AsyncGate", concept_type="component", ...)
// Connect them
memory_add_concept_relationship(
from_concept="MemoryGate",
to_concept="LegiVellum",
rel_type="part_of",
weight=0.9
)
Recording Technical Decisions
// User makes architectural choice
memory_store(
observation="Decided to use FastAPI instead of Flask for async support",
confidence=0.95,
domain="technical_decisions",
evidence=["Team discussion", "Performance requirements"]
)
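Keeping decision observations in a consistent shape makes later pattern synthesis easier. A tiny formatting helper, sketched client-side (hypothetical, not part of the API):

```python
def decision_observation(chosen, rejected, reason):
    """Render an architectural decision as a consistent observation string."""
    return f"Decided to use {chosen} instead of {rejected} for {reason}"
```

Consistent phrasing means simple text search over the `technical_decisions` domain finds every decision of the same shape.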
// Later, pattern emerges
memory_update_pattern(
category="architecture",
pattern_name="async-preference",
pattern_text="Team consistently chooses async frameworks for I/O-heavy services",
confidence=0.85,
evidence_observation_ids=[203, 245, 289]
)
Contextual Code Suggestions
// User asks: "How should I structure the API endpoints?"
// Agent searches architectural patterns
memory_search(query="API structure FastAPI", limit=5)
// Returns: past decisions, patterns, and related concepts
// Agent: "Based on your LegiVellum project, I recommend..."
Learning Companion
Use Case
Educational AI that tracks student progress, learning style, and knowledge gaps.
Tracking Learning Progress
// Student completes lesson
memory_store(
observation="Student completed React Hooks module with 92% accuracy",
confidence=0.95,
domain="progress",
evidence=["Quiz results", "Completion timestamp"]
)
// Identify struggling areas
memory_store(
observation="Student struggled with useEffect dependency arrays",
confidence=0.8,
domain="knowledge_gaps",
evidence=["Quiz errors", "Question patterns"]
)
Adaptive Teaching
// Recognize learning pattern
memory_update_pattern(
category="learning_style",
pattern_name="visual-learner",
pattern_text="Student responds best to visual examples and diagrams",
confidence=0.85,
evidence_observation_ids=[12, 34, 56, 78]
)
// Next lesson: Agent provides diagram-heavy content
Spaced Repetition
// Query older concepts for review
memory_recall(
domain="progress",
min_confidence=0.6 // Topics not fully mastered
)
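The recall above can drive a simple review schedule: lower-confidence topics come back sooner. The interval formula below is an assumption for illustration, not part of the memory API:

```python
from datetime import datetime, timedelta

def next_review(last_seen, confidence):
    """Schedule the next review; weakly held topics return sooner."""
    days = 1 + int(confidence * 14)  # 0.0 -> 1 day out, 1.0 -> 15 days out
    return last_seen + timedelta(days=days)
```

A topic at confidence 0.6 last seen on January 1 would come up for review around January 10, while a brand-new topic returns the next day.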
// Use recency to schedule reviews
memory_search(query="hooks concepts", limit=10)
// Older memories = need review
Best Practices
Confidence Scoring
Use high confidence (0.9-1.0) for:
- Direct user statements
- Verified facts from reliable sources
- System-generated logs or metrics
Use medium confidence (0.6-0.8) for:
- Inferred preferences
- Patterns with some uncertainty
- Historical behavior analysis
Use low confidence (<0.6) for:
- Weak signals or hunches
- Ambiguous statements
- Speculative connections
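These tiers can be encoded as a small client-side default table. An illustrative helper, not part of the API — the evidence-type names and values are assumptions to be tuned per use case:

```python
# Hypothetical helper mapping evidence quality to a default confidence band,
# mirroring the tiers above.
CONFIDENCE_DEFAULTS = {
    "direct_statement": 0.95,    # user said it explicitly
    "verified_fact": 0.95,       # reliable external source
    "system_log": 0.9,           # instrumentation-generated
    "inferred_preference": 0.7,  # deduced from behavior
    "historical_pattern": 0.65,  # trend across past sessions
    "hunch": 0.4,                # weak or speculative signal
}

def suggest_confidence(evidence_type):
    """Default confidence for an evidence type; 0.5 when unknown."""
    return CONFIDENCE_DEFAULTS.get(evidence_type, 0.5)
```

A default table like this keeps confidence scores consistent across agents instead of each call picking an arbitrary number.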
Domain Organization
Keep domains focused and consistent:
// Good - specific domains
"preferences"
"technical_decisions"
"active_work"
"experiments"
// Bad - too broad or vague
"stuff"
"notes"
"things"Use hierarchical naming when needed:
"project/architecture"
"project/deployment"
"user/preferences/communication"
"user/preferences/scheduling"Evidence Collection
Always provide supporting evidence:
// Good - specific evidence
evidence=["User chat transcript line 47", "System log timestamp 2025-01-14T10:30Z"]
// Bad - vague evidence
evidence=["user said so"]
// Acceptable - no evidence is better than bad evidence
evidence=[]
Lifecycle Management
Periodic Cleanup
// Preview what would be archived
list_archive_candidates(limit=50)
// Archive low-value memories
archive_memory(
threshold={"days_inactive": 90, "min_score": 0.3},
dry_run=false
)
Rehydrating Old Context
// User asks about old project
search_cold_memory(query="legacy API project")
// Bring back to hot tier
rehydrate_memory(
query="legacy API project",
bump_score=true // Increase importance
)
Anti-Patterns to Avoid
Don't Store Everything
Bad: Storing every trivial interaction
// Don't do this
memory_store(observation="User said hello", ...)
memory_store(observation="User thanked me", ...)
Good: Store meaningful observations only
// Do this instead
memory_store(
observation="User prefers formal greetings in professional contexts",
confidence=0.8,
domain="preferences"
)
Don't Over-Synthesize
Bad: Creating patterns too early
// After just 2 observations
memory_update_pattern(
pattern_text="User always prefers X",
evidence_observation_ids=[1, 2] // Not enough data
)
Good: Wait for sufficient evidence
// After 5+ observations showing consistency
memory_update_pattern(
pattern_text="User consistently prefers X in Y context",
evidence_observation_ids=[1, 2, 5, 8, 12]
)
Don't Ignore Sessions
Bad: Skipping session initialization
// Missing session context
memory_store(observation="Something important", ...)
Good: Always initialize sessions
// Proper session tracking
memory_init_session(...)
memory_store(observation="Something important", ...)
Next Steps
Ready to implement these patterns? Review the API Reference for complete tool documentation.