Back to Portfolio
Personal Project

Second Brain: Personal AI Assistant with 3-Layer Security

Builder & Security Architect|
Active
0 (baseline: 3 failed attempts in first week)
Security Incidents
15 of 15 adversarial tests
Injection Patterns Blocked
94% → 99.8% (after circuit breaker implementation)
Uptime
91% (with safety filters vs 96% without — acceptable trade-off)
Response Accuracy

The Challenge

Personal assistants need access to sensitive data (calendar, emails, vault notes) to be useful, but LLMs are vulnerable to prompt injection attacks. Needed a system that could integrate Google Calendar, Slack, and Obsidian vault memory while defending against adversarial prompts trying to leak secrets or manipulate outputs.

The Approach

Built FastAPI + Pydantic AI assistant with 3-layer prompt injection defense: (1) Input Sanitization — strip executable patterns from user prompts, (2) Instruction Hierarchy — system instructions override user instructions via structured prompting, (3) Output Validation — regex patterns detect leaked secrets or policy violations. Integrated Google Calendar API for scheduling, Slack API for notifications, and Obsidian vault as RAG memory store. Logged all security events with structlog for audit trail.

Key Learnings

  • Defense in depth works — no single layer is enough against determined adversarial prompts
  • Obsidian vault as RAG memory is powerful — 80% query hit rate on personal knowledge
  • FAILURE: First iteration used regex-only output validation — missed semantic leaks. Had to add semantic filtering layer.
  • Personal projects are the best way to learn real AI security before deploying at enterprise scale