How It's Built
A technical deep dive into the architecture, decisions, and challenges behind Be Right Back. Eight articles covering the journey from reading iMessage data to running fine-tuned models locally.
Tip: Each article has a Technical and a Simple mode. Toggle between them based on your preference.
The Problem & Vision
Why build a local-first AI persona app? The inspiration, motivation, and ethical considerations.
Data Foundation - Understanding iMessage
Deep dive into the iMessage database schema, message decoding, and snapshot strategy.
Ingestion Pipeline - From Raw to Embedded
The five-stage pipeline that transforms raw messages into searchable vector embeddings.
Vector Search & RAG
How semantic search and retrieval-augmented generation power intelligent conversations.
Training Pipeline - LoRA & MLX
Local LoRA fine-tuning on Apple Silicon using MLX for personalized language models.
Inference System - Running Models Locally
Multi-backend architecture for running fine-tuned models entirely on your Mac.
Desktop App Architecture - Tauri & React
Building a native macOS app with Rust, Tauri, and React for performance and security.
Privacy & Ethics by Design
How architectural decisions ensure your data never leaves your device.
> Content is being written. Check back for updates as articles are completed.