
This is a submission for the Gemma 4 Challenge: Build with Gemma 4
What I Built
dwriter is a minimalist, terminal-first journaling and productivity tool for jotting down fleeting thoughts or important notes without leaving your terminal. It tackles "logging friction" with a zero-latency capture system and a rich, AI-augmented reflection layer.

It features an interactive AI 2nd-Brain that understands your entire work history via a dual-model reasoning pipeline and a closed learning loop that automatically extracts durable facts, such as preferences, goals, and recurring constraints, to build a persistent personal knowledge base. This agentic ReAct pipeline allows the AI to go beyond simple chat by using tools to search your journal, tasks, and Git history for grounded, data-driven answers.

Under the hood, dwriter pairs graph indexing in LadybugDB with SQLite to power complex relationship queries and high-speed full-text search. Data remains private and accessible across devices through a local-first, Git-backed synchronization engine that uses Lamport clocks for conflict resolution, while seamless Obsidian integration lets you export AI-generated standups, weekly retros, and burnout checks directly to your vault as Markdown.
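To make the Lamport-clock conflict resolution concrete, here is a minimal sketch of how a Git-backed sync engine can pick a winner between two versions of the same entry. The `Entry` shape, field names, and tie-breaking rule are illustrative assumptions, not dwriter's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    id: str
    text: str
    clock: int    # Lamport timestamp
    device: str   # deterministic tie-breaker when clocks collide

def merge(local: Entry, remote: Entry) -> Entry:
    # Higher Lamport clock wins; on a tie, compare device IDs so
    # every replica converges to the same version independently.
    if (remote.clock, remote.device) > (local.clock, local.device):
        return remote
    return local

def on_receive(local_clock: int, remote_clock: int) -> int:
    # Standard Lamport rule: advance past any clock you observe.
    return max(local_clock, remote_clock) + 1
```

The point of the tie-breaker is that conflict resolution needs no coordination: both devices run the same pure function over the same two entries and arrive at the same answer.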
Screenshots
Code
How I Used Gemma 4
To achieve the perfect balance between reasoning depth and terminal-speed performance, I implemented a Dual-Model AI Pipeline using the Gemma 4 family. This architecture is critical because it prevents the "intelligence latency" that typically plagues local LLM integrations. By decoupling interactive reasoning from background processing, dwriter remains responsive while building a deeply indexed personal knowledge base.
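The routing layer behind this decoupling can be sketched as a simple task-to-model dispatch. The task names and model tags below are assumptions for illustration; the real pipeline's identifiers may differ.

```python
# Interactive reasoning goes to the larger model; high-frequency
# background extraction goes to the small, fast one.
INTERACTIVE_TASKS = {"chat", "weekly_retro", "burnout_check"}
BACKGROUND_TASKS = {"fact_extraction", "auto_tag", "summarize"}

def pick_model(task: str) -> str:
    if task in INTERACTIVE_TASKS:
        return "gemma-e4b"   # Main Brain: depth over latency
    if task in BACKGROUND_TASKS:
        return "gemma-e2b"   # Daemon: latency over depth
    raise ValueError(f"unknown task: {task}")
```

Keeping the split explicit in one place means background jobs can never accidentally block the interactive path on the heavier model.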
Gemma 4 E4B (4B Dense) — The "Main Brain"
The Main Brain is responsible for high-stakes reasoning and interactive synthesis. It powers the "2nd-Brain" chat and generates complex analytical reports like Weekly Retrospectives and Burnout Assessments.
- The Agentic ReAct Loop: Instead of simple text completion, the E4B model operates via a Reason + Act (ReAct) pipeline. When you ask a question, it reasons about your intent, selects the appropriate tool (e.g., run_cypher, fetch_recent_commits, or search_facts), and synthesizes the raw data into a grounded, professional response.
- Context Synthesis: Its higher reasoning density allows it to merge multiple streams of context (your journal, Git history, and tasks) to identify subtle trends and provide data-driven advice.
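The core of a ReAct loop is parsing the model's output into either a tool call or a final answer. This is a hypothetical skeleton: the output format (`Action: tool(arg)` / `Final: ...`), the stub tools, and their return values are assumptions, not dwriter's actual protocol.

```python
import re

# Stub tools standing in for the real journal/Git/fact backends.
TOOLS = {
    "search_facts": lambda q: ["prefers morning deep-work blocks"],
    "fetch_recent_commits": lambda q: ["a1b2c3 fix sync race"],
}

def react_step(model_output: str):
    # The model either requests a tool (`Action: tool("arg")`),
    # whose result is fed back as an observation, or answers.
    m = re.match(r'Action:\s*(\w+)\((.*)\)', model_output)
    if m:
        tool, arg = m.group(1), m.group(2).strip('"')
        return ("observation", TOOLS[tool](arg))
    return ("final", model_output.removeprefix("Final: "))
```

The driver loop simply alternates model calls and `react_step` until a `final` result comes back, appending each observation to the prompt so the next reasoning step is grounded in real data.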
Gemma 4 E2B (2B Dense) — The "Daemon"
The Daemon handles the high-frequency, structured extraction tasks that happen silently in the background. In a terminal environment, speed is a dealbreaker; the E2B model is optimized for near-instant execution.
- The Closed Learning Loop: Every time you save an entry, the Daemon performs Fact Extraction and Semantic Auto-Tagging. It identifies "Durable Facts" (preferences, long-term goals, and recurring constraints) and projects them into the graph index.
- Recursive Summarization: It works in the background to summarize long-term history, ensuring that your "2nd Brain" maintains a compressed yet accurate map of your progress without causing UI stutters or interrupting your flow.
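A sketch of the fact-extraction step: the daemon asks the small model for strict JSON and then validates it defensively, so one malformed response can never poison the fact graph. The prompt wording, the `kind` taxonomy, and the validation rules here are illustrative assumptions.

```python
import json

def build_prompt(entry: str) -> str:
    # Ask E2B for machine-readable output only.
    return ("Extract durable facts (preferences, goals, constraints) "
            "from this journal entry as a JSON list of objects with "
            "'kind' and 'fact' keys. Output only JSON.\n\nEntry:\n" + entry)

def parse_facts(model_json: str) -> list:
    # Drop anything that is not valid JSON or not a known fact kind.
    try:
        facts = json.loads(model_json)
    except json.JSONDecodeError:
        return []
    allowed = {"preference", "goal", "constraint"}
    return [f for f in facts
            if isinstance(f, dict) and f.get("kind") in allowed]
```

Validated facts would then be upserted into the graph index, keyed so that re-extracting the same entry is idempotent.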
Local-First Privacy & Persistence
By running this dual-model pipeline locally, dwriter transforms from a simple log file into a Living Knowledge Base. The Daemon constantly refines your personal fact graph, which the Main Brain then uses to provide uniquely tailored insights. Because everything stays on your machine, your most sensitive thoughts and work logs remain entirely private, gaining intelligence with every entry you write.
Project author: Rhaeyyan