Tim Disney

Building a Personal GTD System in Plain Text with AI Agents

I've been experimenting with building a plain-text GTD system with Flint. I started by creating a todo note type with a couple of simple metadata fields (e.g. status, due date) and agent instructions to keep things concise and action-oriented.

Creating and updating todos then happens conversationally:

Me: make a todo to schedule a dentist appointment today

Agent: Done.

Later in the day:

Me: mark dentist appointment done

Agent: Updated.

My favorite part of this approach is that you can then ask the agent "what should I be working on today" and it's generally smart enough to look through not just your todos but also other kinds of notes (daily logs, project notes, etc.) and give you a pretty intelligent response.

This is fine but gets awkward when updating multiple items at a time. What I wanted was a way of seeing and updating multiple todos at once while still keeping the benefits of text and agent smarts. So I just asked Flint to make me a special note that it would keep in sync:

Me: create a todo note that is special called "today", it should:

  • list all the other todo items that should be worked on today
  • format items like - [ ] todo item.
  • update agent instructions to update this note when updating other notes
  • when the note has items checked off, synchronize with corresponding note status
  • have a section called "Quick Add New Tasks" to easily add new items
  • each todo should be in one of three states: now (I want to focus on working on it today), upcoming (ready to be worked on soon), someday (a backlog of stuff to plan out eventually)
  • the today note should group by status but not show all the someday tasks (just show a count)
  • items in the now section should be written like - [ ] + todo item and items in the upcoming sections are written like - [ ] todo item.
  • when syncing the today note look for changes to the + to move items from the now state to the upcoming or vice versa

Flint then made a "today" note that looks like this:

# Today - [Current Date]

## NOW (Focus on today) 🎯
- [ ] + Finish quarterly report draft
- [ ] + Review team feedback on project proposal
- [ ] + Schedule dentist appointment
- [ ] + Follow up on client contract (OVERDUE)

## UPCOMING (Ready to work on soon) 📋
- [ ] Plan next week's team meeting (Due: Friday)
- [ ] Update budget spreadsheet (Due: End of month)
- [ ] Research new project management tools

## Quick Add New Tasks
*Add new todo items below - use + for NOW, - for SOMEDAY, no symbol for UPCOMING*
- [ ]
- [ ]
- [ ]

Now I can easily check off (e.g. change [ ] to [x]) multiple items and then ask Flint to sync the note and it will update each todo note as completed and refresh the today note.
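To make the sync concrete, here's a rough Python sketch of how the today note's line format could be parsed back into task state. This is my own illustration of the convention described above, not Flint's actual implementation:

```python
import re

# Line convention from the today note:
#   "- [ ] + task" = NOW, "- [ ] task" = UPCOMING, "[x]" = completed
TODO_LINE = re.compile(r"^- \[( |x)\] (\+ )?(.+)$")

def parse_todo_line(line: str):
    """Parse one today-note line into its task state, or None if it isn't a todo."""
    m = TODO_LINE.match(line.strip())
    if not m or not m.group(3).strip():
        return None
    return {
        "done": m.group(1) == "x",
        "state": "now" if m.group(2) else "upcoming",
        "title": m.group(3).strip(),
    }
```

A real sync step would then diff these parsed states against each individual todo note's metadata and update whichever side changed.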

A day's interaction could look something like this:

Morning (9:00 AM):

User opens Today note, sees:
- [ ] + Write project proposal
- [ ] + Call insurance company
- [ ] Review budget numbers (Due: Thursday)

User adds + symbol to "Review budget numbers":
- [ ] + Review budget numbers (Due: Thursday)

System automatically:
- Updates individual todo note workflow_state to "now"
- Moves task to NOW section in Today note

Midday (1:00 PM):

User checks off completed task in Today note:
- [x] + Write project proposal ✅

System automatically:
- Updates individual todo note status to "completed"
- Adds completion timestamp
- Moves to "Recently Completed" section

Evening (6:00 PM):

User adds new task to Quick Add section:
- [ ] + Book flight for conference

System automatically:
- Creates new individual todo note: "Book flight for conference"
- Sets workflow_state to "now" (because of + symbol)
- Adds to NOW section in Today note
- Clears the Quick Add entry

I think this is pretty cool: you can build an approximation of a full-fledged GTD app like Things or OmniFocus in plain text just by writing down a few simple instructions.

Another neat thing is that I built this workflow conversationally. My example above defines the today note in a single instruction, but I actually went back and forth with the agent quite a bit to narrow down how the today note should best be structured.

This bidirectional sync between plain text and AI agents feels like a fundamental building block. You get the benefits of structured data and automation while keeping the flexibility and ownership of plain text. I suspect this pattern will be useful beyond task management.


The Note-Taking Trap - Why AI Could Save (or Destroy) How We Think

I'm obsessed with note-taking systems. I've been obsessed with them for most of my computer-using life. Evernote, Notion, Roam, Obsidian—each promised to be my "external brain". I spent hours building out workflows to help me think and act on the things that mattered to me. But inevitably I would hit the same wall when my system became unmanageable and I had to start over from scratch.

Here's the problem: your thinking changes. Your projects shift. Your interests expand. But most note-taking systems lock you into structures that become prisons over time. Maybe you organize by project phases—"Research", "Planning", "Execution", "Review". Then you start working on iterative projects that don't follow linear phases, and your structure crumbles.

You either stick with categories that no longer fit, or you abandon the system entirely and start over.

Now AI promises a solution: just dump everything into a smart assistant and let it handle the thinking for you. Sounds perfect, right?

Maybe, but I think that approach is dangerous.

When we fully outsource our thinking to AI, our thinking atrophies. Think about the Google Maps effect: before GPS, we mentally mapped neighborhoods, remembered landmarks, and built spatial reasoning. After GPS, we can't navigate familiar areas without turn-by-turn directions. The same thing happens with AI-assisted thinking.

The messy work of organizing ideas, making connections, and structuring knowledge isn't busy work—it's how we actually think and learn. Automate that away, and you're automating away your own intelligence.

Plus, you're handing over your most valuable asset—your accumulated knowledge and thinking—to systems you don't control.

So here's what I want: an AI thinking system that I control and that augments my thinking instead of replacing it.

Instead of throwing everything into an AI void, imagine an agent that works with you based on your explicit preferences for how you want to think and organize. You tell the system what kinds of notes you work with—daily journals, project logs, book summaries, meeting notes, research captures—and what workflows matter to you: weekly reviews, concept mapping, progressive summarization, or building arguments over time.

For example, you specify that you keep "insight notes" and "reference notes" separate, and prefer connecting ideas through explicit linking rather than tagging. The AI then notices when you write about "flow state" in an insight note, remembers you mentioned "deep work" in a book reference from 6 months ago and "maker's schedule" in a meeting capture, then suggests creating a synthesis document that fits your preferred workflow—maybe a new insight note that builds on these connections, or links to add to your existing concept map.

Or when your project logs increasingly reference the same underlying problems across different domains, the AI suggests a reorganization that aligns with how you actually think—perhaps shifting from department-based folders to problem-solution frameworks, but only because you've indicated you prefer outcome-oriented organization over categorical filing.

The AI handles the friction—the tedious reorganization, automatically tagging notes with concepts and people mentioned, the structural updates—while you focus on the actual thinking. The goal isn't to eliminate the work of thinking. It's to eliminate the work that gets in the way of thinking.

This vision sounds great in theory, but how do you actually build it? The traditional approach would be to create another proprietary platform, which is exactly the kind of lock-in I want to avoid. Instead, we need a way for AI models to work with your note-taking system while keeping your notes completely under your control.

MCP Is Neat

This is where I think Model Context Protocol (MCP) unlocks something special. Unlike proprietary systems that lock you into a platform, MCP unlocks the potential for any AI model to work with your data while you stay in control.

Consider the difference: in traditional systems, your notes are locked in Notion's proprietary format. Want to switch to Obsidian? Technically there's an export, but it breaks formatting and loses connections. If the company gets acquired, your thinking system disappears.

With an MCP-enabled system, your notes can just be files in a folder. The AI agent reads and writes through the MCP protocol, but you can switch AI models, sync services, or editing tools without losing anything. Your 10-year+ knowledge base outlasts any single company or platform.

But unlike traditional file-based systems like Obsidian where you're on your own for organization, the AI agent becomes your active partner in managing, connecting, and evolving this information.

This separation of concerns (intelligence versus data storage) enables something that wasn't possible before: true agent-first design where the AI can be deeply integrated with your thinking process and you control your data.

This isn't some distant future vision. It's what I'm building right now with Flint Note.

How Flint Works

Flint is an MCP server that organizes your notes as a vault of ordinary markdown files with semantic note types. Each note type defines its purpose, custom agent instructions, and metadata schema. Let me show you how this works in practice by walking through setting up a vault with three common note types.

Setting Up Your Vault

A vault starts as just a directory of markdown and YAML config files:

my-notes/
├── .flint-note/
│   └── config.yml
├── daily/
│   └── _description.md
├── todos/
│   └── _description.md
└── reading/
    └── _description.md

The magic happens when you define what each note type means and how the AI should behave when working with it.

Creating Note Types Through Conversation

Let's start by defining our note types:

You: lets create three note types: daily, todo, and reading

Agent: I'll create all three note types for you: daily, todo, and reading. Let me set each one up with thoughtful agent instructions and useful metadata schemas.

[Creates daily/_description.md, todo/_description.md, reading/_description.md]

Perfect! I've created all three note types for your vault:

📅 Daily (Journal & Reflection)

  • Purpose: Daily journal entries for reflection, planning, and personal growth
  • Features: Mood tracking, energy levels, gratitude lists, key insights, and tomorrow's focus
  • Agent Behavior: Structures entries with morning intentions, daily events, and evening reflection while identifying patterns and celebrating progress

✅ Todo (Task Management)

  • Purpose: Task management with priority, deadlines, and progress monitoring
  • Features: Priority levels, categories, completion tracking, due dates, and project connections
  • Agent Behavior: Formats tasks as checkboxes, organizes by priority, identifies dependencies, and suggests productivity improvements

📚 Reading (Books & Articles)

  • Purpose: Reading notes for books, articles, and other materials with insights and connections
  • Features: Author tracking, reading status, ratings, key takeaways, and topic organization
  • Agent Behavior: Structures notes with summaries and key ideas, identifies connections to other materials, and encourages critical thinking

Your vault is now ready for comprehensive knowledge management! You can start creating notes of any of these types. Just say something like:

  • "Create today's daily note"
  • "Add a todo list for this week's tasks"
  • "I want to create a reading note for [book/article name]"

What would you like to create first?

The resulting daily/_description.md looks like this:

# Daily

## Purpose

Daily journal entries for reflection, planning, and personal
growth tracking

## Agent Instructions

- Structure daily notes with clear sections: Morning
  intentions, daily events, evening reflection
- Extract and highlight key insights, lessons learned,
  and personal growth moments
- Identify patterns in mood, energy, productivity,
  and relationships
- Suggest connections to previous daily entries when
  similar themes emerge
- Encourage reflection questions: What went well? What
  could be improved? What am I grateful for?
- Track recurring themes and suggest deeper exploration
  through follow-up questions
- Notice and celebrate small wins and progress toward
  personal goals
- When appropriate, suggest creating separate notes for
  significant insights or ideas that emerge

## Metadata Schema

Expected frontmatter or metadata fields for this note type:

- `mood`: Overall mood for the day (optional, select,
  options: [`energized`, `content`, `neutral`, `tired`,
  `stressed`, `excited`, `reflective`, `grateful`])
- `energy_level`: Energy level (1-10 scale) (optional,
  number, min: 1, max: 10)
- `gratitude`: Things you're grateful for today (optional, array)
- `key_events`: Significant events or moments from the
  day (optional, array)
- `insights`: Key insights or lessons learned (optional, array)
- `tomorrow_focus`: Main focus or intention for
   tomorrow (optional)

Since the description is just a plain markdown file, you can have the agent update it or edit it yourself in a text editor.
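Since daily notes carry the metadata schema above, a filled-in note might look something like this. This is a hypothetical example: only the field and section names come from the schema and agent instructions; the values are illustrative.

```markdown
---
mood: content
energy_level: 7
gratitude:
  - Morning walk before work
key_events:
  - Finished the quarterly report draft
insights:
  - Short breaks keep my energy steady
tomorrow_focus: Review team feedback on the proposal
---

## Morning Intentions
...

## Evening Reflection
...
```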

Adding Notes Through Natural Conversation

Now let's see the system in action:

You: Add today's daily note

Agent: [Creates daily/2025-06-27.md with structured content]

Perfect! I've created your daily note for today, June 27, 2025. Following the agent instructions for daily notes, I've structured it with clear sections to guide your reflection:

Your Daily Note Structure

  • Morning Intentions (Space to set your focus for the day)
  • Daily Events (Capture key moments, interactions, and accomplishments)
  • Evening Reflection (Process how the day went with guided questions)
  • Gratitude Section (Practice appreciation)
  • Key Insights & Lessons (Extract learning and growth moments)
  • Tomorrow's Focus (Set intention for the next day)

The note is ready for you to fill in! As you use it, I'll help you:

  • Identify patterns in your mood, energy, and productivity
  • Connect themes to previous daily entries
  • Celebrate your small wins and progress
  • Extract insights that might be worth exploring in separate notes

You can also add metadata like your mood, energy level (1-10), and specific gratitude items if you'd like more structured tracking. How would you like to start filling it in?

You: My energy is about a 7, pretty good energy

Agent: Great! I've updated your daily note to capture that energy level of 7. That's solid energy for the day!

With that good energy level, it sounds like you're having a productive day. What's been contributing to that energy? Any particular events, accomplishments, or moments that have stood out so far today?

I can help you capture those in the daily events section, or if you're reflecting on the day, we could work on the evening reflection part. What would you like to add to your daily note?

Adding a reading note:

You: Add a reading note for "The Unaccountability Machine" by Dan Davies. I've finished it, here's a great quote:

"An organisation does things, and it systematically does some things rather than others. But that's as far as it goes. Systems don't make mistakes – if they do something, that's their purpose."

Agent: Excellent! I've added that powerful quote and updated your reading note to show you've completed the book. That's a really striking insight – the idea that "systems don't make mistakes, if they do something, that's their purpose" cuts right to the heart of how we think about organizational accountability.

Since you've finished the book, I'd love to help you capture more of your insights:

  • What was the main thesis that resonated most with you?
  • How does this quote connect to the broader argument about unaccountability?
  • Are there specific examples from the book that illustrate this "systems don't make mistakes" principle?
  • Does this change how you think about organizations you've worked with or observed?

This seems like the kind of concept that could warrant its own separate note if it becomes a recurring theme in your thinking about systems and organizations. What other key takeaways stood out to you from the book?

[Creates reading/the-unaccountability-machine.md with the quote and reading status]

Evolving Your System

The real power emerges when you want to adapt how the system works:

You: Update my daily notes to also ask about tomorrow's top priority

Agent: I'll modify your daily note instructions to include planning tomorrow's main focus.

[Updates daily/_description.md to add tomorrow planning prompt]

Updated! Now when you create daily notes, I'll also ask: "What's your top priority for tomorrow?" This will help bridge your reflection with forward planning.

You: Actually, make it three priorities, not just one

Agent: Good adjustment! Three priorities gives you more flexibility while still maintaining focus.

[Updates the instruction again]

Done! I'll now ask for your top three priorities for tomorrow in each daily note.

The Result: Intelligent, Adaptive Organization

What you end up with is a system that understands the different contexts of your thinking.

But unlike rigid templates, every aspect can evolve through natural conversation. The AI learns your preferences and adapts its behavior, while your data remains simple markdown files you can edit anywhere.

The system grows with you, becoming more intelligent about your specific needs while keeping your information completely portable and under your control.

Why This Matters

We're at an inflection point where AI can finally deliver on the vision of an external brain that truly enhances thinking. But only if we're willing to rethink our assumptions about how knowledge systems should work.

Most AI-enhanced note apps are still designed for the old world where humans do the organizing and AI occasionally helps out. Flint flips that: the AI is your thinking partner from day one, and the system is designed to make that partnership as natural and powerful and flexible as possible.

Since all your notes are just markdown files, there's no lock-in. You can work on your notes with Flint, or fire up vim or Obsidian or iA Writer or any other local markdown editor you want. You can sync or backup with whatever file syncing service you already use (git, Dropbox, iCloud, Obsidian Sync, etc.).

But unlike other markdown-based systems, Flint is architected from the ground up to be agent-first. Every design decision (from the vault structure to the metadata schemas to the conversation patterns) optimizes for AI collaboration rather than human maintenance.

Getting Started

If this sounds interesting, you can try Flint by adding it to your MCP client of choice. You'll need Node.js installed first:

{
  "mcpServers": {
    "flint-note": {
      "command": "npx",
      "args": ["@flint-note/server@latest"]
    }
  }
}

I've also created system prompts to help guide your model's interactions: system_core.md for intelligent models like Claude 4 or ChatGPT 4, and simple_models_detailed.md for simpler or local models.

The project is still early, but if you try it out I'd love to hear what you think. The system is designed to evolve based on how people actually want to interact with their knowledge.

Check out the GitHub project to dive deeper, or just give it a try. The future of knowledge work is agent-first, and Flint is my attempt to build part of that future today.


MCP is neat

Jotting down an aha! moment I had about how MCP can augment the utility of LLMs in some of the things I do every day.

The background is I've been trying to be more intentional about committing things to memory using Anki and spaced repetition. I was inspired years ago by Michael Nielsen's article about using SRS to read and remember ideas from papers so that's my main goal: commit interesting ideas from papers to memory.

When I first read that article I tried to build an Anki routine but fell off—the activation energy required to add cards was always too high (also writing good prompts is hard).

LLMs are really good at summarizing and restructuring text, so I wondered if they could help with writing Anki cards. I'm not sure this is the right approach long-term, since writing cards by hand would almost certainly aid the learning process. That said, it turns out I couldn't sustain the manual effort, so I think it's worth experimenting to see if a different approach could help.

So my approach is to use an LLM to help write the cards. MCP then lets me easily connect the LLM directly to Anki and even further reduce the activation energy required for authoring cards.

I'm using Raycast, but the basic idea works in any MCP-capable LLM client (e.g. Claude desktop). The MCP server I'm using is anki-mcp-server, which requires Anki and the AnkiConnect plugin. Wiring up the Anki MCP server to Raycast is super straightforward and took me all of five minutes.
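Under the hood, the MCP server is talking to AnkiConnect's local HTTP API, which listens on port 8765 by default. To give a sense of what "add those cards" amounts to, here's a minimal Python sketch that posts an addNote request directly to AnkiConnect; the helper names are mine, and it assumes Anki is running with the plugin installed:

```python
import json
import urllib.request

ANKI_CONNECT_URL = "http://localhost:8765"  # AnkiConnect's default endpoint

def build_add_note(deck: str, front: str, back: str) -> dict:
    """Build an AnkiConnect `addNote` request for a Basic card."""
    return {
        "action": "addNote",
        "version": 6,
        "params": {
            "note": {
                "deckName": deck,
                "modelName": "Basic",
                "fields": {"Front": front, "Back": back},
            }
        },
    }

def add_card(deck: str, front: str, back: str) -> int:
    """POST the request to AnkiConnect and return the new note id."""
    data = json.dumps(build_add_note(deck, front, back)).encode("utf-8")
    req = urllib.request.Request(
        ANKI_CONNECT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    if result.get("error"):
        raise RuntimeError(result["error"])
    return result["result"]
```

The MCP server wraps this kind of call as a tool the model can invoke, which is why "now add those cards to @anki" just works.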

For articles, there are a few kinds of prompts that seem to work. If the article is well-known and in the model's training set, you can simply say:

suggest some anki cards for the article "As We May Think" by Vannevar Bush

Claude 4 responds with something like:

Basic Information Cards

Card 1:

• Front: Who wrote "As We May Think" and when was it published?

• Back: Vannevar Bush, published in The Atlantic Monthly in July 1945

Core Concepts

Card 2:

• Front: What is the "Memex" as described by Vannevar Bush?

• Back: A hypothetical electromechanical device that would store books, records, and communications, allowing users to retrieve and link information through associative trails

...

For newer or less well-known articles, you can paste in the entire article or a URL (if the model supports web browsing). Not every suggested card will be useful; sometimes the model suggests trivia I don't care about, but it's easy to go back and forth with the model to pick the best ones. You can also add some pre-prompt instructions to guide the model toward the kinds of cards that are most helpful, but honestly, it's quick enough to just pick out the good ones.

Once I have the cards I like, it’s a simple instruction to add them to Anki:

now add those cards to @anki

(Raycast uses the @ notation to add an MCP server to a chat, but you can also provide it as a chat preset.)

Raycast also provides a nice feature to wrap commonly used prompts in AI commands. For example, I’ve added an “ankify” command with a prompt like:

from the article in {browser-tab}, suggest some anki cards to help me remember the salient points. when referencing the article use a short version of its title and link to it

This does what it sounds like: it provides the currently focused browser tab as context to the model. I can then check the suggested cards and follow up by telling it to add them to my Anki deck.

I think it's pretty neat that thanks to MCP we can now wire up these powerful capabilities to our LLMs in a matter of minutes.


New Things

Big changes happening in my life. After almost a decade, today was my last day working at Shape Security / F5.

It was a really good run. I had the chance to work with incredible folks building wild things on the web, and I think we made it a little safer and more convenient for everyone.

Now it's time for something completely different. Not entirely sure what that is yet but I'll figure out something fun soon.


Large AI Models Are Cultural and Social Technologies

Large AI Models Are Cultural and Social Technologies. The way LLMs are often framed is as "intelligent agents" but maybe this is the wrong framing. Rather we should view them as "cultural and social technologies" (like the printing press or markets) that are "allowing humans to take advantage of information other humans have accumulated".

Not as snappy or exciting in a science fiction sort of way but more accurate without being dismissive of their impact on society.


Zed Edit Prediction Feature

Zed now predicts your next edit with Zeta, our new open model. I played around with Zed's edit prediction feature today and it worked surprisingly well.

I like this feature because it sits on the spectrum between "just chat with Claude" on the one hand and "describe what Cursor should do" on the other. It augments your process rather than trying to take it over entirely.


Hypermedia Controls - From Feral to Formal

Hypermedia Controls: From Feral to Formal. An interesting paper that tries to locate and formalize a set of core primitives of hypermedia systems as expressed in HTMX. It identifies a "hypermedia control" as consisting of four mechanisms: (1) an element that (2) responds to an event trigger by (3) sending a network request and (4) placing the response at some position in the viewport. By enhancing a hypermedia system with primitives that let you manipulate each of those mechanisms, you can declaratively extend the system with your own hypermedia controls.

An example they give:

<button hx-trigger="click" hx-post="/clicked" hx-target="#output">
  Issue a request
</button>
<output id="output"> </output>

When the user clicks on the button the system will issue a network request to /clicked and place the response in the <output id="output"> element.

This is interesting as far as it goes, but I'm not convinced that the "hypermedia maximalist" approach is really all that great a way to develop systems.