Studio Journal

Welcome

November 1, 2025 · 4 min read · By Kanora
development · ai

If you’ve found your way here, you probably already know what Kanora is and why it exists. This blog isn’t here to restate that story. It’s here to capture the process — the thinking, the experiments, the mistakes, and the decisions that shape something over time.

Kanora isn’t a small or throwaway project. It deals with audio playback, timing, concurrency, background work, and state that has to remain coherent even when users do unexpected things. Most of that work is invisible when it’s done well, and painfully obvious when it isn’t.

That’s partly why I wanted a place to write this stuff down.

Over time, another thread has become impossible to ignore: AI tooling.

I’m using tools like ChatGPT, Claude, and Codex every day while working on Kanora. Not to “generate an app”, and not to replace engineering judgement, but to help me reason about a codebase that’s grown complex enough that no single change exists in isolation anymore.

Sometimes those tools are incredibly useful. Sometimes they’re confidently wrong. Learning how to tell the difference — and how to shape prompts so the output is actually usable — has become just as important as writing the code itself.

Kanora has turned into a proving ground for that kind of work.

One of the reasons this keeps surfacing is that audio software lives in a very different world to most examples people learn from.

Almost every new framework, library, or tool is introduced with the same kind of application: a to-do list, a notes app, a counter, maybe a simple CRUD interface. Those examples are useful, but they’re also deeply misleading. They rarely deal with background work, real-time constraints, or long-lived state. Nothing terrible happens if something runs a bit late. Nothing overlaps in surprising ways.

Audio doesn’t work like that.

Playback has to run continuously, often on background threads, while the UI stays responsive. State has to remain coherent while tracks change, devices appear and disappear, metadata updates in the background, or the user hammers play, pause, and skip in quick succession. When something goes wrong, it’s not abstract — you hear it. You feel it. The app stutters, glitches, or falls apart in ways that are hard to ignore.

Trying to build something closer to iTunes than a to-do app exposes all of this very quickly.
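To make the difference concrete, here's a minimal sketch of the kind of state problem involved. Kanora's actual stack and API aren't described in this post, so the example assumes Swift with an actor owning playback state; names like PlaybackEngine and skip(to:) are illustrative, not Kanora's real code.

    // A minimal sketch of keeping playback state coherent when commands
    // arrive faster than the engine can act on them. Swift concurrency
    // assumed; PlaybackEngine and its methods are hypothetical names.
    enum PlaybackState {
        case stopped
        case playing(trackID: String)
        case paused(trackID: String)
    }

    actor PlaybackEngine {
        private(set) var state: PlaybackState = .stopped

        // Every command funnels through the actor, so rapid play/pause/skip
        // from the UI can never interleave and leave the state half-updated.
        func play(trackID: String) {
            state = .playing(trackID: trackID)
            // Real playback work (decoding, routing, audio session) would start here.
        }

        func pause() {
            if case .playing(let trackID) = state {
                state = .paused(trackID: trackID)
            }
        }

        func skip(to nextTrackID: String) {
            // Skipping always resolves to a single well-defined state,
            // even if the user hammers the button.
            state = .playing(trackID: nextTrackID)
        }
    }

The specific shape matters less than the idea: every state transition has one owner, which is exactly the kind of boundary a to-do-list example never forces you to draw.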

That’s where AI tooling becomes both powerful and risky. These tools are great at small, contained problems. They’re far less reliable when you ask them to reason about real systems with concurrency, history, and edge cases baked in. Used well, they help surface problems earlier. Used carelessly, they help you dig deeper holes faster.

In practice, AI has become part of my development loop in ways I didn’t expect. It helps me:

  • step back and ask what is actually happening, not what I assume is happening
  • trace real execution paths through the code
  • surface threading and state-management issues early
  • turn vague discomfort into concrete, actionable work
  • generate prompts that drive other tools more effectively

But it also forces discipline. Audio software is unforgiving. If something is wrong, you hear it immediately. There’s no hiding behind abstractions or “good enough” demos.

You’ll see posts here where I deliberately stop adding features and spend time fixing foundations. Posts where the most valuable outcome isn’t new code at all, but a better mental model, a cleaner concurrency boundary, or a set of guardrails that prevent the same mistake from happening twice.

This blog isn’t a changelog or a marketing feed. It’s closer to a lab notebook — a place to write down what I’m learning while building something that’s meant to last.

If you’re interested in what AI-assisted development looks like once the novelty wears off — when you’re working on real software, with real constraints and real consequences — this is where I’ll be writing about it.

More soon.