Studio Journal
From HALC Overload to an Ordered Threading Backlog (With Help From AI)
This one started with a log line I couldn’t ignore:
HALC_ProxyIOContext.cpp:1623 HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
That’s Core Audio politely telling you: I didn’t get CPU time when I needed it.
And in a playback app, “skipping a cycle” is exactly how you end up with those annoying glitches that only show up when you’re doing other stuff at the same time — scrolling, editing metadata, loading views, running a scan, switching destinations, etc.
The context: this wasn’t “evolution”, it was me messing around
In my previous post about taming AI, I talked about how I’ve been using AI tools to explore features fast — sometimes too fast: Taming AI
That matters here, because this wasn’t a neat, linear “the architecture evolved over time” story.
This was more like:
- “Let’s try AirPlay”
- “Let’s try DLNA”
- “What if we add recording?”
- “What if we do streaming destinations?”
- “What if stats update live?”
- “What if we add editing views that query playback state?”
- “What if… what if… what if…”
Exploration is fun. It’s also how you end up with subtle concurrency damage if you don’t periodically stop and re-baseline.
The HALC overload warning was basically my re-baseline alarm.
The suspicion: playback is being starved by UI work (and friends)
This log line usually points to contention rather than a single broken function. In plain terms: something is hogging CPU time (or blocking something important), and audio doesn’t get scheduled reliably.
In an app like Kanora, that can mean:
- heavy work on the main actor
- multiple timers firing too frequently
- background work competing for disk or CPU at the wrong time
- racy start/stop flows that create extra work
- duplicated playback pipelines doing overlapping “helpful” things
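Suspicions like "heavy work on the main actor" are cheap to spot-check before a full audit. Here's a minimal sketch of a debug-only thread check; the helper name and shape are mine, not Kanora's:

```swift
import Foundation

// Hypothetical debug helper (not from Kanora): reports whether the
// calling thread matches the expectation, and logs a violation in
// DEBUG builds instead of crashing.
@discardableResult
func checkThread(expectMain: Bool, label: String) -> Bool {
    let ok = Thread.isMainThread == expectMain
    #if DEBUG
    if !ok {
        print("threading violation in \(label): expected main=\(expectMain)")
    }
    #endif
    return ok
}
```

When you want a hard guarantee rather than a log line, libdispatch's `dispatchPrecondition(condition: .onQueue(.main))` traps on violation; the softer variant above is handy while you're still mapping where the violations are.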
At that point, I didn’t want guesswork. I wanted a proper threading audit.
AI intervention #1: use ChatGPT to craft a Codex audit prompt
I’m going to be blunt: most prompts people write for codebase audits are rubbish. They’re either too vague (“check threading pls”) or they ask for “best practices” instead of what this code is actually doing right now.
So I did what I’ve started doing more often:
- ask ChatGPT to produce a tight, structured prompt
- feed that prompt to Codex
- make Codex do the tedious work
Here’s the prompt I ran, in full:
Codex Prompt: Kanora Threading & Audio Concurrency Audit
You are working in the Kanora repository. Your task is to audit the current architecture and implementation to determine what is actually happening with threads/queues/actors during:
1) audio playback (local + DLNA if applicable)
2) track transitions (end-of-track → next track)
3) progress reporting / timers
4) library operations that might overlap playback (scans, metadata edits, artwork extraction, waveform analysis, transcoding)
Goals
• Identify all thread/queue usage in the codebase and how they interact.
• Determine where we are accidentally doing heavy work on main or using the wrong queue.
• Determine where we have race conditions or re-entrancy risks in playback state.
• Propose a single, coherent concurrency model that is pragmatic and safe.
Non-negotiables
• UI updates must be on main.
• No blocking / locking / allocations / file I/O inside real-time audio callbacks.
• Playback state must be owned by one concurrency domain (single serial queue OR a dedicated actor).
• The solution must be implementable without rewriting the entire app.
Step 1 — Map the architecture as it exists (with file paths)
Search the repo and report findings with exact file paths + symbols.
Produce:
• A short architecture diagram in Mermaid.
• A “Concurrency Inventory” table.
Step 2 — Trace real execution flows
A) User taps Play
B) Progress updates
C) Track ends → auto-advance
D) Rapid user actions
Step 3 — Identify specific problems
Return concrete issues with evidence and fix strategies.
Step 4 — Propose a Kanora concurrency model
Option 1: Single Serial Playback Queue
Option 2: Playback Actor
Step 5 — Implementation plan (small PRs)
Step 6 — Add guardrails
Output format
1) Repo Threading Inventory
2) Flow Traces
3) Issues Found
4) Recommended Model
5) Implementation Plan
6) Guardrails & Docs
The important thing here isn’t the wording — it’s the shape:
• it forces file paths and symbols
• it forces flow traces
• it forces evidence-backed risks
• it forces a recommended model
• it forces a PR-sized execution plan
Codex can’t really veer off when the prompt is built like this.
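To make the "single concurrency domain" non-negotiable concrete: the idea is that exactly one actor (or serial queue) owns playback state, and everything else reads it asynchronously. A hedged sketch, with names that are illustrative rather than Kanora's real types:

```swift
import Foundation

// Illustrative sketch of the "one concurrency domain owns playback
// state" rule: all mutation goes through a single actor, so there is
// no path for two pipelines to race on the same state.
enum PlaybackState: Equatable {
    case idle
    case playing(trackID: String)
}

actor PlaybackActor {
    private(set) var state: PlaybackState = .idle

    func play(trackID: String) {
        // Any heavy loading would be awaited here, off the main actor.
        state = .playing(trackID: trackID)
    }

    func stop() {
        state = .idle
    }
}
```

UI code then reads the state with `await`, which means every transition is serialized by the actor's executor instead of being whoever-touched-it-last.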
What Codex found
The resulting report was verbose (as expected), but the key findings were clear:
- playback is split between two systems: a @MainActor PlaybackController and a legacy AudioPlayerService
- both are constructed in ServiceContainer, so both pipelines can be active
- now-playing state exists in multiple layers
- shared mutable state exists without protection
- some heavy work can run on the main actor: synchronous track loading, and DLNA prep that can involve ffmpeg
None of that guarantees glitches — but it absolutely explains why I can hit bad states under stress.
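For the "shared mutable state without protection" finding, the smallest fix is Option 1 from the prompt: funnel the state through one serial queue. A hedged sketch, using stand-in types rather than Kanora's real ones:

```swift
import Foundation

// Sketch of the serial-queue option: a stand-in now-playing store
// where every read and write goes through one serial queue, so
// concurrent callers can never observe a torn or stale update.
// Type, label, and property names are illustrative.
final class NowPlayingStore {
    private let queue = DispatchQueue(label: "example.kanora.playback-state")
    private var trackID: String?

    func update(trackID: String?) {
        queue.sync { self.trackID = trackID }
    }

    func current() -> String? {
        queue.sync { trackID }
    }
}
```

`queue.sync` from callers is fine as long as the queue's work stays tiny; if writes could ever be slow, making `update` use `queue.async` keeps callers unblocked at the cost of eventual consistency.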
AI intervention #2: summarise the report into an actual plan
Next problem: I didn’t want to re-read a long audit every time I sat down to work.
So I asked ChatGPT to summarise Codex’s output into something I could reason about and act on. The conclusion was obvious: these findings needed to become real tickets.
AI intervention #3: generate GitHub issues from the report
Instead of manually writing issues, I asked ChatGPT to generate a second prompt, this time instructing Codex to:
- extract work items
- create an Epic
- create small, shippable issues
- label and prioritise them
- add dependencies
- create everything via gh issue create
Here’s the ticket-creation prompt:
Codex Prompt: Turn Threading Audit Report into Ordered GitHub Issues
You are working in the Kanora repository. Turn the audit report into:
• an Epic
• P0/P1/P2 issues
• labels
• Blocks / Blocked by relationships
• GitHub issues created via the gh CLI
Required issues:
- Epic: Unify playback concurrency model (PlaybackActor)
- Debug threading diagnostics
- Eliminate dual playback pipelines
- Serialize AudioStreamCoordinator
- Remove main-thread blocking file I/O
- Fix progress reporting
- Race-free auto-advance
- DLNA isolation + cancellation
- Guardrails & docs
I’m never going to spend time writing tickets on a passion project — that’s work behaviour. But tickets are useful, and AI + the GitHub CLI removes almost all the friction.
The resulting backlog
Codex produced a clean execution sequence:
- #141 Add debug threading diagnostics and assertions (P1)
- #142 Eliminate dual playback pipelines via PlaybackActor (P0)
- #143 Serialize AudioStreamCoordinator and buffer routing (P0)
- #144 Remove main-thread blocking file I/O (P0)
- #145 Fix scheduler-correct progress reporting (P1)
- #146 Make end-of-track auto-advance race-free (P0)
- #147 Isolate DLNA playback service with proper cancellation (P0)
- #148 Add threading guardrails and documentation (P2)
- #149 Unify playback concurrency model (Epic)
Ordered. Labeled. Ready to execute.
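As a flavour of what #146 involves: the classic auto-advance race is an end-of-track callback firing for a track the user has already skipped away from. One common pattern (my sketch, not Kanora's code) is a generation token that invalidates stale completions:

```swift
// Sketch of a generation-token guard against stale end-of-track
// callbacks. Meant to be called from a single concurrency domain
// (e.g. the playback actor); names are illustrative.
final class AutoAdvanceGuard {
    private var generation = 0

    // Starting a new track invalidates callbacks from the previous one.
    func beginNewTrack() -> Int {
        generation += 1
        return generation
    }

    // Only auto-advance if the completion matches the current track.
    func shouldAdvance(for token: Int) -> Bool {
        token == generation
    }
}
```

The point isn't the three lines of code; it's that the check and the increment live in the same concurrency domain, so "track ended" and "user skipped" can never interleave badly.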
Back to boring, disciplined engineering
This workflow is working for me.
This isn’t “AI wrote my app”. It’s AI doing the boring legwork so I don’t have to.
The architectural instincts were still mine. The HALC warning still came from the system. AI just made it possible to fix the problem properly instead of shelving the project — which is what I’ve done more times than I care to admit.
Now that the backlog exists, it’s back to single-issue engineering:
- pick the next ticket
- implement
- run verify/tests
- merge
- repeat
And hopefully, the next time Core Audio is tempted to skip a cycle… it won’t have to.