How I Actually Use AI Coding Tools
Posted 1 April 2026
As I apply for new roles, I keep getting asked the same question: how do I use AI coding tools? It comes up in almost every conversation now - and each time, I find myself explaining the same framework. So I figured I'd document it.
This is a snapshot of how I use these tools today. I fully expect it to change as the technology advances - it already looks different from six months ago. But right now, my usage has settled into four distinct modes, and I think the distinction matters more than most people realise. Prompting an AI to build you a throwaway script is a fundamentally different activity from using it to add features to a codebase you know inside out, and treating them the same is where people get into trouble.
Some context: I've been on a career break since mid-2025, so most of my usage has been on personal projects rather than production systems. That's meant less pressure, more room to experiment, and a front-row seat to watch these tools go from promising-but-clunky to genuinely good. I use Claude Code and OpenAI's Codex as my primary tools, paying for both but not on the expensive Pro or Max tiers. I don't have the budget to burn through tokens the way some of the "vibe coding" crowd seem to. In practice, I bounce between the two - partly because they have different strengths, but honestly, often just to work around rate limits.
Mode 1: Accelerating What I Already Know
When I'm working in PHP and Laravel - my bread and butter - I'm not asking the AI to figure out how to build something. I already know how. The AI is there to make the mechanical parts faster.
I've been dropping back into pre-existing codebases with a coding agent to backfill tests, bring code into line with best practices, refactor, add documentation, and build new features. My personal blog runs on a custom CMS, and I've also rebuilt a micro-blog system from an earlier project. These are codebases I understand deeply, and that understanding is what makes the AI useful rather than dangerous.
Over time, I've built up a set of custom commands and skills to streamline this work. For example, I have a /readme command that appraises the codebase and updates the README with any changes. These aren't anything fancy - skills are just prompts - but I've built them iteratively as patterns emerge. If I find myself repeating a prompt, I convert it into a skill, then refine it over time as I get a better feel for what the models respond well to. It's the same instinct as any developer abstracting a repeated pattern. The tooling just happens to be natural language.
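To make that concrete: a command like /readme is nothing more than a markdown file containing the prompt. In Claude Code, project-level commands live under .claude/commands/ - the sketch below is a simplified illustration, not my actual file:

```markdown
<!-- .claude/commands/readme.md — invoked as /readme -->
Review the current state of this codebase and compare it against README.md.
Update the README to reflect any features, commands, or configuration that
have changed since it was last touched. Keep the existing tone and
structure, and do not rewrite sections that are still accurate.
```

That's the whole artefact. Refining a skill just means editing the prose.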
In this mode, my existing expertise acts as a quality filter. I can spot when the AI suggests something that's technically correct but idiomatically wrong, or when it's taken a shortcut that'll cause problems later. This is where I think experienced developers get the most leverage - not by handing over control, but by accelerating the parts of coding that were already mechanical.
Mode 2: Learning With a Safety Net
I've been building a SwiftUI app for managing household chores - complete with iCloud sync and collaboration. It's a personal project, but I've approached it with the rigour of a production app, and I'll be publishing it to the App Store shortly.
Swift and SwiftUI are not a stack I have deep muscle memory in, and that changes the dynamic completely. The AI handles the syntax and platform-specific idioms I haven't internalised yet, while I focus on architecture and logic - things that transfer across languages and frameworks.
But I'm reading every line. I'm asking why it chose one pattern over another. I'm looking up constructs I don't recognise. The AI is essentially a patient tutor who also writes working code, and the code itself becomes a learning artefact. This is genuinely faster than tutorials and documentation alone, because I'm learning in the context of the thing I actually want to build, not a toy example.
The tradeoff is speed. This mode is slower than Mode 1 because I'm stopping to understand, not just shipping. But the goal isn't speed - it's competence. The specs I write in this mode tend to be very clear and focused, which helps keep the AI on track and gives me a solid basis for evaluating what it produces.
Mode 3: Disposable Tools
Sometimes I need a tool and I genuinely don't care about the code.
I built a CLI tool in Swift to export my Photos library into a specific folder structure - different from what Apple's built-in export gives you. I wrote a Python script to upload my entire Instagram history to my micro-blog. In both cases, I gave the AI maximum freedom. It picked the approach, the libraries, whatever it wanted. I ran the result, checked the output, and moved on.
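The Instagram script, for what it's worth, was roughly this shape. This is a sketch rather than the real thing: the export field names (media, uri, creation_timestamp, title) are from memory of Instagram's JSON archive, and the micro-blog endpoint is a placeholder - check both against your own setup.

```python
import json
import urllib.request

API_URL = "https://example.com/api/posts"  # placeholder micro-blog endpoint


def export_to_posts(entries: list[dict]) -> list[dict]:
    """Flatten Instagram archive entries into simple post dicts.

    Field names follow the export format as I remember it -
    verify them against your own archive first.
    """
    posts = []
    for entry in entries:
        for media in entry.get("media", []):
            posts.append({
                # Captions can live on the media item or the parent entry.
                "text": media.get("title") or entry.get("title", ""),
                "image": media["uri"],
                "timestamp": media["creation_timestamp"],
            })
    return posts


def publish(post: dict) -> None:
    """Send one post to the micro-blog API as JSON."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(post).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    # posts_1.json is one file from the Instagram data export.
    with open("posts_1.json") as f:
        for post in export_to_posts(json.load(f)):
            publish(post)
```

The point isn't the code - it's that for a tool like this, the AI wrote the whole thing and I only checked the output.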
This is the mode where AI coding tools deliver the most obvious, immediate value. Things that would have taken half an hour of documentation-reading and boilerplate now take a few minutes of conversation. The ROI is hard to argue with.
It's also the least interesting mode to talk about, because it doesn't require much skill or judgment. The AI handles it, it works (or you tell it what's wrong and it fixes it), and you get on with your day.
Mode 4: Prototypes
This sits somewhere between Modes 2 and 3. I care about the code more than a throwaway tool, but I'm less sure about the approach. These are experiments that might graduate into Mode 1 projects if they prove out.
I built a RAG-based chatbot using Confluence documents as a proof-of-concept for my wife to use at work. I've also built a simple SwiftUI app as a mobile companion for my micro-blog platform. Neither of these is production-ready, but they're more than disposable - they're me testing whether an idea has legs before committing to building it properly.
What I've Learned Along the Way
Iterative beats one-shot, every time. With the exception of genuinely disposable tools, I don't try to generate entire features in a single prompt. Even with a comprehensive spec and the most capable model available, one-shotting tends to produce code that looks right but isn't quite. The app appears to work, but when you dig into the detail, specified functionality is missing or subtly wrong. The model doesn't hallucinate - it just quietly doesn't do the thing. That's a harder failure mode to catch than an obvious error, and it's why I prefer building iteratively, the same way I would by hand. Catch issues early, maintain quality, and keep a handle on token usage in the process.
A fresh set of eyes matters - even when those eyes are artificial. Beyond balancing token usage, I switch between Claude Code and Codex when one model gets stuck on a problem. A concrete example: my SwiftUI chores app worked perfectly in the development environment, but when I pushed to TestFlight - which hits the production database - nothing synced. Codex went round in circles trying different fixes. I switched to Claude, which spotted immediately that the development environment was hardcoded. A fresh context, without the history of failed attempts, made the difference.
Different tools have different strengths. For the kind of work I do, Codex tends to be better at big-picture coding - scaffolding, broad strokes, getting a feature stood up. Claude is stronger in the details - spotting subtle bugs, understanding nuanced requirements, getting the specifics right. Having both available means I'm rarely completely stuck.
Model choice matters more than people think. Rather than defaulting to the most capable model for every prompt, I'll use a lighter model for simpler tasks - refactoring, renaming, documentation, generating commit messages. It saves tokens, it's faster, and for mechanical tasks the output is just as good. Treating model selection as a conscious decision rather than always reaching for the top shelf is one of the easiest ways to get more out of these tools on a budget.
Your expertise is the multiplier. This is the thing I keep coming back to. The better you already understand what you're building, the more value you extract from these tools. They're an amplifier. In Mode 1, my deep knowledge of Laravel means I can move fast and catch mistakes. In Mode 2, my general software engineering experience lets me evaluate SwiftUI code I couldn't have written from scratch. Even in Mode 3, knowing what the output should look like means I can validate it quickly. The tools are powerful, but the judgment is yours.
There are thousands of blog posts about AI coding tools, written by people with different budgets, different experience levels, and different appetites for letting the AI drive. This is just how I've found myself working - four modes, each with its own balance of control and delegation. Your mileage will vary. But if there's one thing I'd suggest, it's being deliberate about which mode you're in. The tool doesn't change. How you use it does.