Alex Goldhoorn


From Autocomplete to AI Agents


My personal journey through the eras — years reflect when I adopted each tool, not when it launched.

| My year | Tool | The leap | How I worked |
|---|---|---|---|
| ~1990s | QBasic / Turbo Pascal (self-taught, no IDE) | No tooling — type every character by hand | Type. Single-letter names, everything in one script. |
| 2001 | IntelliJ IDEA (launched Jan 2001) | Language-native intelligence; refactoring as a first-class feature; long names finally free | Write. IDE handles imports, boilerplate, and renames. |
| 2022 | GitHub Copilot (launched Jun 2021) | Completion moves from syntax to intent — write a comment, get a function body | Describe intent, verify output. Model fills in the detail. |
| 2025 | Cursor (launched 2023) | LLM reads your file and makes the edit — you review a diff, not a suggestion | Describe the change. Read and accept or reject the diff. |
| 2026 | Claude Code (launched Mar 2025) | Agent reads, plans, edits, runs checks, iterates — you come back to a result | State the goal. Judge whether the result is correct, clean, and appropriate. |

A friend and I taught ourselves QBasic around age 12, making simple moving animations, and later Turbo Pascal from a book. I hadn't had any programming classes yet, so my coding style was bad: single-letter variable names, no functions, everything in one long script. It worked, but it was not readable or maintainable.

That experience taught me something I only understood later: tooling shapes how you write, not just how fast. When you type everything by hand, short names are a survival strategy. I do remember people who held that you were only a "real" programmer if you wrote all code by hand, without the help of an Integrated Development Environment (IDE). But the moment autocomplete arrived, writing logicOrExpression cost nothing. Readable names became free.

The First Real IDE

My first serious Java work was in 2001, during an internship at the University of Groningen — part of my HBO ICT study. I used IntelliJ IDEA, which had just launched that January as an Early Access Program — pre-release builds were always free.

What made IDEA different wasn't just that it had completion — Visual Studio already did. It was that Java intelligence was built in from day one: three completion modes (by name, by expected type, or by class name), refactoring as a first-class feature, imports resolved automatically. You typed list. and it offered every method on that type. It was programmatic: no magic, just a very well-indexed knowledge of the language and your context.

The Copilot Moment

My first real break from that paradigm was GitHub Copilot, which I started using at Glovo around 2022. The jump was qualitative, not just quantitative. It wasn't completing variable names — it was completing thoughts. A large part of "knowing how to code" had always been knowing the API: the exact method name, the argument order, whether it's axis=0 or axis='columns'. Copilot made that friction disappear: it completed the call for you. Better still: write a comment and it drafted the function body. For unit tests it was immediately excellent — it could infer edge cases just from the function signature and name.
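The pandas axis argument is a good example of that friction. A toy illustration (hypothetical data, not from any project mentioned here) of the detail you used to hold in your head:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# axis=0 aggregates down each column: one value per column
col_max = df.max(axis=0)           # a -> 2, b -> 4

# axis="columns" (alias for axis=1) aggregates across each row
row_max = df.max(axis="columns")   # row 0 -> 3, row 1 -> 4
```

Remembering which spelling the API accepts used to be part of the job; with Copilot, the correct call simply appears inline.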

But the model was still reactive. It waited for you to type. You were still the author; it was a very good autocomplete.

The IDE as Chat Interface

At Meight we switched to Cursor around mid-2025. The chat became the primary interface — you stopped writing code and started describing what you wanted changed. The LLM read your file and codebase, understood the context, and made edits. You reviewed the changes.

It felt more like reviewing a junior's code and steering it. The style was often off: too verbose, not reusing existing helpers, solving the general case when only the specific one was needed. Giving that feedback became a real part of the workflow.

A few failure patterns kept recurring: generating new helpers instead of reusing existing ones, taking an indirect solution instead of a straightforward one (sorting a DataFrame to find the max, instead of calling .max()), and adding parameters for flexibility nobody asked for. The output was often correct but not appropriate. I keep a running log of these patterns.
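The DataFrame pattern above, sketched concretely with made-up data: both versions return the right answer, but one does far more work than the question requires.

```python
import pandas as pd

df = pd.DataFrame({"price": [10, 42, 7]})

# Indirect (typical model output): sort the whole frame
# just to read the value off the top row
top_indirect = df.sort_values("price", ascending=False)["price"].iloc[0]

# Direct: ask for the maximum
top_direct = df["price"].max()

assert top_indirect == top_direct == 42
```

Correct, but not appropriate: the sort costs O(n log n) and obscures the intent, while .max() states it in one call. Catching this class of output is exactly the review work described above.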

I've used Claude Code mostly for personal work — this website, side projects, real vibe coding. I use Warp from the terminal, less for code and more for system work: setting up services, configuring my home network, writing bash scripts. Claude and Gemini have also been useful as command-line coding assistants when I don't want to open an IDE at all.

For work I'm more cautious: reviewing the output in more detail and testing more carefully to make sure things actually work as expected before they land in the codebase.

What Changed, Really

Last year was a big learning curve: finding the balance between coding myself, giving good instructions, trusting the output, and knowing when to review carefully. I tried tools across the board — web chat interfaces, IDE integrations (Cursor, VS Code), the console, even tools that review PRs in GitHub and GitLab.

As models improve, I expect to trust the output more and review it less. The skill shifts from writing code to knowing what to ask for — and recognising when the answer is wrong.


Alex Goldhoorn is a freelance Senior Data Scientist. Find more at goldhoorn.net.