Skill Issues and Mirrors
The other day, I was revisiting a somewhat dated Python codebase. It used Python 3.11, and I thought it was time to upgrade to Python 3.14. Let’s keep stuff clean.
I assumed this was the ideal task for Codex: modify a few files here and there and check for any relevant new features/deprecations. Quite dull for me, but a breeze for our mechanical coding friend.
Well, instead of a quick Reddit break, I got a negotiation session: I had to convince Codex that Python 3.14 wasn't in beta! I mean, if it had been released last Friday, maybe I would’ve been more empathetic. But come on, it came out last October!
Maybe Python just isn’t Codex’s forte. So, to switch things up, I headed over to an Astro project to update a few links (perhaps Codex hates brownfield stuff?). I wanted to add a few target="_blank" attributes. Easy money.
My guy went the extra mile: on top of the target attribute, it also added rel="noopener noreferrer". Feeling somewhat humbled (the machine outsmarted me once again), I had to look this up. Well, it turns out that since 2021, browsers treat target="_blank" as implying noopener. So, I got obsolete stuff, just like last time.
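For the record, here’s the gist of the change (the link itself is a placeholder, not one from my project):

```html
<!-- What Codex produced: -->
<a href="https://example.com" target="_blank" rel="noopener noreferrer">Example</a>

<!-- What's actually needed today: browsers have implied rel="noopener" on
     target="_blank" links since around 2021, so this is enough (noreferrer
     only matters if you also want to suppress the Referer header): -->
<a href="https://example.com" target="_blank">Example</a>
```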
I have a few concerns about cases like these:
- Do those who do Shipping at Inference Speed let such things slip? The issues here are small, but the idea that I’m shipping MUCH worse code than I could’ve produced manually tears me apart.
- Is there a snowball effect at play here? If I let rel="noopener noreferrer" into my codebase, is the AI gonna treat that as a codebase convention and replicate it elsewhere?
Geoffrey Huntley has an excellent piece titled LLMs are mirrors of operator skill. Building on that idea, I believe the quality of the output a language model produces (not just the way it is operated) is just as much a reflection of the developer’s craftsmanship. Just because we can “generate” code much faster than before doesn’t mean we can let quality slip.