
A bridge too far for vibe coding

I developed a web app this week, despite a lack of qualifications.

To be fair, it is a simple app: a form that collects a few fields of data, uploads a PDF, and updates a Google Sheet. Oh, and it posts a Slack notification and uses a WordPress plugin to display the Sheet data: Local News Impact Consortium

I came to the project with vanishingly little skill in Python but enough to develop some web scraping scripts, install Linux on a Raspberry Pi, and host my own Ghost instance (this blog) on a shared server. I have probably coded each of the individual elements of the app for various past projects. So, not a noob but far from an expert.

Most of my past success has been through brute force effort and a lot of Stack Overflow. But this time, I used GitHub Copilot, with Anthropic's Claude Sonnet 4 AI model in Agent Mode. In effect, I asked the AI questions or gave it directions, and it created files, wrote code, and asked me to accept or reject its solutions, sometimes line-by-line.

And so the app has several features I have never used before and probably would have avoided without the AI's support. For instance, it runs Flask, is authenticated by Google OAuth, runs in a Docker container on Google Cloud Run, and is automatically built and deployed there by a GitHub Actions CI/CD workflow that triggers when I push updated code to the repo.
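That kind of push-to-deploy pipeline usually amounts to one short workflow file in the repo. A hedged sketch of what it might look like: the service name, branch, and secret name below are placeholders, not the actual configuration.

```yaml
# .github/workflows/deploy.yml -- hypothetical sketch, not the actual workflow.
# On every push to main: check out the code, authenticate to Google Cloud,
# then build and deploy the container to Cloud Run.
name: Deploy to Cloud Run
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}  # placeholder secret name
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: my-form-app   # placeholder service name
          source: .              # build from source via Cloud Build
```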

Not to mention, the dizzying number of secret keys, service accounts, client IDs, .yml files, and API permission scopes required to make all of those systems integrate seamlessly is why actual skill and experience are usually required. But the power of AI is that it allowed me to have just a bit of skill in one part of the project and, with minimal guidance and intervention, traverse up and down the tech stack to produce a complete solution.

By "a bit of skill," I mean I have limited experience with Flask apps, but I know the components: Python code, HTML templates, and routes that determine which screens to load and when. And I have limited experience with cloud services, but I know enough to set up API keys and environment variables, and to check the logs for errors. The AI is just good enough to do most things right and to be a very helpful troubleshooting tool. Often, I would copy/paste an error message into Copilot and ask it to diagnose and fix it.
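Those components fit together in just a few lines. A minimal sketch of the kind of Flask app described here, with one route that serves the form and one that accepts the submission; the route paths and field names are hypothetical illustrations, not the app's real code.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # In a real app this would be render_template("form.html");
    # inlined here to keep the sketch self-contained.
    return "<form method='post' action='/submit'>...</form>"

@app.route("/submit", methods=["POST"])
def submit():
    # Pull one field from the posted form. A real handler would also
    # save the uploaded PDF, append a row to the Google Sheet, and
    # post the Slack notification.
    outlet = request.form.get("outlet", "")
    return f"Received submission from {outlet}", 200
```

The route decorators are the part that "knows which screens to load and when": each URL maps to one Python function, and the templates supply the HTML.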

But AI pair programming operates in an uncanny valley of intelligence: it can write a fully functional shell of a Flask app in 30 seconds, create a set of templates and functionality, and have it deployed in minutes. But when the AI runs into a simple permission error on the web server, it is apt to write a hundred lines of code and jump through hoops to work around the constraint rather than ask if you (the human) can fix the permissions. And so, not infrequently, the AI would go on flights of fancy, developing unnecessary functions that I was barely wise enough to recognize and stop.

So - how much did I miss? And the real question: How much do you need to know to use AI?

The Dreyfus Model of Skill Acquisition has some answers. The framework suggests human learners pass through five stages of personal development:

  1. Novice
  2. Advanced Beginner
  3. Competence
  4. Proficiency
  5. Expertise

As we gain skill in a specific domain, we depend less on explicit rules and more on intuition and a holistic understanding of the system or process, moving from browsing Stack Overflow to a more natural understanding of "how things work." Where a novice relies on following directions, an expert recognizes subtle patterns and signals and reacts instinctively.

When it comes to AI pair programming (ok, vibe coding), artificial intelligence can level up your skills. But within limits. For a simple project, a Novice can produce a competent app - as I did. By the same token, a Proficient developer could use the tools to achieve results beyond their usual reach. But it won't turn a Novice into an Expert - the gap is too great for AI to safely build that bridge.