From Vibe Coder to AI-Assisted Architect

Some claim that AI will soon replace us because it can code better than we can.
Already, the current models of AI, not necessarily the ones that you purchase
or that you just get through the retail ChatGPT, but the more advanced models
that are available now to companies, they can code better than, let’s call it
60%-70% of coders.
— Barack Obama
If you believe that, it's better to stop reading this article. Don't waste your time. AI is a useful assistant that helps us work much faster — true. But it’s not a replacement. Of course, the way we work has changed. We need fewer people focused solely on typing out code without understanding the bigger picture. Instead, we need more people who can design systems, think at a higher level about components, understand system architecture, follow programming best practices, and still be able to fix deep issues in AI-generated code.
I’m sure you’ve already heard of vibe coders:
Someone who generates code by repeatedly prompting a large language model,
and has a very limited awareness of how the code actually functions.
This new approach — with its endless loop of
prompt → generation → testing & refinement
— wastes a lot of time, patience, nerves, and money if used with "very limited awareness." I got stuck in this loop myself. It just doesn’t work. However, I’ve been coding effectively with AI tools for a while now and seem to have found a more concise way to program with AI — one that doesn’t lead to frustration.
So, I’d like to share several ideas that I use regularly and that work in most cases. I hope they’ll save you time and money — and definitely speed up your development process. But this requires much more focus and — surprise, surprise — programming skills.
Idea
To keep things simple, I’ll explain the entire process using a small example in Go. Let’s begin with the idea behind the issue or task — whatever it may be. Suppose we need to parse a YAML configuration file that contains some information about an AI model we’re going to use in our application.
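For illustration, such a config file might look like this (the keys here are just the ones we will reuse in the tests below):

default-model: 4o
models:
  4o:
    model-id: gpt-4o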
As a developer, you recognise that you’ll need something capable of retrieving specific values from this config file. Let’s express that through an interface:
type Config interface {
    GetModel() (string, error)
}
Notice that there’s no AI involved yet (though you could ask the AI to generate the interface as well). There’s no implementation either. The key point is to clearly define what you want to achieve in the end. This is the best way to explain the task formally. In my experience, the AI understands such definitions much better than plain English.
Test
Now, instead of diving into implementation details straight away, we can use an ancient technique — Test-Driven Development (TDD). Surprisingly, this approach helps in two important ways. First, it gives us a test that will validate our future implementation. Second, it provides a clear hint to the AI about what we want to implement.
Most well-known AI tools can help generate such tests. For example, in aider, you can use the /add config.go
command to add our interface, and then prompt:
Generate unit tests for a Config interface. Assume the implementation will load configuration from a YAML file. The tests should simulate YAML-based input and validate that the interface handles it correctly.
(In Copilot, you can use the /tests
command for the same purpose — no prompt required.)
As a result, the AI will generate something like this:
const data = `
default-model: 4o
models:
  4o:
    model-id: gpt-4o
`

func TestGetModel(t *testing.T) {
    tmp, _ := os.MkdirTemp("", "configtest")
    defer os.RemoveAll(tmp)

    path := tmp + "/config.yml"
    os.WriteFile(path, []byte(data), 0644)

    config := YamlConf(path)
    model, err := config.GetModel()

    if err != nil {
        t.Errorf("Expected no error, got %v", err)
    }
    if model != "gpt-4o" {
        t.Errorf("Expected model ID 'gpt-4o', got %v", model)
    }
}
Of course, the compiler will warn us that the YamlConf
implementation is missing. We can generate an empty implementation using IDE capabilities, write it manually, or ask the AI to create a skeleton for us.
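A minimal placeholder might look something like this — just enough to satisfy the compiler, with no real logic yet:

// Skeleton only: makes the tests compile; the real implementation comes later.
type YamlConfig struct{}

func YamlConf(filepath string) *YamlConfig {
    return &YamlConfig{}
}

func (c *YamlConfig) GetModel() (string, error) {
    return "", nil
}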
Again, we don’t have a real implementation yet — just tests and a placeholder. At this point, we need to review all the generated test cases. We might adjust our expectations, tweak the constructor, or modify test values. By doing this, we provide the AI with as many concrete clues as possible.
Implementation
Now we’re ready to ask the AI to implement the interface. This part is simple. All we need to do is mention both the interface and the tests in the AI prompt and request an implementation.
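In aider, for example, that could be as simple as adding both files with /add config.go config_test.go and prompting something along the lines of:
Implement the Config interface defined in config.go so that the tests in config_test.go pass. Load the configuration from a YAML file.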
And voilà — we get a simple but functional result:
type YamlConfig struct {
    DefaultModel string                       `yaml:"default-model"`
    Models       map[string]map[string]string `yaml:"models"`
}

func YamlConf(filepath string) *YamlConfig {
    configData, err := os.ReadFile(filepath)
    if err != nil {
        panic(err)
    }
    var config YamlConfig
    err = yaml.Unmarshal(configData, &config)
    if err != nil {
        panic(err)
    }
    return &config
}

func (c *YamlConfig) GetModel() (string, error) {
    model := c.DefaultModel
    return c.Models[model]["model-id"], nil
}
As always, you need to review the generated code. Think of yourself as a code reviewer checking the work of a junior developer: the AI can assist, but you must ensure the implementation is correct. Moreover, since we are generating code in small and stable chunks, the review process is fairly straightforward.
During the review, you might notice that some tests fail or that the code has flaws like duplication or inefficiencies that could be improved. In most cases, these fixes are minor and quicker to apply manually. However, nothing stops you from asking the AI for help. For example, you can simply paste the test failure message into the prompt — no extra wording required. It will understand you based on the previous context.
Code Quality
As you may have noticed — or perhaps not — there are a few flaws in our generated code. Actually, I didn’t spot them myself at first:
config/yamlconfig_test.go:97:17: Error return value of `os.WriteFile` is not checked (errcheck)
os.WriteFile(path, []byte(data), 0644)
^
I noticed this issue only after running a linter. Sometimes we’re not familiar with the tricky edge cases or conventions of a particular language. A convenient way to catch and fix such issues — including formatting, styling, and even some security problems — is to use linters. For example, you can use golangci-lint for Go, ESLint for JavaScript, and Pylint for Python. In fact, almost every widely used programming language has its own linter or code quality tool. Linters are especially helpful when generating code with AI — they help keep your code clean and safe, at least to some degree.
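With golangci-lint, for instance, a check over the whole module is usually as simple as:

golangci-lint run ./...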
Once again, we can either fix these issues manually or ask the AI to do it for us. There’s no need for a detailed prompt — just copy and paste the warning message. The AI will understand it. aider
even has a special /lint
command for this exact scenario.
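If you fix this one by hand, the change is a one-liner — just check the error that os.WriteFile returns (failing the test with t.Fatalf is one reasonable choice):

if err := os.WriteFile(path, []byte(data), 0644); err != nil {
    t.Fatalf("Failed to write test config: %v", err)
}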
By the way, there’s another important aspect of code quality to consider: conventions and preferred libraries. Some teams — or you, personally — may follow specific practices. For example, instead of checking values in tests with plain Go if
statements:
if err != nil {
    t.Errorf("Expected no error, got %v", err)
}
if model != "gpt-4o" {
    t.Errorf("Expected model ID 'gpt-4o', got %v", model)
}
I prefer to use the Go library testify, which significantly simplifies the code:
assert.NoError(t, err, "Error should be nil")
assert.Equal(t, "gpt-4o", model, "Model ID should match")
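Put together, the earlier test might then read roughly like this (a sketch assuming the assert and require packages from testify are imported and the data constant from before is in scope):

func TestGetModel(t *testing.T) {
    tmp, err := os.MkdirTemp("", "configtest")
    require.NoError(t, err, "Temp dir should be created")
    defer os.RemoveAll(tmp)

    // require aborts the test on a failed setup step; assert reports and continues.
    path := tmp + "/config.yml"
    require.NoError(t, os.WriteFile(path, []byte(data), 0644), "Config file should be written")

    config := YamlConf(path)
    model, err := config.GetModel()

    assert.NoError(t, err, "Error should be nil")
    assert.Equal(t, "gpt-4o", model, "Model ID should match")
}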
Of course, we could rewrite our tests to use testify, but doing so every time we generate a test would be tedious. And unnecessary — this problem is already solved.
You can define a configuration file (called conventions in aider, or rules in Cursor) to instruct the AI on your preferred coding practices:
- Use the `testify` library (`github.com/stretchr/testify`) for all Go tests.
- Prefer `assert` and `require` from `testify` over Go's standard `testing` assertions.
This removes the burden of constantly rewriting and rephrasing the same prompts.
Do We Really Need to Write Tests First?
At this point, we've already discussed the core development loop:
interface → test → implementation → code quality
But do we really need to write the test first? Although I personally believe it's the fastest way to reach working code, you can adjust the process to suit your needs. For example, you might ask the AI to generate an implementation skeleton before writing the test — this helps avoid compiler errors early on:
interface → implementation (empty) → test → implementation (real) → code quality
For some people, this approach may work better. In reality, the key principle behind all of these steps is to keep the context you're working with as small as possible — no matter which steps you put first. Don’t add the entire application to the AI’s context, and don’t try to solve the entire problem at once — that’s a path to nowhere. You’ll likely spend more time fixing large blocks of AI-generated code than if you’d built it up gradually, piece by piece.
Keep in mind: even though the AI has its own context, so do we. And since we must constantly review the generated code, keeping the scope small benefits us too. This way, you're more likely to review everything carefully, which leads to greater confidence and a more enjoyable development process.
Final Thoughts
I encourage you to embrace AI, but don’t be a vibe coder. Be an experienced developer — or even an architect — who treats AI as a tool to speed up the development process. Don’t think of it as a replacement for your expertise. Just integrate it effectively:
- Use interfaces and method signatures as formal requirements. This gives the AI a clear structure to work from and guides it toward generating the code you actually need.
- Generate tests extensively. Tests not only improve code quality but also help better understand the task. They also give the AI clearer objectives.
- Be focused and concise. Avoid overloading the AI with unnecessary context. The more you add, the more money you spend, the more cluttered and unpredictable the output becomes — and the more time you'll spend cleaning it up.
- Always review the changes. AI is not perfect; it makes mistakes all the time.
- Use linters to fix code style and catch subtle issues. They’re incredibly helpful — especially when generating code with AI.
It’s also worth noting that a solid understanding of the AI tools you use will make a big difference. Learn the available commands, modes, supported models, and prompt techniques. There’s always room for improvement and customisation to fit your specific workflow.
Happy coding!