Beyond Vibe Coding: Welcome to Vibe Modeling

Everybody talks about vibe coding, where you develop software by talking to an LLM tuned for coding and keep asking it to create what you need until you get something that (apparently) works. The "apparently" is key here. Even Andrej Karpathy, who coined the term, said it was great for throwaway weekend projects where people, even with no coding expertise, could quickly explore and build artefacts that mostly work.

Despite all the hype, vibe coding should never be used for more than this, as getting the result of a vibe coding session ready for production would require:

  • coding expertise and
  • as much time to test and fix the code as if you started the project from scratch with a "traditional" process.

But what if we could keep the magic of AI-assisted development without the unpredictability of LLM-generated code? That’s where vibe modeling comes in.

What is vibe modeling?

Vibe modeling is the process of building software through conversational interaction with an LLM trained for modeling, not coding. You then follow a model-based / low-code approach to deterministically generate code from those "vibed models". Think of vibe modeling as a model-driven vibe coding approach.

Indeed, in vibe modeling, the LLM aims to generate models, not code. The model-to-code step is then performed with "classical" code-generation templates (or any other type of precise and semantically equivalent executable modeling technique); a minimal sketch of this template-based step is shown after the list below.

This has two major advantages over vibe coding:

  1. Understandable output. A user is able to validate the quality of the LLM's modeling output even without any coding expertise. Models are more abstract and closer to the domain and, therefore, a user should be able to understand them with limited effort. True, some basic modeling knowledge may still be required, but it is certainly much easier to validate a model (e.g. a graphical class diagram) than a bunch of lines of code.
  2. Reliable code-generation. The generation process is deterministic. If the model is good, we know the code is good and there is no need to check it.
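
As promised above, here is a minimal sketch of the deterministic model-to-code step. It is only an illustration and not BESSER's (or any other tool's) actual generator: the model structure, the library domain, and the Jinja2 template are assumptions made for the example.

```python
# Minimal sketch of deterministic model-to-code generation (illustrative only;
# the model structure and template are assumptions, not BESSER's metamodel).
from jinja2 import Template

# A tiny "vibed" domain model, as an LLM could propose it after a chat:
# classes with typed attributes, here for a simple library domain.
domain_model = {
    "classes": [
        {"name": "Book", "attributes": [("title", "str"), ("pages", "int")]},
        {"name": "Author", "attributes": [("name", "str"), ("email", "str")]},
    ]
}

# A "classical" code-generation template: no LLM is involved in this step,
# so the generated code is fully determined by the model.
CLASS_TEMPLATE = Template(
    """{% for c in model.classes %}
class {{ c.name }}:
    def __init__(self{% for name, type in c.attributes %}, {{ name }}: {{ type }}{% endfor %}):
{% for name, type in c.attributes %}        self.{{ name }} = {{ name }}
{% endfor %}{% endfor %}"""
)

if __name__ == "__main__":
    print(CLASS_TEMPLATE.render(model=domain_model))  # same model in, same code out
```

Because no LLM is involved in this step, rendering the same model twice yields byte-identical code, which is exactly the predictability that vibe coding lacks.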

Vibe modeling is just one of the possible low-modeling strategies, and it can be combined with the others. For instance, you could upload to the LLM (as context for the prompt) any document already describing the domain you want to model (interviews, manuals, tutorials, ...) and then chat with the LLM to improve / adapt this first partial model.
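
As a rough sketch of that combined strategy, the following session seeds the chat with a domain document and then refines the model turn by turn. The prompts, the file name, and the "gpt-4o" model choice are assumptions for the example, not features of an existing tool.

```python
# Rough sketch of conversational model refinement seeded with domain documents.
# Prompts, the file name and the "gpt-4o" model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def seed_session(domain_document: str) -> list[dict]:
    """Start the chat with an existing domain description (interview notes,
    a manual, a tutorial, ...) and ask for a first partial model."""
    return [
        {"role": "system",
         "content": "You are a domain modeling assistant. Propose and refine "
                    "UML class diagrams in PlantUML notation; do not generate code."},
        {"role": "user",
         "content": f"Here is a document describing the domain:\n\n{domain_document}\n\n"
                    "Propose an initial class diagram for it."},
    ]


def next_model(messages: list[dict]) -> str:
    """Send the conversation so far and append the LLM's proposed model."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    proposal = response.choices[0].message.content
    messages.append({"role": "assistant", "content": proposal})
    return proposal


# Bootstrap from an existing document, then refine conversationally.
messages = seed_session(open("library_interviews.txt").read())  # hypothetical file
print(next_model(messages))  # first partial model, derived from the document

messages.append({"role": "user",
                 "content": "A Book can have several Authors; model it as a many-to-many association."})
print(next_model(messages))  # refined model, ready for validation
```

Only once the user is satisfied with the resulting model does the deterministic code generation sketched earlier come into play.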

Is there any vibe modeling tool?

Not quite. Low-code tools (like BESSER or Mendix) offer more and more AI features, but right now these mostly focus on some kind of smart autocomplete for models, not on an integrated chatbot you can use for vibe modeling. We have implemented a tree-of-thoughts approach for domain modeling and are now working on integrating a chatbot into the BESSER web modeling editor to go from this one-shot model generation to an iterative, conversation-based model refinement. But you'll need to wait a little bit longer for that.

In the meantime, we’d love to hear from you: