Why vmblu
■ Writing code has changed
Building software systems will never be the same again. LLMs accelerate coding and let us build more, faster. Vibe coding is a thing now. But there are downsides: context scatters and systems become hard to reason about.
Both the human and the LLM would profit greatly from a common, formal reference for the system's architecture: the human, to understand what is being built; the LLM, to build from an unambiguous blueprint.
vmblu addresses this by making the system’s architecture explicit, executable, and AI-centric.
■ Problem Framing
LLMs reduce time-to-code, generate boilerplate, apply well-known design patterns and make far fewer errors than humans. The downside, however, is that LLMs can veer off into uncharted territory because of:
- Context fragmentation - Prompts drift, assumptions diverge and inconsistencies can creep in between LLM sessions.
- Contract drift - What exactly is done, what has priority, what is returned, and so on becomes uncertain.
- Scaling pain - As the app grows, ad-hoc glue is added while other code becomes stale or redundant.
The benefits of LLM-generated code are thus offset by time spent trying to understand the code and by removing redundant or irrelevant code. LLMs will undoubtedly continue to improve, but it will remain important to have unambiguous contracts and to be able to observe what has been built.
■ vmblu Restores Clarity
When the architecture itself becomes a first-class artifact — machine-readable and executable — humans and LLMs can collaborate safely, repeatedly, and at scale.
vmblu restores clarity by putting architecture at the center:
- Visual, composable architecture – A vmblu model reads like a circuit diagram: nodes, pins, interfaces, routes, and buses — simple, orthogonal primitives.
- AI-native by construction – Defined by a formal JSON schema that LLMs can read and write.
- Executable – The model doubles as the app’s wiring diagram and compiles into a runnable system.
- Parallelizable – Nodes can be implemented independently by multiple developers or LLMs, guided by shared contracts.
- Incrementally adoptable – Wrap existing code as nodes, then link and refactor over time - vmblu is itself a vmblu app.
- Lightweight runtime – A lean message switch with request/reply, workers, and debugging — no framework lock-in.
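The runtime ideas above (nodes, routed messages, request/reply) can be sketched in plain Python. This is an illustrative toy, not vmblu's actual API: the `Switch` class and its `register`, `send`, `request`, and `run_once` methods are invented here purely to show the shape of a lean message switch that wires independent nodes together through addressed pins.

```python
import queue

class Switch:
    """Toy message switch: routes (node, pin) addresses to handlers.

    Illustrative only -- not vmblu's real runtime. Nodes never call each
    other directly; all traffic goes through the switch.
    """

    def __init__(self):
        self.routes = {}           # (node, pin) -> handler callable
        self.inbox = queue.Queue() # pending one-way messages

    def register(self, node, pin, handler):
        """Wire a handler to a node's input pin."""
        self.routes[(node, pin)] = handler

    def send(self, node, pin, payload):
        """Fire-and-forget: queue a message for later dispatch."""
        self.inbox.put((node, pin, payload))

    def request(self, node, pin, payload):
        """Request/reply: dispatch immediately and return the handler's reply."""
        return self.routes[(node, pin)](payload)

    def run_once(self):
        """Dispatch a single queued message to its registered handler."""
        node, pin, payload = self.inbox.get()
        self.routes[(node, pin)](payload)

# Two independent "nodes" wired only through the switch.
sw = Switch()
log = []
sw.register("math", "double", lambda x: x * 2)
sw.register("log", "write", lambda m: log.append(m))

answer = sw.request("math", "double", 21)  # request/reply path
sw.send("log", "write", "hello")           # asynchronous path
sw.run_once()
```

Because the nodes share nothing but the switch and the message contracts, either handler could be replaced, tested in isolation, or implemented by a different developer or LLM, which is the property the list above describes.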
Outcome: faster throughput, visual clarity, and sustained architectural integrity.
■ Additional Benefits
Message-based architectures — the foundation of vmblu — bring their own advantages:
- Loose coupling with clear boundaries – Nodes only communicate through messages, not direct calls. You can refactor, replace, or test components independently.
- Natural concurrency and scalability – Messages are asynchronous by design; the same model runs locally, across threads, or distributed systems without change.
- Explicit communication contracts – Each message has a defined structure and meaning, making behavior transparent and machine-understandable.
- Traceable and testable – Every interaction flows through observable messages, enabling time-travel debugging, targeted tests, and reproducible runs.
- Evolvable and AI-friendly – The message graph becomes a stable backbone. Both humans and LLMs can extend it safely by adding messages or handlers without breaking existing logic. The node-based design also makes vmblu a natural fit for MCP-based (Model Context Protocol) interaction.
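The traceability point can be made concrete with a toy bus that records every message it carries. The names `TracingBus`, `on`, `emit`, and `replay` are hypothetical and not part of vmblu; the sketch only shows why observable messages enable reproducible runs: since every interaction is a message, a recorded trace is enough to replay the whole run elsewhere.

```python
class TracingBus:
    """Toy bus where every interaction is an observable, recorded message.

    Illustrative sketch, not vmblu's real API.
    """

    def __init__(self):
        self.handlers = {}  # message type -> handler callable
        self.trace = []     # chronological record of all messages

    def on(self, msg_type, handler):
        """Subscribe a handler to a message type."""
        self.handlers[msg_type] = handler

    def emit(self, msg_type, payload):
        """Record the message, then dispatch it to its handler."""
        self.trace.append({"type": msg_type, "payload": payload})
        self.handlers[msg_type](payload)

    def replay(self, other):
        """Reproduce this run on another bus from the recorded trace."""
        for msg in self.trace:
            other.emit(msg["type"], msg["payload"])

# A run is just a sequence of messages...
bus = TracingBus()
total = []
bus.on("add", lambda n: total.append(n))
bus.emit("add", 1)
bus.emit("add", 2)

# ...so replaying the trace reproduces it exactly on a fresh bus.
fresh = TracingBus()
total2 = []
fresh.on("add", lambda n: total2.append(n))
bus.replay(fresh)
```

Adding a new message type or handler extends the bus without touching existing subscriptions, which is the evolvability property the last bullet describes.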
■ Conclusion
To make the most of LLM-assisted system development, vmblu offers
- A visual architecture
- A formal, shared context for AI
- A message-based formalism
to build trustworthy, efficient and observable systems.