The AI coding assistant economy operates on a fundamental misalignment:
Models are rewarded for appearing productive rather than being correct, users lack time to verify outputs, and economic incentives favor speed over quality.
This article examines how training incentives, verification costs, and market dynamics create patterns that often lead to low-quality code. The analysis is based on direct observation of model behavior patterns in conversations.
𝗗𝗗𝗦𝗘 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗽𝘂𝗯𝗹𝗶𝘀𝗵𝗲𝘀 𝗖𝗘𝗙 — 𝗮𝗻 𝗢𝗥𝗠 𝗳𝗼𝗿 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝗻 𝗟𝗟𝗠 𝗦𝘆𝘀𝘁𝗲𝗺𝘀
Just as Hibernate abstracts databases for transactions, CEF abstracts knowledge stores for Context Engineering. Build, test, and benchmark intelligent context models in minutes, without the complexity of enterprise graph infrastructure.
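The post doesn't show CEF's actual interface, so the sketch below is only a rough illustration of the "ORM for context" idea: domain facts are mapped to a context store behind one small protocol so the backing store (in-memory, vector, graph) can be swapped without touching application code. ContextEntry, ContextStore, and InMemoryContextStore are invented names for this sketch, not CEF classes.

```python
# Hypothetical sketch -- not the CEF API. Shows the "ORM for context" idea:
# facts are mapped to entries behind one protocol, so backends are swappable.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class ContextEntry:
    """A single mapped fact, analogous to a persisted entity in an ORM."""
    entity: str
    relation: str
    value: str


class ContextStore(Protocol):
    """The abstraction the application codes against."""
    def add(self, entry: ContextEntry) -> None: ...
    def query(self, entity: str) -> list[ContextEntry]: ...


@dataclass
class InMemoryContextStore:
    """Simplest backend; a graph or vector backend would implement the same protocol."""
    entries: list[ContextEntry] = field(default_factory=list)

    def add(self, entry: ContextEntry) -> None:
        self.entries.append(entry)

    def query(self, entity: str) -> list[ContextEntry]:
        return [e for e in self.entries if e.entity == entity]


if __name__ == "__main__":
    store: ContextStore = InMemoryContextStore()
    store.add(ContextEntry("OrderService", "depends_on", "PaymentGateway"))
    print(store.query("OrderService"))
```

The design point is the boundary, not the storage: application code only ever sees the protocol, which is what lets you benchmark different context backends against each other.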
Deterministic AI Design with Capability OS: Saving Yourself from the AI Bubble - Live Demo of Omni Agent
Everyone is piloting agents, copilots, and AI platforms. Very few are asking a harder question: which of these systems will still be trusted when the AI bubble bursts? In this session I'll share my 1.5-year journey from raw LLM experiments and messy AI-generated code to a deterministic, decision-first architecture for agentic systems. I will demo Omni Agent - a Capability OS for Enterprise AI - and then walk through how it is designed and built using Decision-Driven Software Engineering (DDSE) and the Agentic Contract Model (ACM), so that execution stays bounded, auditable, and aligned to your decisions, not the model's mood.

What you'll see
• End-to-end walkthrough of Omni Agent: goals, plans, tasks, ledgers, telemetry
• A real scenario on a codebase (e.g. an Angular chat app) - from "investigate this" to concrete actions and tracked outcomes
• How decisions, capabilities, contracts, and context are modeled in DDSE & ACM (a rough sketch follows below)
• Architecture view of Omni Agent as a "Capability OS": planner, executor, context layers, and extensibility
• Honest trade-offs: what is still weak, what's missing, and where this approach may or may not fit your environment

Who this is for
• Engineering leaders and architects evaluating agentic platforms
• Developers who want more than "prompt + tools" and care about system design
• Anyone worried about the AI bubble and looking for deterministic, governable AI systems

Format
• ~40 minutes of platform demo + design walkthrough (via YouTube Premiere)
• I'll be present live in the chat
• Follow-up Q&A thread on LinkedIn for deeper questions
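The announcement doesn't include ACM or Omni Agent internals, so purely as a mental model, here is a hypothetical Python sketch of the bounded-execution idea: every action must be covered by a contract that traces back to an explicit decision, and both approvals and denials land in a ledger. Decision, Capability, Contract, and Executor are invented names for this sketch, not the actual DDSE/ACM or Omni Agent API.

```python
# Hypothetical sketch, not the Omni Agent / ACM implementation. Illustrates
# "bounded, auditable execution": actions run only under a contract that
# traces to an explicit decision, and every attempt is written to a ledger.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Decision:
    id: str
    statement: str                      # e.g. "Agents may modify test files only"


@dataclass(frozen=True)
class Capability:
    name: str                           # e.g. "edit_file"
    allowed_paths: tuple[str, ...]      # bounds on where it may act


@dataclass
class Contract:
    decision: Decision
    capability: Capability

    def permits(self, path: str) -> bool:
        # The capability is only exercisable within its declared bounds.
        return path.startswith(self.capability.allowed_paths)


@dataclass
class Executor:
    contracts: list[Contract]
    ledger: list[str] = field(default_factory=list)   # audit trail

    def run(self, capability_name: str, path: str) -> bool:
        for c in self.contracts:
            if c.capability.name == capability_name and c.permits(path):
                self.ledger.append(f"{capability_name}:{path} (per {c.decision.id})")
                return True
        self.ledger.append(f"DENIED {capability_name}:{path}")
        return False


if __name__ == "__main__":
    decision = Decision("DEC-7", "Agents may modify test files only")
    capability = Capability("edit_file", ("tests/",))
    executor = Executor([Contract(decision, capability)])
    print(executor.run("edit_file", "tests/test_orders.py"))  # True, logged
    print(executor.run("edit_file", "src/orders.py"))         # False, logged as DENIED
    print(executor.ledger)
```

The point of the sketch is the shape, not the code: capabilities are constrained by explicit decisions, and denials are as auditable as approvals.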
Most “knowledge bases” today are just vector indexes with a chat UI. Without the LLM, they know nothing. With the LLM, every answer re-rents the same knowledge in tokens.
𝗞𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀:
- A vector store isn’t a knowledge base; it’s a smart memory. The “knowledge” lives in the model you keep paying to re-read your own documents.
- Without a model (entities + relationships), you lock in two long-term costs: high token spend per question and shallow answers per question.
- A lightweight knowledge model lets you store facts once, query them cheaply, and use the LLM only for judgment and language — not for rediscovering the same truths forever. A minimal sketch of this idea follows below.
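As an illustration only (the post doesn't prescribe a storage layer), a lightweight knowledge model can be as small as a fact table: entities and relationships are stored once and queried with SQL, and the LLM is only asked to phrase or judge what the query returns. The table layout and the facts_about helper are assumptions made for this sketch, not a reference design.

```python
# Minimal sketch of "store facts once, query them cheaply" using SQLite.
# Retrieval costs a SQL query, not tokens; the LLM only phrases the result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (entity TEXT, relation TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO facts VALUES (?, ?, ?)",
    [
        ("BillingService", "owned_by", "Payments team"),
        ("BillingService", "depends_on", "Ledger API"),
        ("Ledger API", "owned_by", "Core team"),
    ],
)


def facts_about(entity: str) -> list[tuple[str, str, str]]:
    """Answer structural questions from the stored model itself -- no LLM call needed."""
    return conn.execute(
        "SELECT entity, relation, value FROM facts WHERE entity = ?", (entity,)
    ).fetchall()


# The LLM would only be handed the retrieved facts for judgment and language,
# e.g. prompt = f"Summarise for a new hire: {facts_about('BillingService')}"
print(facts_about("BillingService"))
```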
𝗪𝗵𝗲𝗻 𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗜𝘀 𝗮𝗻 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁: 𝘏𝘰𝘸 𝘞𝘦 𝘚𝘤𝘢𝘭𝘦 𝘜𝘱 𝘐𝘨𝘯𝘰𝘳𝘢𝘯𝘤𝘦 𝘪𝘯 𝘚𝘰𝘧𝘵𝘸𝘢𝘳𝘦
Published on Medium in the AI Advances publication | Nov 20
This one is for teams where everyone suddenly carries the #architect label and every deck has an LLM box in the middle. My new piece, “When Everyone Is an Architect,” is a small reality check on how we build software and AI platforms now: more diagrams than foundations, more confidence than discipline. If that sounds uncomfortably familiar, you might enjoy it.