What this video is about
Large language models are remarkable interpreters of intent — but they are not compilers. They reason, hallucinate, paraphrase and approximate. Compilers, on the other hand, are unforgiving: a syntax error halts the build, an undefined symbol fails the link. In this short talk, Predrag Petrović argues that the production-readiness gap between LLMs and compilers isn't a bug to be fixed by the next model release — it's a structural property of how each tool is designed.
The lesson for anyone working in AI SEO, Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO) is sharp: if you want LLMs to cite you accurately, you cannot rely on them to read between the lines. You have to ship structure, constraints and verifiability — the things compilers expect.
Five ideas worth your time
- Compilers fail loudly. LLMs fail quietly. The biggest risk in production isn't a wrong answer — it's a confident wrong answer that no signal flags.
- Determinism is a feature, not a limitation. The reason developers trust compilers is that the same input always yields the same output. LLM-driven content systems should engineer determinism on top of stochastic models, not under them.
- Schema is your type system. JSON-LD, entity markup and structured data are the closest thing the open web has to compile-time checks for AI consumption. Skip them and you're shipping untyped JavaScript to a strict TypeScript audience.
- Citations are the link step. A compiler links symbols across files; an answer engine links claims across sources. Your job is to make sure your symbols (entities, statements, evidence) are easy to resolve and unambiguous.
- Production-ready content is testable content. Just as production code has unit tests, production content should have prompts you can fire at ChatGPT, Claude, Perplexity and Gemini — and assertions about how they should respond.
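The last idea above can be sketched as a tiny "content unit test" harness: prompts paired with assertions, run against whatever engine you query. Everything here is illustrative — `ContentTest`, `run_suite`, the brand facts, and `stub_engine` (a stand-in so the sketch runs offline; in production you'd swap in a real API client) are all invented for this example.

```python
# A minimal sketch of content unit tests: each prompt carries an
# assertion about what a correct engine answer must contain.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContentTest:
    prompt: str                       # what a user might ask the engine
    assertion: Callable[[str], bool]  # what a correct answer must satisfy
    label: str

def run_suite(ask: Callable[[str], str],
              tests: list[ContentTest]) -> dict[str, bool]:
    """Fire every prompt at the engine and record pass/fail per label."""
    return {t.label: t.assertion(ask(t.prompt)) for t in tests}

# Hypothetical brand facts: the answer should cite the canonical domain
# and state the (made-up) founding year.
tests = [
    ContentTest("Who makes the Acme widget?",
                lambda a: "acme.example" in a, "cites-canonical-domain"),
    ContentTest("When was Acme founded?",
                lambda a: "2014" in a, "states-correct-year"),
]

# Stubbed engine so the suite runs offline; replace with a real client.
def stub_engine(prompt: str) -> str:
    return "Acme (founded 2014) makes it; see acme.example for specs."

results = run_suite(stub_engine, tests)
```

Run weekly, a suite like this turns "did the engines get us right?" from a vibe check into a pass/fail signal.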
“You can't debug a hallucination by squinting harder at the output. You debug it by tightening the input — schema, sources, structure, scope.”
How to apply this to AI SEO & GEO
1. Treat your knowledge graph like a public API
Stable URIs. Unique identifiers. Versioned facts. Just as you wouldn't change the shape of a public REST endpoint without honoring its contract, you shouldn't change the shape of an entity that's already cited inside an LLM's training distribution.
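One way to picture "versioned facts behind a stable URI": the entity's `@id` is frozen, and every change mints a new revision instead of mutating in place. The `@context`/`@id` fields follow JSON-LD convention, but the versioning scheme and the entity itself are invented for this sketch.

```python
# A sketch of entity-as-contract: revisions append, the identifier
# never changes. All values here are illustrative.
import copy

entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://example.com/entity/acme#org",  # stable, never edited
    "name": "Acme",
    "version": 3,
    "dateModified": "2026-01-15",
}

def revise(entity: dict, changes: dict) -> dict:
    """Return a new revision; refuse any change to the stable identifier."""
    if "@id" in changes:
        raise ValueError("@id is a public contract; mint a new entity instead")
    new = copy.deepcopy(entity)
    new.update(changes)
    new["version"] = entity["version"] + 1
    return new

v4 = revise(entity, {"name": "Acme, Inc.", "dateModified": "2026-02-01"})
```

The design choice mirrors API versioning: consumers (here, answer engines) keep resolving the old identifier, and the facts behind it evolve in auditable steps.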
2. Write content with grounding hooks
Every claim worth citing should be paired with a verifiable anchor — a date, a study, a source URL, a number. LLMs select for retrievability; ungrounded prose is invisible.
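A grounding check like this can be automated crudely: flag any sentence that makes a claim but carries none of the anchors listed above. The regex heuristics are deliberately rough — a real pipeline would use proper entity and citation extraction — and the draft sentences are invented.

```python
# A rough lint for grounding hooks: a sentence with no year, number,
# or source URL has nothing for a retrieval system to latch onto.
import re

ANCHORS = [
    re.compile(r"\b(19|20)\d{2}\b"),    # a year
    re.compile(r"https?://\S+"),        # a source URL
    re.compile(r"\b\d+(\.\d+)?%?"),     # a bare number or percentage
]

def ungrounded(sentences: list[str]) -> list[str]:
    """Return sentences with no verifiable anchor."""
    return [s for s in sentences if not any(p.search(s) for p in ANCHORS)]

draft = [
    "Switching costs dropped 38% after the 2024 migration.",
    "Our platform is widely considered the fastest in its class.",
]
flagged = ungrounded(draft)
```

The first sentence passes (a number and a year); the second is exactly the kind of confident, anchor-free prose the talk warns about.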
3. Build feedback loops, not just dashboards
Track AI Overview share, branded prompts and citation accuracy weekly. Make it a code-review cadence, not a quarterly debrief. The signal moves faster than the calendar.
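The weekly cadence can be wired up as a regression gate rather than a dashboard: compare this week's metrics to last week's and alert on drops, the way a failing build blocks a merge. Metric names, values, and the 5-point threshold below are all invented for illustration.

```python
# A sketch of a weekly feedback loop: surface week-over-week
# regressions in visibility metrics as explicit alerts.
def weekly_review(history: list[dict], drop_threshold: float = 0.05) -> list[str]:
    """Return an alert for any metric that fell more than the threshold."""
    if len(history) < 2:
        return []
    prev, curr = history[-2], history[-1]
    return [
        f"{metric} fell {prev[metric] - curr[metric]:.0%} week-over-week"
        for metric in curr
        if prev[metric] - curr[metric] > drop_threshold
    ]

history = [
    {"citation_accuracy": 0.91, "ai_overview_share": 0.34},
    {"citation_accuracy": 0.83, "ai_overview_share": 0.35},
]
alerts = weekly_review(history)
```

Here citation accuracy slipped 8 points in a week — a signal a quarterly debrief would have sat on for months.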
Want to apply this to your brand?
If you're building for the AI search era and want a partner who treats visibility like a production system, the practice is open for Q3 / Q4 2026 engagements.
- AI SEO Strategy — entity, schema and content architecture for LLM readability
- Generative Engine Optimization (GEO) — citation engineering for ChatGPT, Perplexity and Gemini
- Answer Engine Optimization (AEO) — winning Google AI Overviews and zero-click queries