The AI scaffolding layer is collapsing. LlamaIndex's CEO explains what survives.



The scaffolding layer developers once needed to ship LLM applications—indexing layers, query engines, retrieval pipelines, carefully orchestrated agent loops—is crumbling. According to LlamaIndex co-founder and CEO Jerry Liu, that’s not a problem. That’s the point.

“As a result, there is less need for frameworks to help users design these deterministic workflows in a lightweight and shallow way,” Liu said on VentureBeat’s Beyond the Pilot podcast.

Context becomes the moat

Liu’s LlamaIndex is one of the best-known retrieval-augmented generation (RAG) frameworks, connecting private, domain-specific data to LLMs. But even he admits that such frameworks are becoming less and less relevant.

He notes that with each new release, models demonstrate a growing ability to reason over “huge amounts” of unstructured data, and they’re getting better at it than humans. They can be relied on to plan broadly, self-correct, and carry out multi-step work; the Model Context Protocol (MCP) and Claude Agent Skills let models discover and use tools on their own, without a custom integration for each one.

Agent architectures have converged, Liu said, on a common pattern: a harness layer combined with tools, MCP connectors, and skill plugins, replacing custom-built orchestration for each workflow.
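The harness-plus-tools pattern Liu describes can be sketched as a generic loop that routes a model's tool calls through a registry, rather than hand-building orchestration per workflow. The sketch below is a toy: the "model" is a stub function, and all names (`search_docs`, `run_harness`) are illustrative, not part of any real framework.

```python
def search_docs(query: str) -> str:
    """Toy tool: pretend to retrieve a document snippet."""
    return f"snippet about {query}"

TOOLS = {"search_docs": search_docs}  # the registry the harness exposes

def stub_model(messages):
    """Stand-in for an LLM call: requests one tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_docs", "args": {"query": "RAG"}}
    return {"answer": "done: " + messages[-1]["content"]}

def run_harness(model, tools, user_msg, max_steps=5):
    """Generic agent loop: dispatch tool calls until the model answers."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        out = model(messages)
        if "answer" in out:
            return out["answer"]
        result = tools[out["tool"]](**out["args"])  # route via registry
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")

print(run_harness(stub_model, TOOLS, "What is RAG?"))
# prints: done: snippet about RAG
```

Because the loop knows nothing about any specific workflow, swapping in new tools (or MCP-discovered ones) means only editing the registry, which is the flexibility Liu argues for.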

Additionally, coding agents excel at writing code, meaning developers need to lean less on sprawling libraries. In fact, about 95% of LlamaIndex’s code is AI-generated. “Engineers don’t actually write real code,” Liu said. “They all write in natural language.” The boundary between programmers and non-programmers is breaking down because “the new programming language is essentially English.”

Instead of coding by hand or wrestling with APIs and documentation to build integrations, developers can simply point Claude Code at the problem. “This kind of work was either very inefficient or would have broken the agent three years ago,” Liu said. “It’s easier for people to build relatively advanced retrieval with fairly simple primitives.”

So what’s the key differentiator when the stack collapses?

Context, says Liu. Agents must be able to decode file formats to extract the right information. Delivering higher-accuracy, cheaper parsing becomes the key problem, and LlamaIndex is well positioned here, he argues, thanks to its focus on agentic document processing and optical character recognition (OCR).

“We actually found that all of these file-format containers have a core set of data locked inside,” he said. Ultimately, “it doesn’t matter whether you use OpenAI Codex or Claude Code. What they all need is context.”
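The "containers with the same core data" idea can be illustrated with a tiny dispatch table that normalizes a few text-friendly formats to plain text. Real agentic document processing (PDFs, OCR, layout analysis) is far heavier than this; the sketch below only shows the shape of the idea, and every name in it (`extract_text`, `EXTRACTORS`) is made up for illustration.

```python
import csv
import io
import json

def from_json(raw: str) -> str:
    """Pull the values out of a flat JSON object."""
    return " ".join(str(v) for v in json.loads(raw).values())

def from_csv(raw: str) -> str:
    """Flatten CSV cells into a single text stream."""
    rows = csv.reader(io.StringIO(raw))
    return " ".join(cell for row in rows for cell in row)

# One extractor per container format; the agent only sees plain text.
EXTRACTORS = {".json": from_json, ".csv": from_csv, ".txt": lambda s: s}

def extract_text(filename: str, raw: str) -> str:
    """Normalize any supported container to the context an agent needs."""
    ext = filename[filename.rfind("."):]
    return EXTRACTORS[ext](raw)

print(extract_text("report.json", '{"title": "Q3", "total": 42}'))
# prints: Q3 42
```

Whatever model consumes the output—Codex, Claude Code, or anything else—it receives the same normalized context, which is the point Liu is making.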

Keep the stack modular

There is growing concern about vendors such as Anthropic locking in session data; in light of this, Liu stresses modularity and vendor-agnosticism. Builders shouldn’t bet on any single frontier model or over-engineer parts of the stack into something too complex.

Retrieval has become “an agent plus a sandbox,” as he describes it, and enterprises must keep their codebases free of technical debt and adaptable as patterns change. They must also accept that some parts of the stack will eventually have to be thrown away.

“Because with every new model release, there’s always a different model that wins,” Liu said. “You really want to make sure you have some flexibility to take advantage of that.”

Listen to the podcast to hear more about:

  • How LlamaIndex started as a “toy project” with about 40% accuracy;

  • How SaaS companies can turn complex workflows into standardized, repeatable processes for mid-skilled workers;

  • Why vertical AI companies are taking off, and why “build vs. buy” remains a live question in the agent era.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple Podcasts, or wherever you get your podcasts.

