
When startup fundraising platform VentureCrowd began implementing AI coding agents, they saw the same gains as other businesses: development cycles shrank by as much as 90% on some projects.
But those gains didn't come easily, or without a lot of trial and error.
VentureCrowd's first challenge was the quality of its data and context. Diego Mogollon, chief product officer at VentureCrowd, told VentureBeat that agents latch onto whatever information they can find on the job and then become confidently wrong, because they base their output solely on the context they're given.
Their other obstacles, like those at many companies, were messy information and unclear processes. As with context, Mogollon said, coding agents amplify bad data, so the company had to build a well-structured codebase first.
“The challenges are rarely with the coding agents themselves; they’re with everything around them,” Mogollon said. “It’s a context problem masquerading as an AI problem, and it’s the number one failure mode I see in agent implementations.”
VentureCrowd's experience overhauling its software development illustrates a broader issue in AI agent development: it isn't that the models are weak; rather, agents get overwhelmed when handed too much context and too many tools at once.
Too much context
This phenomenon is called context bloat: as AI systems accumulate more data, tools, and instructions, workflows become more complex.
The problem arises because agents need context to work well, but too much of it creates noise. The more context an agent has to traverse, the more tokens it uses, the slower it gets, and the higher the cost.
One way to avoid context bloat is context engineering: deliberately curating what an agent sees so it gets only the information relevant to the task at hand, whether that's understanding a code change or matching a request to the right workflow.
However, context engineering often becomes an external task, rather than embedded in the coding platforms that enterprises use to build their agents.
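At its simplest, context engineering of the kind described above amounts to a relevance-plus-budget filter over candidate sources. The sketch below is a minimal, hypothetical illustration; the function names and the rough four-characters-per-token estimate are assumptions, not any vendor's API:

```python
# Hypothetical sketch of context engineering: rank candidate files by
# keyword overlap with the task, then keep only what fits a token budget.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, stands in for a real tokenizer

def relevance(task: str, doc: str) -> int:
    # Count how many words in the doc also appear in the task description.
    task_words = set(task.lower().split())
    return sum(1 for word in doc.lower().split() if word in task_words)

def build_context(task: str, docs: dict[str, str], budget: int = 2000) -> list[str]:
    # Most relevant sources first; stop adding once the budget is spent.
    ranked = sorted(docs, key=lambda name: relevance(task, docs[name]), reverse=True)
    chosen, used = [], 0
    for name in ranked:
        cost = estimate_tokens(docs[name])
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen
```

Given a task like "fix payment charge bug" and a mix of billing and UI files, a filter like this would surface the billing code first and drop anything that would blow the budget, which is the "decide what to leave out" step the rest of this article circles around.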
How coding agent providers respond
VentureCrowd relied on one solution in particular to tackle the challenges of enterprise AI agent deployment: Salesforce's Agentforce Vibes, a coding platform that lives inside Salesforce and is available on all plans, starting with the free tier.
Salesforce recently updated Agentforce Vibes to version 2.0, which extends support for third-party frameworks such as ReAct. Most importantly for companies like VentureCrowd, Agentforce Vibes added Abilities and Skills that enterprises can use to guide agent behavior.
“Contextually, our entire platform, frontend and backend, runs on the Salesforce ecosystem. So when Agentforce Vibes launched, it naturally fit into an environment we already know well,” Mogollon said.
Salesforce's approach does not minimize agents' use of context; rather, it helps enterprises keep that context within their own data models and codebases. Agentforce Vibes pushes this further through the new Skills and Abilities feature: Abilities define what agents should achieve, and Skills are the tools they use to get there.
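The goal-versus-tools split behind Abilities and Skills can be illustrated generically. The sketch below is not Salesforce's actual schema or API; it is a hypothetical Python rendering of the idea that an agent's objective is declared separately from the tools it is allowed to call:

```python
# Generic illustration (not Salesforce's schema) of separating an agent's
# goal (an "ability") from the tools ("skills") it is granted.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    name: str
    run: Callable[[str], str]  # a tool the agent may invoke

@dataclass
class Ability:
    goal: str
    skills: list[Skill] = field(default_factory=list)

    def allowed_tools(self) -> list[str]:
        # The agent pursuing this goal can only use tools granted here.
        return [skill.name for skill in self.skills]

# Hypothetical tools for illustration.
lint = Skill("lint", lambda code: "no issues")
deploy = Skill("deploy", lambda env: f"deployed to {env}")

# A review ability gets lint but not deploy (deploy is out of scope).
review_ability = Ability(goal="Review a pull request", skills=[lint])
```

The design point is containment: instead of handing the agent every tool and all context, each ability scopes the agent to a declared goal and a short allowlist of tools, which is one way to keep context from ballooning.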
Other coding agent platforms handle context differently. For example, Anthropic's Claude Code and OpenAI's Codex focus on autonomous execution, continuously reading files, running commands, and expanding the context as tasks evolve. Claude Code also has a compaction feature that compresses the context when it grows too large.
A consistent pattern with these different approaches is that most systems manage increasing contexts for agents, rather than limiting them. Context continues to grow, especially as workflows become more complex, making it difficult for enterprises to control costs, delays, and reliability.
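Compaction of the kind mentioned above can be sketched as follows. This is a hedged illustration inspired by, but not identical to, what tools like Claude Code do: real systems ask the model itself to summarize older turns, while this sketch just stubs the summary with a placeholder:

```python
# Sketch of context compaction: keep the newest messages that fit a token
# budget, and collapse everything older into a one-line summary stub.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, stands in for a real tokenizer

def compact(messages: list[str], budget: int) -> list[str]:
    kept, used = [], 0
    # Walk newest-to-oldest, keeping whatever still fits the budget.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    dropped = len(messages) - len(kept)
    if dropped:
        # A real agent would have the model summarize the dropped turns;
        # here a placeholder marks where that summary would go.
        kept.insert(0, f"[summary of {dropped} earlier messages]")
    return kept
```

Note that this manages growth rather than preventing it: the transcript still expands until the budget trips, which is exactly the cost-and-latency tradeoff enterprises end up wrestling with.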
Mogollon said his company chose Agentforce Vibes not only because much of its data already lives in Salesforce, making integration easier, but also because it gives the company more control over the context its agents are fed.
What builders need to know
There is no single way to deal with context bloat, but the pattern is now clear: more context does not always mean better results.
In addition to investing in context engineering, enterprises should experiment with whichever context-constraint approach fits them best. The challenge is not just giving agents more information, but deciding what to leave out.





