Who decides what AI tells you? Campbell Brown, once Meta’s head of news, has some ideas


Campbell Brown has spent her career pursuing accurate information, first as a well-known television journalist and then as Facebook’s first and only head of news. Now, as she watches artificial intelligence reshape how people consume information, she sees history threatening to repeat itself. This time, she isn’t waiting for someone else to fix it.

Her company, Forum AI, which she discussed recently with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco, evaluates how off-the-shelf models perform on what she calls “high-stakes topics” such as geopolitics, mental health, finance, and hiring, areas where there are no simple yes-or-no answers.

The idea is to recruit world-class experts, have them design evaluation criteria, and then train AI judges to apply those criteria to models at scale. For Forum AI’s geopolitics work, Brown has brought in Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former Speaker of the House Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is to get the AI judges to roughly 90% agreement with these human experts, a threshold she says Forum AI can reach.

Brown traces the origins of Forum AI, which was founded in New York 17 months ago, to a specific moment. “When ChatGPT first went public, I was at Meta,” she recalled, “and I remember realizing shortly after that this was going to be a funnel through which all information flowed. And it wasn’t very good.” The implications for her own children made the moment feel personal. “If we don’t figure out how to fix this, my kids are going to be really dumb,” she recalled thinking.

What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” while news and information are harder. But harder, she argued, doesn’t mean optional.

Indeed, when Forum AI began evaluating leading models, the findings were not encouraging. Brown noted that Gemini pulled from Chinese Communist Party websites “for stories that had nothing to do with China,” and that nearly all of the models showed a left-leaning political bias. More subtle failings abound, she says: missing context, missing perspectives, unacknowledged strawman arguments. “There’s a long way to go,” she said. “But I also think there are very easy fixes that would greatly improve the results.”

For years, Brown watched what happened at Facebook when the platform optimized for the wrong things. “We failed at a lot of things we tried,” she said. The fact-checking program she launched no longer exists. Whatever one thinks of social media, her takeaway is that optimizing for engagement was bad for society and left many people less informed.

Her hope is that artificial intelligence can break this cycle. “It could go either way right now,” she said; companies could give users what they want to hear or “give people what’s real, what’s honest, and what’s true.” She acknowledged that the idealistic version of this, an AI that optimizes for truth, might sound naive. But she thinks enterprise customers can be an unlikely ally here: businesses that use AI for credit decisions, lending, insurance, and recruiting care about liability and “will want you to optimize to get it right.”

That enterprise demand is what Forum AI is betting its business on, though turning compliance interest into steady revenue remains a challenge, especially since much of the current market is still satisfied with checkbox audits and standardized criteria, which Brown finds inadequate.

The compliance landscape, she says, is “a joke.” When New York City passed the first law requiring bias audits of AI hiring tools, the state comptroller found that more than half had undetected violations. A realistic assessment, she said, requires domain expertise to handle not only known scenarios but also the things that “may cause a problem that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”

Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is uniquely positioned to describe the gap between the AI industry’s self-image and the reality for most users. “You hear from the leaders of big tech companies, ‘This technology will change the world’, ‘it will put you out of a job’, ‘it will cure cancer,'” she said. “But the average person who just uses a chatbot to ask basic questions is still getting a lot of blank and wrong answers.”

Trust in artificial intelligence is unusually low, and she thinks the skepticism is often justified. “There’s one conversation happening in Silicon Valley,” she said, “and a whole different conversation going on among consumers.”
