OpenAI Shelves Erotic ChatGPT After Outcry From Employees, Investors, and Advisors


OpenAI has indefinitely suspended plans to add an erotic “adult mode” to ChatGPT. The Financial Times reported on Wednesday on the five-month saga in which the feature was confidently announced, delayed twice, and ultimately abandoned after staff, advisors, and investors pushed back. The withdrawal is the third major product change for OpenAI in a week, following the shutdown of the video generation tool Sora on Monday and the subsequent collapse of a planned $1 billion investment from Disney.

Adult mode was first announced in October 2025 by CEO Sam Altman, who said in a post on X at the time that he was confident OpenAI could age-gate explicit sexual chats, and that the move was part of the company’s push to “treat adult users like adults.” It was originally planned for December 2025, then pushed to the first quarter of 2026, and has now been shelved without a release schedule. OpenAI told the Financial Times it plans to conduct “long-term research into the impact of sexualized conversations and emotional attachments” before making a product decision.

What went wrong

The challenges were technical, ethical, and commercial, and they compounded one another. Engineers working on the feature discovered that models trained to avoid sexual content for safety reasons were harder than expected to coax into reliably producing explicit material. When fine-tuned on datasets containing sexual content, the models also produced outputs involving illicit scenarios, including bestiality and incest. The feature was not only controversial; it was resistant to being built safely.

OpenAI’s own advisory board raised concerns that went beyond content moderation. Advisers warned that overtly sexual ChatGPT interactions could foster unhealthy emotional attachments with serious mental health consequences. One adviser described the risk as ChatGPT becoming a “sexual suicide coach,” a phrase that resonates uncomfortably with the company’s existing legal exposure. OpenAI is currently facing at least eight lawsuits alleging that ChatGPT contributed to user deaths, including the case of 16-year-old Adam Raine of Southern California, whose family claims the chatbot discussed suicide methods with him more than 200 times before he died by suicide in April 2025. In a financial filing released to investors earlier this week, OpenAI listed these claims among the main risks to its business.

Employees also began to question whether the feature served OpenAI’s mission. The company’s charter commits it to building artificial general intelligence for the benefit of humanity, and some staff struggled to reconcile that ambition with the engineering effort required to make the chatbot talk dirty without breaking the law.


The investor calculus

Investors raised a blunter objection: the economics did not justify the risk. Some investors questioned why OpenAI would risk its reputation on a product with a relatively small upside, two people familiar with the matter told the Financial Times. The market for AI-generated adult content exists, but it is served by a constellation of smaller, less scrutinized companies. For a $300 billion company raising capital and courting corporate clients, the brand damage associated with explicit content outweighed the potential return.

The question of age verification sharpened that concern. OpenAI’s approach relied on AI-based age prediction rather than rigorous identity checks, and internal tests revealed an error rate of about 10 percent, meaning roughly one in ten users could be misclassified. For a product meant to keep explicit content away from minors, that margin is not a rounding error; it is a regulatory and reputational disaster waiting to happen, especially in a legal environment where many US states have passed or proposed laws requiring age verification before users can access adult material.

A week of retreats

The adult mode decision does not exist in isolation. On Monday, OpenAI announced that Sora, the AI video generation tool it had positioned as a creative platform for filmmakers and content creators, would be shut down. Sora consumed massive computing resources relative to its revenue, and its most prominent commercial partnership fell apart with the end of a three-year licensing deal with Disney that allowed users to create videos featuring Disney, Marvel, Pixar, and Star Wars characters. Disney had planned to invest $1 billion in OpenAI as part of the deal. No money changed hands.

Together, the three reversals paint a picture of a company pulling back from its consumer product experiments and refocusing on its core business. Investors are more interested in seeing OpenAI concentrate ChatGPT on coding assistants and a “super app” vision for transforming how businesses operate, with clearer monetization and fewer reputational threats, the Financial Times reports.

OpenAI has said it will reallocate resources to robotics and autonomous software agents, areas where the path from research to commercial value is more direct and where the regulatory landscape, while complex, carries none of the specific toxicity of sexualized AI and child safety.

A pattern

There is a recurring dynamic in OpenAI’s product strategy: announce ambitiously, collide with the real-world complications a less confident organization might have anticipated, and then recast the retreat as prudence. Adult mode was announced before the technical problems of generating explicit content safely had been solved, before the age verification system had reached acceptable accuracy, and before the advisory board’s concerns about mental health harms had been addressed. Disney’s partnership with Sora was announced before the product had demonstrated commercial viability. In both cases, the announcement generated attention and signaled ambition, but the follow-through exposed gaps between what was promised and what could be delivered.

The company’s willingness to shelve the feature rather than push it through despite the risks is noteworthy. It suggests that the pressure of lawsuits, investors, and internal discontent is beginning to act as a corrective mechanism, pulling OpenAI back from the edge of what is technically possible toward what is commercially and ethically sustainable. Whether that mechanism is reliable, or merely a response to the most visible crises, is a question the next product announcement will answer.


