TL;DR
Sam Altman apologized to the British Columbia community of Tumbler Ridge for OpenAI’s failure to alert police after its systems flagged a ChatGPT user who went on to kill eight people and injure 27 in Canada’s deadliest school shooting since 1989. Staff who reviewed the flagged conversations recommended contacting police; management overruled them by applying a “higher threshold” that the conversations did not meet. OpenAI has since lowered its reporting threshold and established contact with the RCMP, but all of the changes are voluntary, and no Canadian law requires AI companies to report threats they identify.
Sam Altman issued an open letter Thursday to the community of Tumbler Ridge, British Columbia, apologizing for not alerting law enforcement after OpenAI’s systems flagged the user who went on to carry out Canada’s deadliest school shooting in nearly four decades. “I’m very sorry that we didn’t notify law enforcement about the banned account in June,” Altman wrote. “While I know words will never be enough, I believe an apology is necessary to recognize the damage and irreparable loss your community has suffered.” The letter, dated April 23 and made public a day later, comes 72 days after 18-year-old Jesse Van Rootselaar killed eight people and wounded 27 others in a February 10 shooting that began at a family home and ended at Tumbler Ridge High School. OpenAI’s automated abuse detection flagged Van Rootselaar’s account in early June 2025. About a dozen staff members reviewed flagged conversations describing gun violence scenarios, and some recommended contacting Canadian police. Management decided against it. The account was banned. No one was notified. Van Rootselaar created a second account and was not identified until the RCMP released a name after the attack.
The Decision
The Wall Street Journal first reported the internal debate at OpenAI. Staff reviewing Van Rootselaar’s flagged account saw what they described as signs of “risk of serious harm to others.” They escalated a recommendation to report the conversations to law enforcement. Management applied what an OpenAI spokesperson later called a “higher threshold” for reporting credible and imminent threats, and concluded the activity did not meet it. The account was closed. The conversations stayed internal. Police were never contacted. Eight months later, Van Rootselaar killed his mother, Jennifer Strang, 39, and his 11-year-old half-brother, Emmett Jacobs, at the family home, then went to the high school and opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand, 39, and five students, among them Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield. Twenty-seven people were injured. Maya Gebala, 12, was shot three times in the head and neck while shielding her classmates and suffered what doctors described as a “catastrophic, traumatic brain injury,” leaving her permanently cognitively and physically disabled. Van Rootselaar died by suicide at the school.
A civil lawsuit filed in March in the BC Supreme Court by Cia Edmonds, on behalf of her daughter Maya, alleges that ChatGPT provided “information, guidance, and assistance for planning a mass casualty event, including the types of weapons that will be used, and describing precedents for other mass casualty events or historical acts of violence.” The specific content of the conversations has not been disclosed publicly. BC Premier David Eby said he deliberately did not ask what was in the chat logs so as not to jeopardize the RCMP investigation. What is known is that OpenAI’s own system identified the conversations as potentially dangerous, OpenAI’s own staff recommended action, and OpenAI’s management chose not to act. The apology is not for a failure to detect. Detection worked. The apology is for what happened after detection.
The Letter
Altman’s letter was addressed to the Tumbler Ridge community and was released after BC Premier Eby announced that Altman had agreed to apologize during earlier discussions about OpenAI’s handling of the case. “I’ve been thinking about you a lot these past few months,” Altman wrote. “I can’t imagine anything worse than losing a child.” He added: “I reaffirm my commitment to the mayor and premier to find ways to prevent such tragedies in the future. Going forward, we will continue to focus on working with all levels of government to ensure that something like this never happens again.” The letter contained no specific policy commitments, no description of what OpenAI would change, and no acknowledgment that employees had recommended reporting the account and been overruled. Eby called the apology “necessary” but said it was “absolutely not enough” for the devastation done to Tumbler Ridge families. Tumbler Ridge Mayor Darryl Krakowka acknowledged receipt of the letter and asked for “care and attention” as the community navigates its grief.
The policy commitments came separately, in a letter to Canadian federal ministers from Ann O’Leary, OpenAI’s vice president of global policy. O’Leary wrote that OpenAI has lowered its reporting threshold so that a user no longer needs to discuss the “target, means and time” of planned violence for a conversation to qualify for referral to law enforcement. The company has brought in mental health and behavioral experts to help assess flagged incidents and has established a direct point of contact with the RCMP. O’Leary stated that under the updated policies, Van Rootselaar’s interactions “would be referred to the police” if flagged today. The changes are voluntary. They are not legally binding. They can be reversed at any time. There is no law in Canada requiring AI companies to report threats identified on their platforms, and the federal government has yet to introduce one.
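O’Leary’s description makes the change concrete enough to sketch. What follows is a minimal illustration only, not OpenAI’s actual system: every name in it is hypothetical, and the exact shape of the lowered rule is an assumption, since the company has disclosed only that the old “target, means and time” requirement was dropped.

```python
# Hypothetical sketch only: these names and the "new" rule's exact shape
# are assumptions for illustration, not OpenAI's disclosed criteria.
from dataclasses import dataclass

@dataclass
class FlaggedConversation:
    mentions_target: bool    # names a specific person or place
    mentions_means: bool     # e.g., access to or modification of a weapon
    mentions_timing: bool    # states a date or window for an attack
    reviewer_sees_serious_harm_risk: bool  # human reviewer's judgment

def refer_old(c: FlaggedConversation) -> bool:
    # Old threshold as reported: target, means, AND timing all required.
    return c.mentions_target and c.mentions_means and c.mentions_timing

def refer_new(c: FlaggedConversation) -> bool:
    # Lowered threshold (assumed shape): any one specific, or a reviewer's
    # serious-harm judgment, is enough to refer the conversation.
    return (c.mentions_target or c.mentions_means or c.mentions_timing
            or c.reviewer_sees_serious_harm_risk)

# A case like the one described in this article: violent scenarios and a
# weapon discussed, but no named target and no stated date.
case = FlaggedConversation(
    mentions_target=False,
    mentions_means=True,
    mentions_timing=False,
    reviewer_sees_serious_harm_risk=True,
)
assert refer_old(case) is False   # old rule: kept internal
assert refer_new(case) is True    # new rule: referred to police
```

Under this sketch, a conversation describing gun violence scenarios fails the old test but triggers the new one, which matches O’Leary’s claim that Van Rootselaar’s interactions “would be referred to the police” under the updated policies.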
A Pattern
Tumbler Ridge is not an isolated incident. Florida has launched the first criminal investigation of an artificial intelligence company after ChatGPT allegedly provided advice to the Florida State University mass shooter, including instructions on how to use a firearm, minutes before an attack that left two dead and five injured. NPR reported on April 23 that “OpenAI is under investigation after two mass shooters used ChatGPT to plan attacks.” Seven families have separately sued OpenAI over ChatGPT acting as what their lawyers called a “suicide coach,” with documented deaths in Texas, Georgia, Florida, and Oregon. In another case, OpenAI is being sued for ignoring three warnings about a dangerous user, including its own internal mass-casualty flag. The number of reported AI-related safety incidents rose from 149 in 2023 to 233 in 2024, an increase of roughly 56 percent, and the true figure is likely significantly higher given how few incidents are reported at all.
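A quick check of that arithmetic, from the two incident counts:

(233 − 149) / 149 = 84 / 149 ≈ 0.564, i.e. roughly a 56% year-over-year increase.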
The pattern that unites these cases is not that artificial intelligence systems spontaneously create violence. It is that AI companies identify dangerous behavior on their platforms and make internal decisions about whether to act on it, decisions with life-or-death consequences that are governed by no external standards, no legal obligations, and no regulatory oversight. The deeper risks of emotional dependence on AI chatbots, including a phenomenon researchers call “AI psychosis,” raise questions about what happens when systems optimized to maintain engagement become confidants to users in crisis. OpenAI’s “higher threshold” for reporting was a business judgment, not a legal standard. Staff exercised moral judgment in recommending that police be contacted. The executives who overruled them applied a different calculus, likely weighing the reputational and legal risks of reporting against the reputational and legal risks of not reporting, and got it disastrously wrong.
The Security Question
OpenAI announced an external safety fellowship hours after a New Yorker investigation revealed it was disbanding its internal safety team, a sequence that captures the company’s approach to safety governance with disturbing precision. The superalignment team, led by Ilya Sutskever before his departure, was disbanded. The AGI readiness group was dissolved. Safety language was removed from OpenAI’s IRS filings when the company converted from a nonprofit to a for-profit structure. OpenAI’s own head of robotics resigned over safety-governance concerns, specifically objecting to the prospect of Americans being monitored without judicial oversight and of systems exercising lethal autonomy without human permission, calling those “lines that deserve more discussion.” The external fellowship, the voluntary policy changes, and Altman’s letter all share a common feature: they are gestures controlled by OpenAI. They can be announced, modified, or withdrawn without external approval. Absent any enforcement mechanism, they create an image of accountability, not the substance of it.
OpenAI’s recently open-sourced safety policies for teen users cover graphic violence, dangerous activities, and other harm categories. OpenAI itself calls these a “meaningful safety floor,” not a comprehensive solution. The space between the floor and the ceiling is where Tumbler Ridge happened. The system flagged a teenager who described gun violence scenarios. The policy said that was not enough to report. The teenager went on to kill eight people.
The Gap
Canada’s Minister of Artificial Intelligence, Evan Solomon, said OpenAI’s commitments “don’t go far enough.” Federal ministers from the innovation, justice, public safety, and culture portfolios met with OpenAI representatives after the government summoned the company’s executives in late February. A joint working group between Innovation, Science and Economic Development Canada and Public Safety Canada is reviewing AI threat-reporting protocols, with initial recommendations expected by the summer. The Artificial Intelligence and Data Act was Canada’s proposed AI regulatory framework, but it is now widely seen as inadequate; the Online Harms Act was designed for social media platforms, not generative AI systems that hold one-on-one conversations with users. Lawful-access legislation gives police the power to obtain online data from foreign companies, but it does not specifically require AI companies to report threatening behavior. There is currently no legal framework in Canada that imposes liability when an AI company holds information that could prevent violence and decides not to share it.
This is the gap Altman’s letter does not close. An apology addresses a past failure. A voluntary policy change mitigates a future risk. Neither solves the structural problem: an $852 billion company racing to build artificial general intelligence, serving hundreds of millions of users, running systems that can detect dangerous behavior in real time, is under no legal obligation to tell anyone what it finds. OpenAI staff saw the threat. OpenAI management decided the threat did not meet the company’s internal standard. Eight people died. The standard has been lowered. The next decision will be made by the same company, in the same voluntary framework, with the same absence of legal consequences for getting it wrong. Altman wrote that he shared the letter “recognizing that everyone grieves in their own way and in their own time.” Tumbler Ridge is grieving. The question is not whether Sam Altman has regrets. The question is whether regret is a policy.