OpenAI CEO agreed to apologize to Tumbler Ridge community, says B.C. premier

The CEO of OpenAI has agreed to apologize to the community of Tumbler Ridge in the wake of last month’s horrific mass shooting, and to help develop recommendations for mandatory reporting of potentially harmful uses of artificial intelligence, according to B.C. Premier David Eby.

Eby said those commitments were made during a “tough” conference call Thursday afternoon with Sam Altman, OpenAI vice-president of global policy Ann O’Leary, Tumbler Ridge Mayor Darryl Krakowka and members of the premier’s staff.

“Mr. Altman is prepared to apologize,” Eby said. “Everybody on the call recognized that an apology is nowhere near sufficient, but also that it is completely necessary.”

OpenAI previously confirmed a ChatGPT account connected to shooter Jesse Van Rootselaar was banned in June 2025, and that the company considered notifying police about concerning interactions that violated its policies but ultimately chose not to do so—a decision that has faced sharp criticism, including from Eby.

“OpenAI had the opportunity to notify authorities and potentially even to prevent this tragedy from happening,” he said.

“It’s obviously an incredibly devastating reality, but it’s the reality we’re in.”

Going forward, Eby said his government will be pushing for federal “duty to report” standards, and that OpenAI has agreed to participate in order to ensure such regulations “would be effective” and “could actually be implemented.”

OpenAI confirmed both commitments in a statement to CTV News on Thursday evening, saying that Altman will be working with Eby and Krakowka to “find the best way to convey his apology” to Tumbler Ridge.

“What happened in Tumbler Ridge was an unspeakable tragedy, and our thoughts remain with the victims, their families, and the entire community,” the company said.

Eby credited Altman for participating in the call, acknowledging he was not obligated to do so, and suggested that, based on a review by the premier’s staff, OpenAI has better reporting standards than any similar company operating in Canada.

“For clarity, I don’t believe OpenAI’s current standard is sufficient,” Eby added. “Where there is an option to report, that option to not report could be taken again.”

The company previously confirmed that it turned over information on Van Rootselaar’s account to the RCMP following last month’s tragedy, and that it has been co-operating with law enforcement.

OpenAI has also sought to assure the public and policymakers that its standards have been strengthened in light of the incident, which marked one of the deadliest mass shootings in Canadian history.

In a letter to Artificial Intelligence Minister Evan Solomon last week, O’Leary said those changes would have resulted in Van Rootselaar’s account being flagged to police last year.

“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” O’Leary wrote.

The company has not shared specifics about how Van Rootselaar’s account was used, but O’Leary said OpenAI did not “identify credible and imminent planning” that would have met its previous threshold for a referral to police.

Eby called AI a technology with “incredible promise,” including in providing medical care and tackling issues such as climate change, but said it’s “not acceptable” for companies to have inconsistent standards or to make potentially life-or-death decisions guided strictly by internal policy.

Without a standardized approach enforced through regulations, the premier said, there will be an ongoing “threat” that such safeguards will fail.

“Nobody knows that better than the people of Tumbler Ridge,” he added. “It is the threat of people using this tool to be more effective in killing than they otherwise would have been.”