OpenAI’s ChatGPT Accused Of Aiding Florida State MASS SHOOTER

“We have reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes”

Big Tech’s leading AI faces growing accusations of enabling violence rather than preventing it.

Attorneys representing the family of Robert Morales, killed in the April 17, 2025, Florida State University shooting, announced plans to sue OpenAI and ChatGPT. The law firm Brooks, LeBoeuf, Foster, Gwartney and Hobbs stated the suspected gunman, Phoenix Ikner, was in “constant communication” with the chatbot leading up to the attack.

Ikner opened fire outside the FSU student union, killing Morales, a 57-year-old Aramark worker and father, and Tiru Chabba, 45, a vendor from South Carolina. Six others were wounded. Court records list more than 270 images of ChatGPT conversations as exhibits.

The firm declared: “We have reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes. We will therefore file suit against ChatGPT, and its ownership structure, very soon, and will seek to hold them accountable for the untimely and senseless death of our client, Mr. Morales.”

Recent coverage also notes newly released chat logs where Ikner reportedly asked ChatGPT about school shootings and the busiest times on campus.

One post cited specifics such as the chatbot telling him the Student Union was busiest between 11:30am and 1:30pm; the shooting began at 11:57am.

The New York Post reported the claims in detail.

OpenAI responded that it identified an account believed to be associated with the suspect after the shooting, proactively shared information with law enforcement, and cooperated fully. The company says it builds ChatGPT to respond safely and continues to improve its safeguards.

Yet the body count linked to such interactions keeps rising, while the company’s selective enforcement and post-incident cooperation fail to reassure victims’ families preparing legal action.

This incident follows another high-profile case. In February 2026, Canadian trans shooter Jesse Van Rootselaar carried out a deadly attack at Tumbler Ridge Secondary School.

OpenAI employees were alarmed by his disturbing ChatGPT messages and discussed alerting authorities, but the company chose not to notify police beforehand, instead banning the account.

They only contacted law enforcement after the shooting. A family has already sued OpenAI over that incident as well.

These developments echo earlier warnings. In prior tests, ChatGPT provided detailed suicide instructions and drug-and-alcohol guidance to testers posing as a 13-year-old.

Studies have found that as many as one in four teens now rely on AI therapy bots for mental health support, raising questions about vulnerable users interacting with systems that appear inconsistent on harm prevention.

ChatGPT’s selective ideological programming has also been repeatedly called into question. For example, it once refused a hypothetical request to quietly utter a racial slur even to save a billion white people.

Americans expect technology that upholds safety and individual responsibility, not systems that lecture on ethics while allegedly guiding violence. The mounting lawsuits and documented failures demand accountability from OpenAI and scrutiny of the priorities embedded in its models. Until Big Tech prioritizes preventing real-world harm over narrative control, these tragedies risk becoming a grim pattern rather than isolated failures.


