Big Tech’s leading AI faces growing accusations of enabling violence rather than preventing it.
Attorneys representing the family of Robert Morales, killed in the April 17, 2025, Florida State University shooting, announced plans to sue OpenAI and ChatGPT. The law firm Brooks, LeBoeuf, Foster, Gwartney and Hobbs stated the suspected gunman, Phoenix Ikner, was in “constant communication” with the chatbot leading up to the attack.
Ikner opened fire outside the FSU student union, killing Morales, a 57-year-old Aramark worker and father, and Tiru Chabba, 45, a vendor from South Carolina. Six others were wounded. Court records list more than 270 images of ChatGPT conversations as exhibits.
BREAKING: Florida State University gunman had 270+ chats with ChatGPT right before the shooting that left 2 people dead.
— DogeDesigner (@cb_doge) April 7, 2026
Victims’ attorney just said it “may have advised the shooter how to commit these heinous crimes.”
ChatGPT acted as mass murder consultant. pic.twitter.com/odQYv9LOg8
The firm declared: “We have reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes. We will therefore file suit against ChatGPT, and its ownership structure, very soon, and will seek to hold them accountable for the untimely and senseless death of our client, Mr. Morales.”
A mass shooter used ChatGPT to plan the FSU shooting, killing 2 and injuring 5.
— Katie Miller (@KatieMiller) April 8, 2026
ChatGPT advised the shooter on executing the deadly shooting on a college campus.
There are more than 270 ChatGPT conversations listed as exhibits in the case.
This is now the 20th death tied to…
Recent coverage also points to newly released chat logs in which Ikner reportedly asked ChatGPT about school shootings and the busiest times on campus.
One post cited details such as the chatbot telling him the Student Union was busiest between 11:30 a.m. and 1:30 p.m.; the shooting began at 11:57 a.m.
The New York Post reported the claims in detail.
ChapGPT helped Florida State University gunman plan mass shooting, victim's attorney claims https://t.co/NDv8zx2Zbg pic.twitter.com/m2tavLoLAx
— New York Post (@nypost) April 8, 2026
OpenAI responded that it identified an account believed to be associated with the suspect after the shooting, proactively shared information with law enforcement, and cooperated fully. The company says it builds ChatGPT to respond safely and continues to improve its safeguards.
Yet the body count linked to such interactions keeps rising, while the company’s selective enforcement and post-incident cooperation fail to reassure victims’ families preparing legal action.
This incident follows another high-profile case. In February 2026, Canadian trans shooter Jesse Van Rootselaar carried out a deadly attack at Tumbler Ridge Secondary School.
OpenAI employees were alarmed by his disturbing ChatGPT messages and discussed alerting authorities, but the company chose not to notify police beforehand, instead banning the account.
Canadian trans shooter's disturbing ChatGPT messages alarmed employees – but company never alerted cops https://t.co/Jl8KhxKZeo pic.twitter.com/Mi8BNrsRFZ
— New York Post (@nypost) February 21, 2026
The company contacted law enforcement only after the shooting. A family has already sued OpenAI over that incident as well.
FAMILY SUES OPENAI: “CHATGPT HELPED PLAN MASS SHOOTING”
— NewsForce (@Newsforce) March 11, 2026
A lawsuit says the Tumbler Ridge shooter used ChatGPT to help plan the attack, and that employees allegedly flagged the chats as an imminent risk before anyone got hurt.
Source: NewsForce pic.twitter.com/SulETFiGtR
These developments echo earlier warnings. ChatGPT once provided detailed suicide instructions and drug-and-alcohol guidance to testers posing as a 13-year-old.
Studies have found that as many as one in four teens now rely on AI chatbots for mental health support, raising questions about vulnerable users interacting with systems that appear inconsistent on harm prevention.
ChatGPT’s selective ideological programming has also been repeatedly called into question. For example, it once refused a hypothetical request to quietly utter a racial slur even to save a billion white people.
Americans expect technology that upholds safety and individual responsibility, not systems that lecture on ethics while allegedly guiding violence. The mounting lawsuits and documented failures demand accountability from OpenAI and scrutiny of the priorities embedded in its models. Until Big Tech prioritizes preventing real-world harm over narrative control, these tragedies risk becoming a grim pattern rather than isolated failures.
Your support is crucial in helping us defeat mass censorship. Please consider donating via Locals or check out our unique merch. Follow us on X @ModernityNews.