Amanda Nurse

Can AI Technology Help Prevent Violence?


Just hours before the Apalachee High School shooting, Colt Gray sent ominous text messages to his mother: "You’ll see what happens" and "I feel like I have no one," according to The Washington Post. Though vague, these words were indicators of his distress. His mother called the school, saying there was an emergency and asking officials to check on him immediately, but they were unable to reach him in time. Authorities say Gray shot and killed two teachers and two students with an AR-15-style rifle.


The Apalachee shooting has reignited concerns about the importance of threat detection in preventing tragedies. This is not an isolated case. In Michigan, Ethan Crumbley, 15, was convicted of killing four students and injuring seven people in 2021. Just months before the attack, Crumbley texted a friend: “now it’s time to shoot up the school,” according to transcripts published by WDIV-TV, a Detroit TV station. He is now serving life in prison without the possibility of parole.


Despite numerous warnings like these, school shootings continue to rise. CNN reports there were 82 school shootings in 2023, compared to 79 in 2022 and 73 in 2021. Yet the systems in place to detect these threats before they escalate remain limited and often ineffective.


The reality is alarming. A 2019 report from the U.S. Secret Service Threat Assessment Center examined 41 incidents of school violence and found that while 50% of schools had resource officers and 68% had lockdown measures, only 17% had systems to alert officials about problematic behavior before an attack. In most cases, these systems were rudimentary, involving phone numbers, email addresses, or paper referrals. Every one of the assailants displayed problematic behaviors before the attack — mostly in school — and 74% of them did so online. 


And failing to detect threats can be costly. In 2021, the Justice Department agreed to pay roughly $130 million to survivors and families of victims of the 2018 Parkland, Fla., high school shooting in a settlement over the F.B.I.'s inadequate investigation into tips that the shooter might unleash a barrage of bullets at a school.


What’s in place is not working. Can AI technology help prevent violence?


The rise in violence calls for more advanced threat detection systems, capable of identifying signs of danger in real-time digital communications. Nearly half of 170 shooters who engaged in mass shootings between 1966 and 2019 publicly leaked their plans in advance, according to a 2021 JAMA study. This critical moment — where intent is communicated — presents an opportunity for intervention.


Threat detection technologies have evolved significantly, focusing on prevention rather than responding after the fact. “The role for law enforcement has to be prevention,” Marc Zimmerman, co-director of the Institute for Firearm Injury Prevention and the National Center for School Safety, told The New York Times. “Because if its role is enforcing laws and dealing with a shooting, that’s way too late.”


This is where Alphy’s Reflect AI comes in. Reflect AI is designed to flag harmful or dangerous language in real-time, detecting threats before they materialize. Our AI systems are trained not just on keywords but on actual human language and legal frameworks. If such technology had been in place, it could have flagged messages like those from Colt Gray or Ethan Crumbley, offering a chance for intervention before lives were lost.


The need for advanced threat detection has never been more urgent. As violence escalates in schools and other public spaces, it’s clear that more needs to be done to protect people before it’s too late. Reflect AI offers a critical layer of defense by analyzing communication for red flags and helping schools, law enforcement, and families take proactive measures.


Threat detection isn’t just for law enforcement — it’s a tool for schools, families, and communities. As we move forward, technology like Reflect AI will play a key role in ensuring that we act proactively to prevent violence before it happens.


Amanda Nurse is the editorial and operations coordinator at Alphy.


Reflect AI by Alphy is an AI communication compliance solution that detects and flags language that is harmful, unlawful, and unethical in digital communication, including disability discrimination in the workplace and in fair lending practices. Alphy was founded to reduce the risk of litigation from harmful and discriminatory communication while helping employees communicate more effectively.


Contact us for a demo at sales@alphyco.com.


