
Destructive Communication: Can AI Help Us Converse More Humanely?





Behaving professionally at work should be a given. But in reality, the line between professional decorum and shocking communication is alarmingly thin.


In building our AI classifier, Reflect AI, which detects harmful, unlawful, and unethical language in digital communication in real time, we regularly test out offensive sentences. It’s not for juvenile kicks. It’s to see how the classifier responds — finding strengths, weaknesses, and gaps to improve our detection coverage and accuracy. Our mission is to build an AI that helps us communicate more humanely, thereby saving companies from costly lawsuits and reputational harm spurred by toxic communication.


We routinely type sentences that represent mild expressions of harm, along with more severe types of language: intimidation, threats, harassment, hate, and anger. We scour lawsuits and exposés for the details of true stories of verbal and written abuse at work. You’d think that the language used in corporate America — and beyond — would be fairly mundane. Aren’t we called to a higher standard when interacting with people at work than we are after hours? Yet we’ve been stunned by the types of profane and profoundly disturbing language being used on the job, including threats against people’s lives.


Below are a few examples of inhumane language captured in texts or emails of employees and high-ranking managers at a blue chip bank, a nationally recognized insurance company, an anti-diversity and inclusion organization, a San Francisco Bay Area police force, an employment discrimination law firm, a federal fiscal oversight agency, and a national sports organization (asterisks added to spare you):


“Do NOT knock on the door cause if I see U ::: I will rape U in the hallway.”


“You have been in this country long enough, you should adapt to the way our country runs.”


“A healthy society requires patriarchy.” 


Asian countries don’t have same-sex marriage, but “more wholesome policies like prison” for gays.


“Tell him that he’s the reason why most people hate Jews.” 


“What’s this f*ggot’s problem?” 


“And I am going to set you on fire” and “[H]ug [your children] tight” because your “world will be over [tomorrow]”


“Women need to use sex to get ahead at the FDIC.”


“I’m only stopping them cuz they black. F–k them. Kill each other.”


As someone who loves language and communication, and knows the power of words to help or harm, I’m genuinely alarmed by the vile, prohibited, discriminatory, and hate-filled emails and texts sent by employees at all levels and across all industries. Today, most corporate compliance and surveillance departments respond to harm after the fact rather than work to prevent it.


We know our workplace is drastically different from what it was even a few years ago, with remote work and downsizing adding stressors that didn’t exist in the past. The pace of work has increased as well, driven in large part by digital communication, with an estimated 347 billion emails sent daily.


Could harmful communication spring from the perceived anonymity and detachment afforded by digital platforms? When we’re talking face-to-face, social norms and immediate feedback help regulate our behavior. Seeing someone’s reactions triggers empathetic responses in us, and a sense of social responsibility. Digital communication lacks these cues. The screen is a barrier, emboldening people to say things they wouldn’t dare to in person. This phenomenon, known as the “online disinhibition effect,” explains why people might engage in “flaming” or other harmful behaviors online.


The speed of digital communications also allows for impulsivity, if we let it. Who among us hasn’t sent messages without the tact or deliberation we might use in a one-on-one discussion in person? Lag times in response (with people working in different time zones, for example) can further weaken the feedback loop. That lack of immediate, visible consequence can also contribute to the illusion that our actions are not as impactful or real, creating a disconnect between online behavior and its real-world effects.


Compounding all of this are the changes in our modern workforces, which are increasingly varied in age, race, gender, orientation, ability, and religion, and — thanks to remote setups — often spread across cultures around the globe. Miscommunication can escalate, affecting the organization's core and its public reputation. It can influence customer trust and stakeholder confidence. 


The psychology behind this is complex. Humans need social connection, but connecting through technology can depersonalize us. Digital platforms can reduce individuals to mere avatars, stripping away the humanity and empathy that we bring to real-life interactions. That detachment allows harmful communication to thrive.


The impact of negative communication is profound, which is why companies need a commitment to communication and the technology to back it up. The Harvard Business Review reports that 80% of employees affected by incivility repeatedly dwell on the behavior, with 48% reducing their effort. Some 92% of workers queried in a 2023 American Psychological Association survey said it was important to work for an employer that values their psychological well-being at work. 


Until now, compliance solutions for companies of all sizes could only flag and detect harm in emails or texts after the fact. Reflect AI addresses this challenge head-on. 

Our proprietary technology detects potentially harmful and unlawful content in real time. It's a proactive, practical step towards protecting companies. Preemptively identifying problematic communication can nip it in the bud, rather than letting it fester into a toxic culture.


In addition to technological solutions, companies need to educate employees about the impact of their digital communications so they understand the potential fallout from their language. It only takes one email or text to wreak havoc on an employee, team, or company. 


Organizations must establish clear guidelines for digital communication, emphasizing the importance of maintaining the same standards of professionalism and courtesy as in face-to-face interactions. While digital communication has transformed the workplace, it has also exposed the darker side of human thought and interaction. Adopting innovative new compliance solutions can safeguard companies and protect individuals.


Julian Guthrie is the CEO and founder of Alphy.


Reflect AI by Alphy is an AI communication compliance solution that detects and flags language that is harmful, unlawful, and unethical in digital communication. Alphy was founded to reduce the risk of litigation from harmful and discriminatory communication while helping employees communicate more effectively.
