Halloween frights come once a year. What’s even scarier to corporate HR and legal teams is the ongoing risk of litigation over miscommunication in the workplace.
Sometimes the miscommunication amounts to a minor misunderstanding. Other times, the messages conveyed lead to lawsuits. It's a significant issue in the U.S., where litigation stemming from harmful workplace communication costs companies dearly every year, so it's worth remembering that your words can get you sued.
We hear a lot about toxic and discriminatory language. What’s the difference? We’re here to explain, and to help keep you out of hot water with Reflect AI, our communication solution that gives you feedback on your messages in real time, before you hit send.
You can get our free Reflect AI for Gmail in Google Workspace.
Last week’s blog looked at toxic, harmful language — communication that is disrespectful, offensive, and hurtful. This week, we’re exploring the meaning of discriminatory, unlawful language. Discriminatory language targets a person or group of people based on federally protected characteristics, such as race, color, religion, sex, national origin, age (40 and older), disability or genetic information. Federal law bans discrimination in all aspects of employment.
Here are 15 examples of unlawful language relating to employment, each of which Reflect AI would flag if you typed it into an email:
“Asians don’t make good managers”
“His qualifications are good, but we’re not hiring any more Indians on the team”
“She’s about to pop out another baby and won’t be able to concentrate at work”
“Women don’t have what it takes to be a tough CEO”
“I took one look at him in his wheelchair and told him we weren’t hiring at this time, even though we were”
“Pass on the visually impaired candidate — we’ll have to make accommodations and that’ll cost money”
“Wearing that hijab to the interview was one reason we didn’t give her the job”
“You really think the guys will tolerate a lesbian manager?”
“Pull the trans employee from the sales team — she might make customers uncomfortable”
“I took one look at him in the interview and thought we need younger blood”
“It’d be so great to have you under me”
“I’d love it if you’d come to the office on Halloween in costume dressed as a sexy nurse”
“If that new little thing from marketing was a little older I’d try to hit it”
“When it’s time for raises, I’m certainly going to remember that he testified in that lawsuit against the company”
“I’m tired of hearing her whine about how she thinks the guys in the shop are harassing her, so let’s make her want to quit”
Reflect AI is designed to hold up a mirror to what you’re saying, so you can see how your words might be interpreted on the other end. It’s an always-on monitor that protects employees from making the serious mistakes that lead to lower morale, lower productivity, high turnover, and costly lawsuits.
We don’t tell you what to say or prevent you from saying it. You’re in control of your messages. Our goal is to steer you in a thoughtful, respectful, helpful direction, every time you communicate. That’s a road everyone wants to travel. Otherwise, you just may get ghosted — or worse — for your frightful language.
Carolyne Zinko is the editorial director and AI editor at Alphy.
Reflect AI by Alphy is a SaaS platform that flags harmful language across dimensions including topic, tone, “isms,” confidence, mindset, and appropriateness. Our AI language classifier detects risks in emails prior to send, flags conversational missteps (and successes) in video meetings in real time, and upskills individual communication with targeted, personalized microlearning.