In the workplace, the impact of language — especially discriminatory or bigoted language — can reverberate far beyond hurt feelings or offended sensibilities. Companies and organizations that don't adequately defend against such language can face substantial financial and professional repercussions. This is part of what makes Alphy’s Reflect AI indispensable in helping individuals and companies preemptively identify problematic communication.
Citigroup might have spared itself considerable harm had Reflect AI been deployed for all of its employees. The banking giant was recently fined $24.5 million for blocking Armenian Americans from obtaining credit cards. The company relied on the discriminatory assumption that applicants with last names ending in -ian or -yan, particularly those from a large enclave in Southern California known as “Little Armenia,” were more likely to be involved in fraud.
Employees dubbed the applicants “bad guys,” the New York Times reported, and referred to them as the “Southern California Armenian Mafia,” according to the Wall Street Journal. The Armenians were not applying for Citibank cards but for credit cards from retailers such as Home Depot and Best Buy, which were underwritten by the bank. Evidence of racial stereotyping was found in employee emails. The credit card denials occurred from 2016 to 2021.
The Consumer Financial Protection Bureau (CFPB) found the practice to be a violation of the Equal Credit Opportunity Act. In addition to the multimillion-dollar fine, the bank had to pay $1.4 million in restitution to affected consumers. In short, bigotry costs money, and it damages a brand and its relationships.
Financial institutions must know their clients well enough to avoid facilitating fraudulent activities. However, they cannot make discriminatory decisions based on a person’s national origin or surname, the Wall Street Journal noted.
“Citi stereotyped Armenians as prone to crime and fraud,” Rohit Chopra, the director of the CFPB, said in remarks quoted by the New York Times. “In reality, Citi illegally fabricated documents to cover up its discrimination.”
Citigroup explained that its employees were attempting to respond to a documented Armenian fraud ring operating in specific areas of California, but acknowledged those workers went too far and engaged in “impermissible actions,” the Wall Street Journal reported.
In addition to financial loss, the actions damaged Citigroup’s reputation and might affect its long-term profitability — the bank is now under intense regulatory scrutiny to overhaul its risk management systems, which will take time and cost money.
The takeaway for companies is clear: fostering an environment that actively combats discriminatory language and practices isn't just the right thing to do — it's a financial imperative. Companies must be vigilant and proactive, with policies and programs that mitigate risk and protect the bottom line.
Reflect AI plays a crucial role in this process. As an always-on monitor that alerts users in real-time to how their words might be interpreted by others, it offers a proactive solution for positive and respectful workplace communication while pinpointing the true bad guys.
Carolyne Zinko is the editorial director and AI editor at Alphy.
Reflect AI by Alphy is a SaaS platform that flags harmful language across dimensions including topic, tone, “isms,” confidence, mindset, and appropriateness. Our AI language classifier detects risks in emails before they are sent, flags conversational missteps (and successes) in video meetings in real time, and upskills individual communication with targeted, personalized microlearning.