AI Programs in Japan are Forcing Workers to Smile More – Would That Work in the U.S.?
Insights
9.05.24
A Japanese supermarket chain is getting attention for implementing an AI tool called “Mr. Smile” that monitors workers for the quality and quantity of their smiles when interacting with customers, raising questions around the globe about how far to allow AI into the workplace. Mr. Smile, introduced at eight Aeon locations earlier this year, initially monitored over 3,000 employees with artificial intelligence technology, using more than 450 elements to assess facial expressions, length and sincerity of smiles, and volume and tone of voice. Deeming the trial a success, Aeon just announced it will roll out the system to all 240 stores and monitor tens of thousands of workers across Japan “to standardize staff members’ smiles and satisfy customers to the maximum.” Could companies in the U.S. get away with AI-driven emotional monitoring? Here’s what employers in Japan and the U.S. should consider when looking into AI technology that mandates specific emotional displays from workers.
Worker Advocates in Japan Worried Mr. Smile Could Lead to Kasuhara (Customer Harassment)
Some believe that Aeon’s nationwide rollout of Mr. Smile is well-intentioned. After all, the standards for customer service in Japan are famously high, and this program will provide feedback that helps workers improve their skills and create a happier experience for customers.
But there’s a dark side. According to a recent report, worker advocates are worried about rising rates of kasuhara – customers harassing workers for not being friendly enough to them. They believe that a system like Mr. Smile will put even more pressure on workers to maintain a state of constant happiness when dealing with customers, and could prompt dissatisfied customers to complain whenever their experience falls short of these very high expectations.
- In fact, several other large employers in Japan have recently adopted standards in an attempt to protect workers from customers who exploit their superior consumer positions to commit illegal acts or make unreasonable demands, even banning violators or calling the police. New policies prohibit abusive language, loud voices, insults, threats, excessive demands, and other unreasonable behavior.
- This problem has caught the eye of government regulators, who want to take action to stop the harassment. Tokyo will likely be the first of Japan’s 47 prefectural assemblies to pass an ordinance prohibiting customer harassment by March 2025, and the ruling Liberal Democratic Party will soon propose a law to be brought before the Diet (national legislative body). This follows the Labour Ministry issuing guidelines on how to protect employees from customer harassment and recognizing trauma caused by kasuhara as a workplace accident.
AI to the Rescue?
A recent report described two new AI tools that will aim to counteract the scourge of customer harassment:
- Masayuki Kiriu, Toyo University’s dean of sociology, is developing an AI-driven training tool that can coach employees on how best to respond to abusive customers. It also assesses each worker’s threshold for harassment in an effort to educate companies developing policies on customer harassment.
- Another tech company is developing AI software that tunes out the anger in people’s voices on phone calls, making angry people sound calm and protecting workers from abuse.
Mr. Smile Might Not Be Welcomed in the U.S.
Any enterprising employer that wants to consider using an AI system to monitor facial expressions at work and mandate more smiling and happier tones will need to overcome a few potential legal barriers.
Disability Discrimination and Accommodations
Employees who are unable to conform to typical smiling standards due to physical, neurological, or mental conditions might not fare well under Mr. Smile’s watchful eye, and that could cause you problems under the Americans with Disabilities Act (ADA) and state disability laws. For example:
- Employees who suffer from Bell’s Palsy or other physical conditions that involve facial paralysis, or who have suffered a stroke, or have facial nerve damage, are just some of the types of individuals who could have a physical or neurological reason they cannot smile.
- Those with depression, anxiety disorders, PTSD, bipolar disorder, or other mental health conditions might not meet the standards set by a mandatory smiling program.
- Neurodiverse workers, such as those on the autism spectrum, may have difficulty interpreting and expressing emotions in socially typical ways. This could include them smiling less frequently or at unexpected times, which may not align with conventional social cues.
Under the ADA, employers must make reasonable accommodations for employees with disabilities unless such accommodations would cause undue hardship to the business. In the context of facial expression monitoring, if a disability prevents an employee from meeting this smiling standard, you may need to consider whether there are alternative ways to achieve the desired customer service outcomes without discriminating against that employee.
Potential AI Bias
AI systems that track facial expressions can have biases, particularly in recognizing emotions across different racial or ethnic groups. These systems may inaccurately evaluate the facial expressions of non-white employees, leading to unfair treatment or discrimination claims based on race or ethnicity. The ACLU recently alleged that a company that uses a video tool to aid with interviewing prospective workers is likely to discriminate based on race and other protected characteristics because of the underlying AI data relied upon by the programs. A similar argument might be made against any tool like Mr. Smile.
Privacy and Biometric Concerns
While employers generally have the right to monitor employees performing their duties in the workplace, constantly tracking facial expressions could be seen as an invasion of employees’ privacy, especially when such data could be collected continuously throughout the workday. Any employer that uses a facial recognition system would also need to ensure that any information collected about the workers’ faces is not mishandled or disclosed without consent. Collecting, sharing, or using this data in ways that could compromise employee privacy could lead to legal concerns.
Also, implementing an AI system to monitor employees’ facial expressions could raise several legal concerns under state privacy laws. The Illinois Biometric Information Privacy Act (BIPA) is arguably the most stringent. If the AI system captures and analyzes employees’ facial geometry to monitor expressions, this could fall under the part of the law that regulates the treatment of biometric identifiers. To start, employers would need to obtain informed consent from workers before collecting this information, and would also need to provide certain disclosures to workers, among other requirements.
Employee Morale and Stress
Aside from the legal hurdles, employees who know they are being monitored for smiling could feel additional stress or pressure to conform to what they consider to be arbitrary and artificial behavioral standards, which might reduce morale and productivity. It could lead to high turnover, difficulty recruiting new workers, and a poor reputation in the marketplace. You could also face the situation where workers feel the need to turn to a union to help address what they consider to be a troubling work environment.
Labor Relations
Even non-unionized employers are required to comply with federal labor law, and the National Labor Relations Board could have at least two potential concerns over a system like Mr. Smile in the workplace. First, the Board’s General Counsel warned employers several years ago that agency investigators would be targeting “electronic workplace surveillance” to ensure it didn’t interfere with employees’ protected workplace activity. A system that tracked employee smiles might very well be in its crosshairs. Separately, the Board has been scrutinizing workplace civility policies under a relatively new standard, concluding that many otherwise common and seemingly benign rules might conceivably chill employees’ organizing rights. Given that at least one current Board proceeding is challenging rules that require individuals to “be positive” and “smile and have fun,” it would not be a stretch to see the agency put a policy requiring workers to smile under the microscope.
Conclusion
While some businesses in Japan might be open to a mandatory-smile policy enforced by AI, there are hurdles to overcome if you want to consider a similar program in the States. We’ll continue to monitor developments and provide the most up-to-date information directly to your inbox, so make sure you are subscribed to Fisher Phillips’ Insight System. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group, Privacy and Cyber Group, or International Practice Group.
Related People
- Kate Dedenbach, CIPP/US, Of Counsel
- Joshua D. Nadreau, Regional Managing Partner and Vice Chair, Labor Relations Group
- Karen L. Odash, Associate