James P. de Haan, Preventing #MeToo: Artificial Intelligence, the Law, and Prophylactics, 38 Minn. J. Law & Ineq. 69 (2020).

Increasingly sophisticated data analytics paired with machine learning is changing the world, and workplace applications are already a thriving industry. Over the last five years or so, legal scholars have increasingly explored the legal implications of these new technologies. Most of that work has focused on concerns related to privacy or discrimination, and quite a bit of it focuses on the use of this technology in hiring. This focus reaches only part of the “people analytics” industry; it leaves out the application of predictive analytics to first analyze and then shape worker behavior and the working environment.

In Preventing #MeToo: Artificial Intelligence, the Law, and Prophylactics, James P. de Haan tackles this kind of application of AI in the workplace by looking at how predictive analytics could be used to prevent harassment. It’s a great time to be thinking about this potential application, for at least three reasons: the #MeToo movement has led employers to pay more attention to preventing harassment; the technology appears to be nearly within reach; and thinking about this application might help us think carefully about other ways AI might be used to shape worker behavior and the working environment.

Reviewing this article for the Journal of Things We Like (Lots) posed a challenge because I do not like the kind of surveillance-based, AI-driven behavior-prediction program described in this article. What I do like is that, knowing this is coming, de Haan has identified the outlines of how such a program would work, explained its appeal as a tool to prevent harm, and set up several preliminary concerns we should have, highlighting some challenges we need to continue to think through.

As de Haan notes, sexual harassment (as well as harassment on the basis of other identity characteristics) remains widespread, despite the #MeToo movement. Part of the reason harassment remains so widespread is that it is grossly underreported, and part of the reason it goes underreported is fear of retaliation. That fear is well founded: according to the EEOC, about 75% of employees who speak out about harassment report experiencing some form of retaliation for doing so.

As de Haan further describes, employers have responded to their legal obligations by nearly universally adopting harassment policies and training. While the existence of policies and training may, in some instances, provide a defense to a harassment claim, there is no evidence, as the EEOC has noted, that they are effective at preventing harassment. de Haan chalks this up to human involvement, noting that “sexual harassment policies . . . are only as good as the managers who implement them and are responsible for making sure there is broad compliance.” Because the legal standard for determining when a working environment has become objectively hostile is ill-defined, people are notoriously bad at recognizing it.

From that central observation, de Haan explores what it might mean to remove the human evaluator by employing AI. After summarizing sexual harassment law and the obligations it imposes on employers to prevent or remedy harassment, de Haan reviews the critiques that the law fails the harassed. The bulk of the article examines how AI might be trained to recognize sexual harassment and identifies a number of legal implications of such a system, namely, expanded employer duties to monitor and prevent harassment and the implications of that monitoring for privacy, reputation, and workplace camaraderie.

One of de Haan’s central premises is that harassment harms both employees and employers; when it comes to preventing harassment, then, the interests of the employer and the employee are not diametrically opposed. For this reason, de Haan notes, a cooperative system that recognizes the joint interests of the harassed employee and the employer would be more likely to correctly pinpoint the harasser as the workplace problem, rather than mislabeling the harassed employee as the problem. In this way, such a cooperative system could promote reporting.

AI is that kind of system. Generally, one of the key uses of AI is to extract patterns and then “map[] out extant and predicted relationships based on these patterns.” In de Haan’s view, this is exactly the kind of thing that is needed to prevent harassment. Such a program could recognize when interactions between employees might risk creating a hostile environment—and could be more accurate than a person at identifying it early, given the difficulty in defining when an environment becomes hostile.

And as he describes, socio-cultural studies show that harassment is predictable when social situations, personalities, and context clues are analyzed. In fact, businesses are already using software to identify and prevent harassing conduct in email communications. The existing programs, though, are not sensitive enough: they require actionable harassment to occur, or nearly occur, before they alert human resources. To really prevent harm, de Haan argues, the prediction and warning must come earlier in the process—before an actionable harassment claim arises.

de Haan next explains how the AI would be trained, and how it would gather the data that would allow such a prediction. He suggests that new hires might play a game that could “allow the program to assign a ‘sensitivity profile’ for each employee.” Based on the research he identifies, that game should also test for a person’s problem-solving skills, propensity for confrontation, and notions of justice. But this is only part of the data that would be needed. The program would also have to learn what conduct is likely to be perceived as harassment. For this, de Haan suggests that

Permitting the program to review internal HR files and complaints would help it understand what actually leads to low-level complaints. . . . Taking this a step further, the program could even reach out of network to comb the internet for all publicly available information about a company’s employees. It can use an employee’s photo to identify social networks and map out relationships with co-workers based on extant connections, photos, conversations, tags, and content interaction.

And this is where the potential gets particularly scary. The program de Haan describes

ranks people based on susceptibility to sexual harassment; categorizes them as potential victims and harassers; consolidates mounds of highly sensitive, private information into one central location; and, perhaps most worryingly, potentially punishes people for acts never committed.

To be effective, an AI harassment-prevention program must both warn early enough to prevent harm before an official report is made and maintain an accurate “map” of the organization’s employees (including their personalities, work functions, and power relative to each other). To achieve this, the program will have to consume “massive amounts of data.”

After painting this picture, de Haan warns of four main legal implications. First, an employer’s ability to monitor employees this way may ripen into a duty to monitor them, if that monitoring is in fact effective at preventing harassment. Second, a system that raises red flags may also create a duty to warn employees who might be targets of harassment. Third, the expanded capability and warnings will likely trigger a duty to investigate more employee conduct. And last, the collection of so much data—particularly data from outside the workplace—and its use to potentially label an employee a harasser raise significant privacy and autonomy concerns.

In the end, de Haan notes that nothing he has explained solves the problem of what an employer should do with information that a particular situation risks creating a hostile environment at some point in the future, although he raises the specter of how employees at risk of harassment might react if they learned that nothing was done in the face of that kind of warning. Notably, he does not do much to address the converse of these concerns—how to keep protection of at-risk employees from limiting their work opportunities. As he briefly notes, one response to the #MeToo movement has been that men in positions of power have stopped mentoring women subordinates or engaging in social interactions with them. And because this program would assess all employees, we might worry that a rational employer would segregate employees assessed as highly sensitive or “over-sensitive” to harassment in career-limiting ways, or simply not hire them in the first place.

As I said at the beginning, reviewing this article for the Journal of Things We Like (Lots) posed a challenge because I do not like the kind of program described in it. But de Haan has provided an important first look at how such a program would work, explained its appeal as a tool to prevent harm, and set up an early warning of some concerns and challenges. It is a good step, one that highlights how many new challenges we need to continue to think through.

Cite as: Marcia L. McCormick, An Early Warning on an Early Warning System: Avoiding Minority Report, Workplace Edition, JOTWELL (April 16, 2021) (reviewing James P. de Haan, Preventing #MeToo: Artificial Intelligence, the Law, and Prophylactics, 38 Minn. J. Law & Ineq. 69 (2020)), https://worklaw.jotwell.com/an-early-warning-on-an-early-warning-system-avoiding-minority-report-workplace-edition/.