In Male Same-Sex “Horseplay”: The Epicenter of Sexual Harassment?, Professor Kimberly D. Bailey explores the depth and limits of one of the carveouts from sexual harassment liability sanctioned by the Supreme Court in its 1998 decision, Oncale v. Sundowner Offshore Services: male horseplay. In Oncale, the Supreme Court acknowledged that same-sex sexual harassment is actionable under Title VII, but also stated that horseplay among male employees was not sexual harassment. Using a “masculinities-modified” lens, Professor Bailey delves into the notion that even gender-conforming men have gendered relationships and interpersonal interactions, in order to properly classify much of what is presently dismissed as “horseplay” as sex discrimination in the workplace.
Masculinities theory approaches structural and other sex discrimination against women by focusing on men: how they are socialized, and how they perform masculinity. Using this lens, Professor Bailey elaborates upon the often-levied critique that Oncale would, as she put it, “reinforce the sexual desire paradigm.” (P. 95.) Bailey explains that horseplay is often “masculinity competition that leads to harassment among gender-conforming men.” Therefore, she concludes that gender-conforming men are deprived of a good deal of legal protection to which they should be entitled under Title VII, advocating for the abolition of the male-horseplay carveout in order to eradicate sexual harassment more broadly in the workplace. (P. 95.)
One of the many aspects of this article that scholars will “like lots” is how it provides an exposition of, and then builds upon, past scholarly critiques of Oncale. This, in and of itself, is valuable, and the article reads like a treatise on how Oncale centers desire-based harassment through its analysis and carveouts. To the extent that Professor Bailey elaborates on scholarly views on the impact of Oncale and its progeny on the regulation of discrimination against the LGBTQ community, this post-Bostock analysis is most thought-provoking and useful.
Possibly the piece’s most valuable contribution is its laser focus on the interpersonal workplace dynamics between gender-conforming men. As Professor Bailey recites, the “gendered hierarchy” created when men perform masculinity to show one another and women that they are more masculine than other men, renders sexual harassment “not just a product of men’s relationships with women, it is also a product of their gendered relationships with one another.” (P. 99.) With this nuanced understanding, Professor Bailey demonstrates how much of what courts relegate to the realm of horseplay is actually the harassment of men, absent desire.
This piece will also be of interest to those who teach Employment Discrimination. I plan to discuss Bailey’s insights regarding Title VII’s true purpose and function in my Employment Discrimination classes. This piece also beautifully summarizes some of the foundational scholarship that established bedrock principles related to sexual harassment, including that sexual harassment is discrimination because of sex, as well as critiques of centering sexual desire when identifying sexual harassment.
When I teach my Employment Discrimination class, I encourage students to contrast lenses through which to view and regulate harassment. Is sexual harassment fundamentally about sexual exploitation and subordination, power and sabotage, or gender regulation and punishment? Just how much room is there in a discrimination-free workplace for sexual expression or conduct? Thanks to Professor Bailey, my students and I not only have a primer to guide us through these and other important discussions, we have a new topic for discussion: What is “horseplay,” actually? Why was it protected? Did this carveout age well? And are same-sex male horseplay and what drives it at the heart of all that Title VII should be seeking to regulate?
Cite as: Kerri Lynn Stone, No More Haven For Horseplay? (August 17, 2021) (reviewing Kimberly D. Bailey, Male Same-Sex “Horseplay”: The Epicenter of Sexual Harassment?, 73 Fla. L. Rev. 95 (2021)), https://worklaw.jotwell.com/__trashed/
There are plenty of legal rules that were originally born from faulty reasoning and that somehow ended up becoming firmly entrenched despite their flaws. One hopes that among the many changes it has brought, COVID-19 will cause courts and other legal authorities to revisit well-established legal rules, the shortcomings of which have been exposed during the pandemic. Professor Michelle Travis discusses one of these areas in her forthcoming article A Post-Pandemic Antidiscrimination Approach to Workplace Flexibility.
Travis takes aim at what she calls the “full-time face-time norm,” a term she coined fifteen years ago. The phrase describes “the judicial presumption that work is defined by long hours, rigid schedules, and uninterrupted, in-person performance at a centralized workspace.” (P. 203.) This presumption appears repeatedly in reasonable accommodation cases under the Americans with Disabilities Act (ADA). Courts often use some variation of the phrase “attendance is an essential function” almost as boilerplate when explaining why a plaintiff is not entitled to a reasonable accommodation such as telecommuting or a flexible work schedule. One also sees this “full-time face-time norm” appear in Title VII disparate impact cases involving female employees who also have primary caregiving responsibilities. In these cases, courts often treat an employer’s practice of requiring full-time face-time attendance as a basic component of a job, rather than the type of “particular employment practice” that is subject to challenge as part of a disparate impact claim.
As Travis discusses, this approach is flawed from a purely legal perspective. For example, the fact that courts treat regular in-person attendance as an “essential function” of a job is significant because employers are not required to accommodate an employee with a disability by eliminating an essential function. But as ADA regulations make clear, an “essential function” is a fundamental “task” or “duty,” like emptying the trash, making deliveries, or counseling a client. Requiring an employee to be in the office from 9 to 5 every day is an employment practice or requirement, not a “task,” “duty,” or “function.” The effect of this misinterpretation is to shield employers from the ADA’s reasonable accommodation requirement, which might otherwise require an employer to allow for telecommuting or flexible work schedules.
This has been the approach courts have taken almost since the ADA became effective in 1992. The fact that Congress did not revisit the issue when it amended the statute in 2008 suggested that, sadly, the “full-time face-time norm” would remain the norm. Likewise, the assumption that “full-time face-time” is a basic component of employment has become embedded in Title VII caselaw.
And then came the pandemic. Travis reviews how COVID-19 forced employers to jettison old workplace practices in order to adjust to a world in which regular full-time face-time work was (at least temporarily) potentially dangerous. She observes that “[t]he successful shift of millions of employees into remote and flexible work arrangements due to COVID-19 has rendered indefensible the judicial treatment of full-time face-time requirements as ‘essential job functions’ under the ADA.” (P. 218.) She cites statistics capturing not just the number of employees who have worked from home during the pandemic, but statistics that seem to rebut some of the most common arguments against permitting telecommuting, flexible work schedules, and similar working arrangements.
For example, one of the most common objections to telecommuting is that it will result in a lack of productivity. Yet, Travis describes one study, among many, in which “two-thirds of managers reported that employees increase their productivity when working from home, and eighty-six percent of employees reported being most productive when working alone.” (P. 219.) Indeed, one of the more interesting aspects of the article is the extent to which the data that Travis relates involving the workplace in the COVID-19 world undercuts the traditional arguments against treating telecommuting and similar arrangements as reasonable accommodations.
Travis reports that “employees are filing more claims against employers alleging failure to accommodate their disabilities than any other COVID-related claim.” (P. 225.) As these claims make their way through the courts, judges will have an opportunity to revisit the full-time face-time norm. While there are certainly jobs for which traditional full-time face-time attendance is necessary, the pandemic has demonstrated to millions of employers and employees alike that rigid adherence to past practices and policies is not necessarily essential to a productive workforce. Title VII’s disparate impact theory and the ADA’s reasonable accommodation requirement are designed, in part, to force employers to re-evaluate whether past practices and policies are truly essential. Travis’ article provides the sort of data-informed legal and policy arguments that one would hope would cause courts to consider their own past approaches when it comes to the full-time face-time norm.
Yvette Butler, Aligned: Sex Workers’ Lessons for the Gig Economy, 26 Mich. J. Race & L. __ (forthcoming, 2021), available at SSRN.
Yvette Butler’s forthcoming article, Aligned: Sex Workers’ Lessons for the Gig Economy, is one of those pieces that sticks with you, that pops back into your head multiple times as you go about your day after reading it. This is because it is so packed full of framework-shifting insights about gig work, sex work, racial justice, gender justice, employment law, labor law, and worker solidarity, to name just a few of the topics it covers.
To paraphrase Professor Butler’s central insight, different types of work have different and complicated relationships with legal protections and with stigma. Sex workers have a long history of negotiating both legal status issues and stigma, and have much to offer gig workers in the way of strategy and solidarity lessons.
As Professor Butler observes, some work is performed primarily by employees, and those workers consequently benefit from the variety of protections offered within the “fortress of employment,” as Cynthia Estlund has labeled employee status. Other work is performed by less-protected independent contractors; still other labor is criminalized, leaving those workers entirely unprotected, and vulnerable to arrest and prosecution. Beyond legal protection, another axis for analysis is stigma: some types of work, regardless of legal status, are heavily stigmatized. Because of this stigma, these jobs have historically been occupied by women and workers of color, or perhaps the jobs have become stigmatized because of the race and gender of their occupants.
In Professor Butler’s telling, gig work—cleaning houses on demand, running errands, driving people around, delivering food—is “merely a formalization and de-stigmatization of labor” that is currently and was historically performed primarily by women and workers of color. Though the stigma around gig work may have lessened, particularly as that work has become associated with apps and tech platforms, most gig workers still work outside of employee status, and some struggle with “inconsistent jobs, non-negotiable pay, and no benefits.”
Professor Butler defines sex workers as “individuals who engage in commercial sexual exchange (in any number of ways), regardless of whether they do so because of choice, circumstance, or coercion.” She points out that many sex workers, too, work outside employee status, and even risk criminal prosecution, while performing work that is heavily stigmatized. Yet even in this shadow economy, sex workers have won some battles for legal protection and progress toward de-stigmatization. Some exotic dancers have sued club management, successfully claiming that they are employees who are entitled to wage and hour protections. In Washington state, exotic dancers lobbied successfully for legislation to improve their safety and working conditions in clubs.
Sex workers have also demonstrated the necessity of centering the voices and insights of workers themselves in efforts to legislate change. Drawing from the disability rights movement, and in particular its slogan, “Nothing about us without us,” Professor Butler notes that twin federal laws designed to protect sex workers online (FOSTA-SESTA) in fact deprive workers from access to online forums in which they can “find and negotiate their own work,” thereby limiting “their independence—a key factor in worker power.” Professor Butler cautions that current efforts to increase legal protections for gig workers may be similarly flawed, leaving out the voices of gig workers themselves, who are the authorities on the realities of gig work and have the best eye for unintended consequences. She urges solidarity between sex workers and other types of gig workers, so that sex workers can benefit from gig workers’ relatively greater privilege, and gig workers can benefit from sex workers’ decades of struggle for more rights and less stigma.
In sum, this article sets out an incredibly interesting outline for a rich future research agenda, while also providing some useful, hard-won insights for gig worker advocates today. By situating sex work alongside other types of work, and identifying its commonalities with contemporary gig labor, Professor Butler allows sex workers to teach us about how gig work might be improved. At the same time, she tells a compelling story about sex workers’ own self-advocacy, and their fight for legal status and against stigma. Finally, she connects all of these narratives to the struggle for racial and gender justice, as workers of all types struggle “to work free from exploitation” and hustle not only to survive but also to thrive.
Cite as: Charlotte S. Alexander, Learning from Sex Workers: Lessons in Advocacy, Stigma, and Struggle (June 11, 2021) (reviewing Yvette Butler, Aligned: Sex Workers’ Lessons for the Gig Economy, 26 Mich. J. Race & L. __ (forthcoming, 2021), available at SSRN), https://worklaw.jotwell.com/learning-from-sex-workers-lessons-in-advocacy-stigma-and-struggle/
Professors Grossman and Thomas have written a wonderful article that describes how courts have applied Young v. United Parcel Service, 575 U.S. 206 (2015), in which the Court considered whether pregnant employees are entitled to workplace accommodations that they need because of pregnancy. The Court’s decision did not resolve the issue; it merely provided trial and appellate courts a structure for thinking about the issue. Consequently, courts have used the Young decision in various, inconsistent ways.
Reading this article, Making Sure Pregnancy Works: Accommodation Claims After Young v. United Parcel Service, Inc., was fun because it is smart, straightforward scholarship that discusses a live controversy that lingers because the Supreme Court did not resolve the issue when it had the opportunity to do so. It reminds us that the Supreme Court often addresses only the case directly before it, leaving trial and appellate courts to consider broader issues in later cases. That is worth remembering in this era in which the Supreme Court’s job is thought by some to include fully resolving important legal issues for good.
The article describes the issue that triggered Young—discrimination in the accommodation of pregnant employees. The Pregnancy Discrimination Act (PDA) deems pregnancy discrimination unlawful sex discrimination under Title VII of the Civil Rights Act of 1964 and requires pregnant employees be treated “the same for all employment-related purposes . . . as other persons not so affected but similar in their ability or inability to work.” Prior to Young, many employers had policies that granted or denied accommodations based on how a worker became injured or why the worker needed to be accommodated, with no specific provision for pregnancy-based accommodation. Under such policies, pregnancy was essentially grouped with the reasons that did not trigger accommodation rather than with the reasons that did trigger accommodation. Consequently, the Court needed to decide whether the PDA requires pregnant employees be accommodated to the same degree, e.g., given light duty assignments, as other workers who were accommodated for non-pregnancy reasons.
Rather than decide whether the PDA requires accommodation, the Young Court created a structure for trial and appellate courts to use to decide whether an employer discriminated against pregnant employees. The Court modified the three-part McDonnell Douglas test, which has been used for decades in employment discrimination cases to uncover intentional discrimination, to apply in this situation. The modified test helps a factfinder decide whether an employer who declines to accommodate a pregnant worker has done so for discriminatory reasons or nondiscriminatory reasons. The inquiry focuses on the employer’s justification for denying a pregnant worker an accommodation that other workers have been granted.
The article briefly discusses the approaches courts have taken in applying Young. Some courts—consistent with Young’s thrust—have focused primarily on the employer’s justification for the refusal to accommodate. Other courts have focused less on the employer’s reasoning and more broadly on whether the employer discriminated against the pregnant employee. Some have required that the pregnant plaintiff identify a similarly situated non-pregnant employee who was accommodated to support a possible inference of discrimination and escape summary judgment. Those approaches can lead to different outcomes in different circuits, yet all arguably stem from the Court’s decision. This is the result of the Court’s decision to elide Young’s key question.
The article reminds us that a Supreme Court decision does not always resolve an issue. A decision may move the legal dispute from one gray area to a different but equally gray area. The Young decision provided trial and appellate courts latitude to decide whether employers discriminated when refusing to accommodate pregnant employees. Unsurprisingly, those courts have utilized multiple approaches to resolve the issue.
Ideally, the law should be clearer after the Supreme Court issues an opinion. As this article makes plain, that does not invariably happen. Employment law practitioners, students, and academics should read this article not to revel in the Supreme Court’s failure to clarify the law, but to consider what the law is and to think about how a responsible attorney should counsel clients regarding an employer’s obligation to accommodate and an employee’s right to accommodation when the law remains uncertain. In addition, the authors invite readers to consider the steps that should be taken to clarify the law on accommodating pregnant workers. The Supreme Court may need to revisit and clarify its decision or legislation may be necessary to resolve the issue.
There is much more law and policy embedded in this clear and enjoyable article. Readers will find out just how much more when they peruse the article.
Increasingly sophisticated data analytics paired with machine learning is changing the world, and workplace applications are already a thriving industry. Over the last five years or so, legal scholars have increasingly explored the legal implications of these new technologies. Most of that work has focused on concerns related to privacy or discrimination, and much of it focuses on the use of this technology in hiring. This focus reaches only part of the “people analytics” industry; it leaves out the application of predictive analytics to first analyze and then shape worker behavior and the working environment.
In Preventing #MeToo: Artificial Intelligence, the Law and Prophylactics, James P. de Haan tackles this kind of application of AI in the workplace by looking at how predictive analytics could be used to prevent harassment. It’s a great time to be thinking of this potential application for at least three reasons: the effects of the #MeToo movement have caused employers to pay more attention to preventing harassment, the technology appears to be soon within reach, and thinking about this application might help us think carefully about other ways AI might be used to shape worker behavior and the working environment.
Reviewing this article for the Journal of Things We Like (Lots) posed a challenge because I do not like the kind of surveillance and AI-driven, behavior-prediction program described in this article. What I do like is that, knowing this is coming, de Haan has identified the outlines of how such a program would work, explained its appeal as a tool to prevent harm, and set out several preliminary concerns we should have, highlighting some challenges we need to continue to think through.
As de Haan notes, sexual harassment (as well as harassment on the basis of other identity characteristics) remains widespread, despite the #MeToo movement. Part of the reason harassment remains so widespread is that it is grossly underreported, and part of the reason it goes unreported is a fear of retaliation. That fear is well founded. According to the EEOC, about 75% of employees who speak out about harassment report that they experienced some form of retaliation for doing so.
As de Haan further describes, employers have responded to their legal obligations by nearly universally adopting harassment policies and training. While the existence of policies and training may provide a defense to a harassment claim in some instances, as the EEOC has noted, there is no evidence that they are effective at preventing harassment. de Haan chalks this up to human involvement, noting that “sexual harassment policies . . . are only as good as the managers who implement them and are responsible for making sure there is broad compliance.” Because the legal standard for determining when a working environment has become objectively hostile is ill-defined, people are notoriously bad at recognizing it.
From that central observation, de Haan explores what it might mean to remove the human evaluator by employing AI. After summarizing sexual harassment law and the obligations imposed on employers to prevent or remedy it, de Haan summarizes the critiques that the law fails the harassed. The bulk of the article examines how AI might be trained to recognize sexual harassment and identifies a number of legal implications of such a system, namely expanding employer duties to monitor and prevent harassment, and the implications of that monitoring for privacy, reputation, and workplace camaraderie.
One of de Haan’s central premises is that harassment harms both employees and employers. Thus, when it comes to prevention of harassment, the interests of the employer and employee are not diametrically opposed. For this reason, de Haan notes, creating a cooperative system that recognizes the joint interests of the harassed employee and the employer would more likely correctly pinpoint the workplace problem as the harasser, rather than incorrectly labelling the harassed employee as the workplace problem. In this way, such a cooperative system could promote reporting.
AI is that kind of system. Generally, one of the key uses of AI is to extract patterns and then “map out extant and predicted relationships based on these patterns.” In de Haan’s view, this is exactly the kind of thing that is needed to prevent harassment. Such a program could recognize when interactions between employees might risk creating a hostile environment—and could be more accurate than a person at identifying it early, given the difficulty in defining when an environment becomes hostile.
And as he describes, socio-cultural studies show that harassment is predictable when social situations, personalities, and context clues are analyzed. In fact, businesses are already using software to identify and prevent harassing conduct in email communications. The existing programs, though, are not sensitive enough. They require actionable harassment to occur, or almost occur, before they alert human resources. To really prevent harm, de Haan argues, the prediction and warning must come earlier in the process—before an actionable harassment claim arises.
de Haan next explains how the AI would be trained to gather the data that would allow such a prediction. He suggests that new hires might play a game that could “allow the program to assign a ‘sensitivity profile’ for each employee.” Based on the research he identifies, that game should also test for a person’s problem-solving skills, propensity for confrontation, and notions of justice. But this is only part of the data that would be needed. The program would also have to learn what conduct is likely to be perceived as harassment. For this, de Haan suggests that
Permitting the program to review internal HR files and complaints would help it understand what actually leads to low-level complaints. . . . Taking this a step further, the program could even reach out of network to comb the internet for all publicly available information about a company’s employees. It can use an employee’s photo to identify social networks and map out relationships with co-workers based on extant connections, photos, conversations, tags, and content interaction.
And this is where the potential gets particularly scary. The program de Haan describes
ranks people based on subjectivity to sexual harassment; categorizes them as potential victims and harassers; consolidates mounds of highly sensitive, private information into one central location; and, perhaps most worryingly, potentially punishes people for acts never committed.
To be effective, an AI harassment prevention program has to have an early warning system to prevent harm even before an official report is made and an accurate “map” of the organization’s employees (including their personalities, work functions, and power relative to each other). In order to achieve this, the program will have to consume “massive amounts of data.”
After painting this picture, de Haan warns of four main legal implications. The ability of an employer to monitor employees this way may create a duty to monitor them if in fact that monitoring is effective at preventing harassment. And the possibility of a system that creates red flags may also create a duty on employers to warn employees who might be targets of harassment. Additionally, the expanded capability and warnings will likely trigger a duty to investigate more employee conduct. And lastly, the collection of so much data—particularly data outside of the workplace—and its use to potentially label an employee a harasser, raises significant privacy and autonomy concerns.
In the end, de Haan notes that nothing he has explained solves the problem of what an employer should do with the information that a particular situation risks creating a hostile environment at some point in the future, although he raises the specter of what might happen and how employees at risk of harassment might react upon finding out if nothing is done in the face of that kind of warning. Notably, he does not do much to address the converse of these concerns—how to prevent protection of at-risk employees from resulting in limits to their work opportunities. As he briefly notes, one response to the #MeToo movement has been that men in positions of power have stopped mentoring women subordinates or engaging in social interactions with them. And because this program would assess all employees, we might worry that a rational employer would segregate those employees assessed “highly” or “over-sensitive” to harassment in career-limiting ways, or maybe not hire them in the first place.
As I said at the beginning, reviewing this article for the Journal of Things We Like (Lots) posed a challenge because I do not like the kind of program described in this article. But de Haan has provided an important first look at how this kind of program would work, explained its appeal as a tool to prevent harm, and set up an early warning of some of its risks. It is a good first step that highlights how many new challenges we need to continue to think through.
- Jason R. Bent, Is Algorithmic Affirmative Action Legal?, 108 Georgetown L. J. 803 (2020).
- Ifeoma Ajunwa, Race, Labor, and the Future of Work, The Oxford Handbook of Race and Law (Emily S. Houh, Khiara M. Bridges, Devon W. Carbado, eds., December 12, 2020), available at SSRN.
Jason Bent and Ifeoma Ajunwa have authored recent papers that I like a lot because they help uncover, and prescribe solutions to, the potentially racist treatment of workers through technology as we advance into 2021. Their suggestions on how to address this form of employment discrimination come at a crucial time for workers of color. The nature of racial discord in our society reached a crescendo in 2020 and raised many questions for workers of color. The Covid-19 pandemic placed unusual health and economic burdens on black and brown workers as the insidious nature of the virus afflicted communities of color more harshly. So-called essential workers, many of whom are vulnerable people of color, were forced to risk exposure to the virus in order to perform their work duties in-person as most other workers scurried off to their homes to perform their work duties in a virtual manner. Meanwhile, militia and white supremacist groups have taken a more active role in our society as a response to the national and international protests calling for racial justice after the senseless killing of George Floyd by a police officer in Minnesota.
With the racial consequences from Covid-19 and the George Floyd protests still looming, the country will attempt to recover from the events of 2020. As these recovery efforts proceed, we must not forget that workers of color also face another racial problem, the effects from increasing technological advances aimed at giving employers greater opportunities to capitalize on the use of big data. Both Bent and Ajunwa have authored papers that examine similar concerns related to racial problems caused by technological developments as employers attempt to use algorithms aimed at achieving greater operating efficiencies. Although their suggested resolutions to this problem offer different approaches, both authors, as discussed below, give their readers an interesting take on how workers of color may be subjected by their employers to racism through algorithms and how that form of workplace discrimination should be addressed.
As we advance into 2021 with the hope of stalling the virus and reclaiming or reinvesting in many of the advances that can expand the economy and restore business growth delayed and stifled by the pandemic, Bent and Ajunwa keep us focused by providing insights about uses of technology that can result in discrimination against workers based on race. Their contributions add to recent literature raising concerns about the broad racial implications of the growth of algorithms. In the midst of the Black Lives Matter movement and other indicators of how pervasive discrimination is in our society, the use of racist algorithms has started to generate some concern. Fallible humans operating in a society marked by systemic discrimination provide the inputs for the algorithms. Those inputs, infected with the existing racism of the past and the present, can then lead to racist outputs, even if unintended by those using an algorithm they perceive to be a neutral application.
Sharing the general concern for workers of color subjected to racist algorithms, and following the work of technology scholars who suggest that the best response to this form of discrimination is to take affirmative steps to identify and prevent discriminatory results, Bent and Ajunwa recommend distinct solutions to this predicament. A few workplace law scholars have started to address the discriminatory problems presented when emerging technologies fortify racist treatment of employees. Bent's and Ajunwa's recent papers add to those contributions while offering important considerations for workers of color at a crucial time, when employers must be aware of any actions that might create racial concerns for their employees.
Bent asks: “Are race-aware algorithm fairness solutions permissible under U.S. anti-discrimination law?” (P. 824.) With this question, Bent begins an inquiry into whether employers can correct the racist effects of a neutrally applied algorithm without violating employment discrimination laws when the correction operates as a form of affirmative action. Bent navigates Supreme Court precedent holding that correcting, or refusing to certify, test results to avoid disparate impact liability can itself constitute disparate treatment of the workers who would have benefitted from the certified results.
In a thoughtful and thorough analysis of the Supreme Court’s jurisprudence on Title VII disparate impact theory and affirmative action law, as well as constitutional equal protection analysis based on race for public sector employers, Bent constructs a possible framework whereby employers may escape disparate treatment liability when pursuing race-conscious changes to algorithms. Bent turns to the Court’s affirmative action jurisprudence, distinguishing its disparate impact caselaw, as a means for an employer to escape liability when correcting a racist algorithm. According to Bent, an employer’s race-conscious attempt to prevent racial disparities resulting from an algorithm could be viewed as part of an affirmative action plan rather than as an attempt to correct a disparate impact result.
From Bent’s review of the cases, ex ante actions intended to prevent racial disparities will find better traction under the Supreme Court’s analysis as proper and legal affirmative action. (Pp. 836-37.) Under Bent’s reasoning, the Court’s disparate treatment concerns for workers who would have succeeded under the status quo are averted when the employer takes affirmative action steps in designing the algorithm rather than seeking to correct disparate impact liability resulting from its application. By pursuing changes in the design phase, this affirmative action approach differs from an employer’s attempt to nullify or change an algorithm’s results as an ex post correction of any discriminatory disparate impact. (Id.)
Ajunwa has also written about the impacts of technology in the workplace, but her forthcoming paper addresses technology’s racial consequences more specifically. Ajunwa provides an interesting history of the ebbs and flows of labor action in this country, with each advance in working conditions for workers of color subsequently met by newer ways to suppress those workers’ opportunities and economic benefits. In cataloguing the historical developments, from slavery to Jim Crow codes to increased incarceration to legal restrictions on immigrant workers to the current lack of educational and professional opportunities, Ajunwa clarifies the depth of the plight faced by many workers of color seeking equality in our society.
After this illuminating discussion of the history of racist treatment of workers of color, Ajunwa transitions to modern-day concerns about racism introduced through technological innovation. Ajunwa condemns the expansion of digital platforms that employ gig workers classified as independent contractors, who lack many of the legal protections provided to employees. Ajunwa also criticizes the use of algorithms in hiring, especially through video interviews. According to Ajunwa, the door to discrimination opens “when these algorithms are trained on white male voices and faces, [because] they put applicants of color at a disadvantage.” Echoing general scholarly concerns about racist algorithms, Ajunwa captures clearly how design decisions about which inputs to consider can produce a facially neutral algorithm with racially discriminatory results.
Ajunwa also discusses the concern that newer developments in surveillance technology create added burdens for workers of color, who already must find ways to “manage stereotypes attached to their race” in an effort to combat racism. With increasing surveillance creating key problems for workers of color, and the automation of job tasks eliminating the jobs that typically employ those same workers, especially women of color, Ajunwa proposes a solution to rectify the racism produced by these technological advances.
Like others, Ajunwa calls generally for broader, globalized labor protections to address the effects of advancing technologies on vulnerable workers. More specifically, Ajunwa ties that proposal to needed protections for certain precarious workers, including prisoners, undocumented immigrants, and guest workers, who all tend to be people of color. Ajunwa also asserts that legislatures should conduct racial impact studies before implementing legislation that may create racial disparities, and identifies greater economic stability for communities of color as a further means of protection.
As a bottom line, Ajunwa beseeches us not to accept digital growth as a development fated to deepen the racial divide. In this respect, Bent and Ajunwa not only raise our consciousness about the racial disparities presented by employers’ use of technology and algorithms; they also offer methods to prevent those disparities. Bent and Ajunwa thus present us with interesting reads while offering workers of color a potential respite from another racial hurdle in the workplace in 2021. These works also arrive at a time when racial unrest in our society has reached a boiling point, and workers of color and their employers need prescriptions for avoiding race discrimination while moving forward in a positive manner.
Cite as: Michael Z. Green, Confronting Workplace Discrimination from Automated Algorithms in Times of Racial Unrest, JOTWELL (March 12, 2021) (reviewing Jason R. Bent, Is Algorithmic Affirmative Action Legal?, 108 Geo. L. J. 803 (2020); Ifeoma Ajunwa, Race, Labor, and the Future of Work, The Oxford Handbook of Race and Law (Emily S. Houh, Khiara M. Bridges, Devon W. Carbado, eds., forthcoming 2021), available at SSRN).
Much has been written and said about police unions lately, most of it justifiably impassioned but not all of it well-informed by public-sector labor law rules and practices. This article is both. And while the question of the effect of police unions on police reform has been a hot topic in 2020, it is worth noting that Professor Hardaway identified this as a significant issue before it was as much in the limelight as it is now.
The article begins by recounting a series of tragic killings by police and calls for reform via the Violent Crime Control and Law Enforcement Act of 1994. The article then carefully describes a long history of racism in policing. Moving to modern times, the article catalogues the inadequacies of private litigation in achieving police reform.
It then moves to Justice Department investigations into police misconduct, which have led to settlements and court-monitored reforms such as use of deadly force policies and ways in which to hold officers accountable for misconduct. Police unions, in reaction, insisted that at least some of those reforms involved areas that were mandatory subjects of bargaining under relevant labor laws, and thus could not be made without bargaining with the police union. The article provides a very useful survey of all consent decrees since 1997, discussing their interaction with union contracts. Courts, Professor Hardaway shows, have liberally granted unions the right to intervene in these settlements. Notably, unions argue that they cannot be bound by these settlements because they were not parties to them. This, Professor Hardaway persuasively argues, has hampered enforcement and court oversight.
The article correctly notes that use-of-force policies are not mandatory subjects of bargaining. It suggests limiting collective bargaining rights so that unions could not claim that other issues that might be covered by consent decrees, generally labeled “police accountability” and “public interest” matters, are mandatory subjects of bargaining. Unlike some discussions of this issue, Professor Hardaway does a nice job of specifying the changes she would like to see in the rules about negotiability.
Professor Hardaway understands relevant and critical legal rules. For example, not all states permit police to bargain collectively, and in general, in states that do, some topics are not mandatory subjects of bargaining because the issue impinges too much on the public interest. She traces the history of police and public-sector bargaining rights effectively and accurately. Overall, her discussion of these issues is more sophisticated and nuanced than much of what one sees in popular and even scholarly debates on this topic.
I personally would have talked a bit more about how police management fits into this picture, both in negotiating and enforcing collective bargaining agreements, and also about the political role of both police management and unions. Perhaps these topics could be next in a series. But this article takes on a wealth of complex material, both as to law and policy, and does really well with it. And I strongly agree with her main conclusion: as far as labor law reforms go, this issue is best dealt with by excluding, in a surgical manner, certain topics of bargaining that may intersect with reform efforts.
This is an especially impressive piece given that Professor Hardaway was appointed as an Assistant Professor in 2016, and thus wrote this while still quite early in her career. I liked it a lot.
Cite as: Joseph Slater, Smart Thinking about Police Unions and Labor Law, JOTWELL (February 17, 2021) (reviewing Ayesha Hardaway, Time is Not On Our Side: Why Specious Claims of Collective Bargaining Rights Should Not be Allowed to Delay Police Reform Efforts, 15 Stan. J. Civ. Rts. & Civ. Liberties 137 (2019)), https://worklaw.jotwell.com/smart-thinking-about-police-unions-and-labor-law/.
In their article, The Invisible Web at Work: Artificial Intelligence and Electronic Surveillance in the Workplace, Professors Bales and Stone argue that illegitimate employer uses of artificial intelligence (“AI”) in the workplace may largely outweigh legitimate uses, creating a potentially problematic, but not necessarily unlawful, encroachment on human workers’ rights. The article is divided into three main sections. First, it comprehensively describes the numerous ways in which employers are utilizing AI to transform traditional managerial prerogatives. Second, it analyzes possible workers’ rights violations, concluding that existing law is unlikely fully to protect those rights. Third, it presents areas for future reform. The article concludes with an ominous observation: “companies are collecting unfathomable quantities of data on workers that will significantly tilt the balance of workplace power in favor of employers at workers’ expense.” (P. 62.)
Section I comprehensively surveys employers’ ubiquitous use of AI to transform traditional managerial prerogatives. The authors note that employers “utilize a dizzying array of electronic mechanisms—including trackers, listening devices, surveillance cameras, metabolism monitors, and wearable technology—to watch their workers, measure their performance, avoid disruption, and identify shirking, theft, or waste.” (P. 4.) While these mechanisms may serve legitimate employer goals, they often allow managers to “observe each worker’s every movement, both inside and outside the workplace, and during and after working hours.” (Id.) Moreover, AI algorithms can transform collected data “into a permanent electronic resume that companies are using to track and assess current workers,” and which “could potentially be shared among companies as workers move around the boundaryless workplace from job to job.” (Id.) This “invisible electronic web threatens to invade worker privacy, deter unionization, enable subtle forms of employer blackballing, exacerbate employment discrimination, render unions ineffective, and obliterate the protections of the labor laws.” (Id.)
Section I is itself divided into three parts: Part A (Pp. 5–9) briefly traces AI’s development in production processes. Innovations such as computerized robotic arms, machine learning, computer vision, AI-amplification of human capability, and voice recognition—initially introduced to the workplace to enhance productivity—are now being turned into surveillance instruments. Part B (Pp. 9–15) introduces the concept of People Analytics or data-driven human resources. The authors explain that the People Analytics field uses AI “to guide HR decisions for many areas, including making hiring decisions, monitoring performance, predicting an individual’s work trajectory, evaluating workers to set compensation, and determining an employee’s likelihood of terminating the employment relationship.” (P. 9.) Part C (Pp. 15–22) traces creative uses of electronic surveillance devices to monitor workers. For example, firms are now using electronic badges not only to access buildings but also to record employee conversations, track employee movements, and monitor employees’ vital signs. (Pp. 17–20.) The Section ends with a discussion of how the development of these technologies can be used to monitor workers’ off-duty activities, which “not only creates the potential for highly intrusive monitoring, but also raises questions about how employers will use the data they collect about employees’ performance, with whom they will share it, and how long they will keep it. AI-enhanced data collection, retention, and analytic capabilities threaten to create a permanent record of employee productivity, activity, and medical and physiological attributes.” (Pp. 20–22.)
Part II, which focuses on potential employer liability, picks up where Part I.C. leaves off. After all, the fact that employers can retain AI-enhanced data to create worker profiles raises two questions that should concern every worker: How can employers use these data collections to harm workers, and does the existing legal framework provide sufficient protection? As with so many questions arising from the technological revolution, the authors show that the law is often woefully inadequate to protect workers’ rights. (Pp. 22–59.)
Part II is divided into four parts, each taking a deep dive into a different area of law. Take employment discrimination. AI can amplify bias by replicating past employment decisions that generated profitable results for the company but were themselves laced with bias. For example, “a hiring algorithm based on current workplace demographics” in Silicon Valley, which “has long been criticized for its white-male-dominated workplaces … likely will replicate and entrench [those] past hiring practices.” (Pp. 22–23.) And while the authors recognize that AI could also reduce bias, it can augment bias as well, which may reduce the “salutary effect of AI.” (P. 29.) Part II.B. showcases the legal limitations of workplace privacy laws, offering as one example the use of pre-hire videos to evaluate job candidates; those interviews can “collect data from an applicant’s own devices or from cookies or other technological tracking devices.” (P. 34.) Part II.C. shows how antitrust laws might be used to sue employers who “use shared employee information amassed through AI and electronic surveillance to set compensation, engage in a no-raiding agreement, or blacklist an employee.” (P. 36.) However, this area of law is currently untested. Finally, there are labor law implications, primarily concerning surveillance, bargaining over privacy and surveillance issues, and representation, all of which are discussed in Parts II.D. and II.E. (Pp. 48–59.) But there are limitations here as well. Although the NLRA has historically protected concerted activity from surveillance, for example, the Trump Board has vastly cut back on those protections in recent cases. (Pp. 53–55.)
Part III establishes a compact agenda for future research and reform. (Pp. 59–62.) This section cogently explains that while “gathering and using such data have enormous implications for the application of existing workplace laws,” such activities “are occurring with no legal or regulatory oversight.” The authors opine that “[p]erhaps existing laws will be sufficiently adaptable to respond to these new conditions, but there is significant risk they will not.” Accordingly, the authors identify the need to clarify “the law of disparate impact … to ensure that plaintiffs need to show, in their prima facie case, only that an algorithm as a whole caused a disparate impact; plaintiffs should not be expected to show precisely how the algorithm produced the bias.” (P. 60.) The authors also suggest that Congress amend electronic surveillance laws along the lines of “the European General Data Protection Regulation … as a starting point but augmenting it to specifically address data collection in the employment context.” (P. 61.) Such amendments would give workers greater control over the personal data their employers collect. The authors further suggest that the Federal Trade Commission clarify that employers’ collection and sharing of worker data can constitute antitrust violations.
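As a rough illustration of what an outcomes-only prima facie showing might measure, the EEOC's longstanding "four-fifths" rule of thumb compares selection rates across groups without asking how the algorithm produced them. The numbers below are invented for the sketch; the rule itself comes from the EEOC Uniform Guidelines, not from the reviewed article.

```python
# Sketch of the EEOC four-fifths rule of thumb: flag a possible disparate
# impact when one group's selection rate falls below 80% of the most
# favorably treated group's rate. Figures here are hypothetical.
def selection_rate(selected, applicants):
    """Fraction of applicants selected."""
    return selected / applicants

def four_fifths_flag(rate_group, rate_comparator):
    """True if the group's rate is less than 80% of the comparator's."""
    return rate_group / rate_comparator < 0.8

# Suppose the algorithm selected 30 of 100 comparator applicants
# but only 15 of 100 applicants from the affected group:
r_comp = selection_rate(30, 100)   # 0.30
r_grp = selection_rate(15, 100)    # 0.15
print(four_fifths_flag(r_grp, r_comp))  # True: ratio is 0.5, below 0.8
```

Nothing in this calculation requires explaining the algorithm's internals, which is the point of the authors' proposed clarification as quoted above.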
Finally, the authors suggest the Board affirm the following: electronic surveillance violates Section 7; employers must bargain over “the existence and scope of electronic monitoring and the use of algorithms in decisions involving discipline, job assignment, promotion, or pay;” and “the existing duty on employers to provide unions with information necessary for meaningful bargaining and grievance-resolution should be extended to information about an employer’s practices and plans regarding the use of AI in personnel management decisions, and to information about algorithms or data collected by AI that an employer has used in personnel decisions affecting individual grievants.” (Pp. 61–62.)
Were this article limited to Section I alone, Professors Bales and Stone would have made a significant contribution to the work law literature. While descriptive, those accounts are the necessary first step toward understanding the social problem presented: employer data collection jeopardizes workers’ rights. But the article is impactful because it goes beyond the descriptive. For example, Section II explains how technological advances in AI are connected to modern HR decisions. Drawing those connections must have been painstaking and tedious for the authors, yet they read as the most interesting part of the article. Moreover, that Section makes an original contribution because the problems the authors expose have not yet arisen in the caselaw.
In closing, the article’s in-depth analysis of possible workplace violations, and its conclusion that existing law is likely insufficient to protect workers’ rights, expose an uncomfortable truth: technology is advancing at such a rapid pace that existing law is unlikely to catch up in time to protect workers. As Professors Bales and Stone explain: “given the blinding pace at which companies currently are collecting data on workers, a legal response may quickly become a moot point. Once sufficient data are collected, it likely will be difficult to put the genie back in the bottle.” (P. 59.) Welcome to the brave new workplace.
When enacted in 2008 at the end of the Bush Administration, the Genetic Information Nondiscrimination Act (GINA) seemed like it had come from the future. Although the hard-won result of over a decade of advocacy by Rep. Louise Slaughter of New York, GINA addressed a problem that seemed more hypothetical than real. Genetic testing had been around for a while, introduced to the public in part through the O.J. Simpson trial. It seemed unlikely, though, that employers or insurers would not only secure DNA testing but also use it to discriminate on the basis of genetic difference. Yes, it made sense as a plot for a science-fiction movie like Gattaca, but not as a depiction of current reality.
This assessment is largely borne out in the empirical results in GINA, Big Data, and the Future of Employment Privacy by Bradley Areheart and Jessica Roberts. Examining GINA cases from federal courts during the statute’s first decade of existence, Areheart and Roberts found a mere 48 unique GINA cases, only 26 of which involved terminations. Moreover, most plaintiffs failed to find relief, often losing because of fundamental flaws: they had voluntarily disclosed their genetic information; they could not prove the employer possessed the genetic information; or their information was not considered “genetic.” In fact, the authors “uncovered no cases alleging discrimination based on genetic-test results.” (P. 744.) The article makes a plausible case that GINA has been a failure—or, perhaps more charitably, addressed a nonexistent problem.
Most law review articles would stop there, having provided a solid sense of the litigation picture for a relatively new statute. But Areheart and Roberts flip the script by illuminating a completely alternative justification: namely, GINA as information privacy regulation. Rather than simply a nonentity as an antidiscrimination statute, GINA is instead a powerful deterrent against employer snooping into an employee’s genetic background. Areheart and Roberts persuasively argue that the absence of GINA litigation is in fact evidence of GINA’s success in staking out genetic information as a no-fly zone, and that the Act can be a model for other employee privacy protections. A statutory phoenix rises from the ashes!
It wasn’t intended this way. Areheart and Roberts chronicle the history of the Act, one that was rooted in the ability of health insurers to identify and fence out riskier patients. When insurers were able to hike rates or deny coverage because of pre-existing conditions, exploring one’s own genetics became a financially hazardous endeavor. People were passing up the opportunity to find out genetic predispositions to certain conditions and illnesses to avoid being labeled as a poor risk. And since employers were the source of health insurance coverage for a vast swath of Americans, they too might decide to terminate an employee or pass on hiring someone due to genetic danger signs. Congress stepped in to prevent this type of discrimination, even though the feared phenomenon had not really manifested itself in significant numbers.
Two years after the passage of GINA, the Affordable Care Act prohibited consideration of pre-existing conditions as part of health insurance coverage and rate-setting decisions. So one of GINA’s big rationales was no longer in play. Predictably, the courts have not seen much in the way of GINA-related litigation based on employment discrimination. However, Areheart and Roberts have unearthed a hidden set of imperatives that GINA has placed on employers. The Act prohibits employers from seeking, obtaining, or possessing their employees’ genetic information. That narrow but complete prohibition has carved out genetic information from the panoply of data that employers are otherwise free to collect. As Areheart and Roberts report, the few GINA cases that have been litigated “show that employers are seeking information about their employees, and employees are pushing back.” (P. 734.)
GINA’s “unexpected second life as a privacy statute” (P. 755) is especially important in the employment context. The current state of privacy law has left workers largely exposed. Many assume that the Health Insurance Portability and Accountability Act (HIPAA) protects all health information, but the statute is much narrower in focus. It only applies to health care providers, health care clearinghouses, and health plans, and specifically exempts information within employee personnel files. The Americans with Disabilities Act (ADA) places restrictions on required medical examinations but has a number of exceptions. Although GINA’s scope is also narrow, it is comprehensive in its protection of this information. Moreover, the definition of “genetic information” goes beyond the paradigmatic DNA test to include family medical history. This routine part of employee health fitness forms is now protected by GINA.
Beyond the ramifications for employee genetic privacy, Areheart and Roberts pull out larger implications from GINA’s unexpected impact. They note that GINA’s privacy provisions have a two-pronged effect: they protect the information itself and also prevent the employer from discriminating based on that information. In contrast, the Pregnancy Discrimination Act (PDA) does not prohibit employers from asking about an employee’s pregnancy status—which leaves pregnant workers more vulnerable to discrimination. (P. 770.) The article acknowledges that there may be benefits from the sharing of genetic data between worker and firm that GINA forecloses. (Pp. 773-76.) But this loss may be the necessary expense to preserve the confidentiality of employees’ genetic makeup in such an effective manner.
GINA’s future remains to be written; Areheart and Roberts’ empirical investigation shows only a small number of current cases, whatever the underlying theory. But their article tells an important story of how the original Congressional plan went awry—and nevertheless led to a surprising and potentially influential new way of protecting employee information. Given the dizzying accumulation of innovative and disturbing encroachments—from RFID chips to round-the-clock health and location monitoring—protecting workers from ever-mounting surveillance and dissection has become an imperative. Areheart and Roberts have staked a claim for GINA as a model for how employee privacy might be protected in other areas of their lives. Their article is a terrific contribution to our understanding of the future of employment.