The Journal of Things We Like (Lots)
Keith Cunningham-Parmeter, Discrimination by Algorithm: Employer Accountability for Biased Customer Reviews, 70 UCLA L. Rev. __ (forthcoming 2023), available at SSRN.

Among the things I like a lot are articles that make me question my original take on a topic. That is certainly true of Keith Cunningham-Parmeter’s Discrimination by Algorithm: Employer Accountability for Biased Customer Reviews. Some of his points are familiar (customer biases are pervasive, and employers, while paying lip service to antidiscrimination values, don’t deploy meaningful strategies to counteract such biased reviews). Others are increasingly accepted (such biases threaten to have more and more concrete employment consequences as technology allows them to be aggregated and acted upon in real time).

But I found most interesting Cunningham-Parmeter’s arguments as to how an employer might be liable for acting on customer reviews it knows (or could know) were discriminatory, and how an employer could avoid such liability while still retaining the benefits of customer feedback. I’m not sure I’m totally persuaded on either point, but I gained a much better understanding of the dimensions of the problem and of the doctrinal challenges of dealing with it under current law.

Discrimination by Algorithm’s paradigm case is the gig driver whose overall rating is hovering just over the platform’s automatic cut-off when a low customer review pushes the driver off the platform (although presumably any other low review factored into the rating would also be a but-for cause of the termination). Assuming a given “one star” review is biased, what does Title VII say about this situation? Prior scholarship noted possible disparate impact challenges to the use of customer reviews with such effects, but, as the author notes, a stumbling block is the business necessity defense.
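To make the arithmetic of that paradigm case concrete, here is a minimal sketch (mine, not the article’s); the 4.6-star cutoff and the rating history are purely hypothetical:

```python
# Hypothetical illustration of the "algorithmic cliff." The 4.6-star cutoff
# and the rating history are assumptions, not figures from the article or
# from any real platform.
DEACTIVATION_CUTOFF = 4.6

history = [5, 5, 4, 5, 5, 5, 4, 5, 5]   # driver's prior ratings (average ~4.78)
new_review = 1                           # the allegedly biased one-star review

before = sum(history) / len(history)
after = (sum(history) + new_review) / (len(history) + 1)

print(f"before: {before:.2f}, after: {after:.2f}")   # before: 4.78, after: 4.40
print("deactivated:", after < DEACTIVATION_CUTOFF)   # True: the review flips the outcome
```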

Cunningham-Parmeter focuses instead on whether disparate treatment can be invoked. Under a disparate treatment theory, the customer would furnish the requisite intent to discriminate, and the employer would bear responsibility if it knows or could reasonably know about such bias.

However, agency law presents a challenge to employer liability. Since customers are not employees, how can any of their biases render the employer liable? Discrimination by Algorithm argues that “employer liability for biased customer ratings should depend on the level of delegated authority that customers receive when firms ask them to rate workers.” Cunningham-Parmeter draws a distinction between “action managers”–customers with the “functional authority to fire workers,” by virtue of the rating algorithm–and “advisory clients who provide input on a worker’s performance.” He argues that companies should be strictly liable for biased decisions by “action managers.” Liability can also follow in the “advisory client” context, but only for the firm’s failure to deal appropriately with the bias of such clients.

The latter is perhaps the easier of the two to reconcile with current doctrine. Indeed, in the sexual harassment sphere, employers are often held liable for the actions of third parties, or, more accurately, for failing to deal appropriately with the actions of third parties such as customers. It doesn’t seem a huge jump to apply the same principle to biased customer reviews resulting in adverse actions. The piece also explores “cat’s paw” cases and customer preference cases that, at a greater level of generality, arguably point in the same direction.

But strict liability for a biased customer review that tumbles a worker off an “algorithmic cliff” is harder to justify. Cunningham-Parmeter argues that “action managers are company agents not because firms employ them but because action managers possess the actual and apparent authority to deactivate workers.” While such a view has much to commend it from a theoretical perspective, Discrimination by Algorithm goes further to argue that adoption of this scheme is not such a “doctrinal leap” as some (including me!) might think.

Looking to Title VII’s language defining an “employer” to include “agents,” he explores the “agency theories of actual authority, apparent authority, and ratification [to] help courts analyze the delegated powers that action managers now possess.” I found ratification most plausible. Regardless of potential doctrinal limitations on authority, ratification applies when a principal agrees to be bound in return for receiving a benefit, which arguably embraces the benefit the employer obtains from the customer review.

As for the article’s critical last point–that employers can know, and, therefore, address the biases of customers submitting reviews–the problem is obvious once we’re past the “easy” (and probably rare) cases in which the customer uses explicitly racist or sexist language. The harder cases involve reviews that simply award a low rating with little or no comment or detail. Discrimination by Algorithm argues that firms could validate customer ratings either “by requesting additional information from users or by comparing negative ratings to objective performance criteria.” On the former point, he suggests asking customers to provide descriptive feedback along with numerical ratings. He makes a good case for the efficacy of such techniques in some cases, but the vagueness of soft variables (e.g., “friendliness” as opposed to “oversharing”) will often limit the value of this kind of monitoring. And, in many settings, it’s precisely such soft variables that distinguish good workers from bad ones.
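As for what “comparing negative ratings to objective performance criteria” might look like in practice, here is a rough, hypothetical sketch (mine, not the article’s); the telemetry fields and thresholds are invented for illustration:

```python
# Hypothetical sketch of comparing a negative rating to objective performance
# data. The Trip fields and the 2-star threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Trip:
    rating: int            # customer's 1-5 star rating
    on_time: bool          # objective telemetry: pickup and drop-off within window
    route_efficient: bool  # objective telemetry: no unexplained detours

def flag_for_review(trip: Trip) -> bool:
    """Flag a low rating that the objective data does not corroborate."""
    objectively_fine = trip.on_time and trip.route_efficient
    return trip.rating <= 2 and objectively_fine

print(flag_for_review(Trip(rating=1, on_time=True, route_efficient=True)))    # True: investigate
print(flag_for_review(Trip(rating=1, on_time=False, route_efficient=True)))   # False: corroborated
```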

That leaves us with “auditing” reviews to identify those that might be biased. This seems to be pretty much a statistical game: has the customer reviewed enough workers to make a statistical case of bias? The answer will often be no, but, contrary to my initial intuition, it may frequently be yes: a customer who routinely uses, say, Uber or DoorDash will soon have a robust history of reviews, and a pattern of under-rating protected-class members (compared to other customers’ ratings) might be easily ascertainable by the very technology that permits the rating to begin with. Excluding such reviews from the overall rating would be relatively easy and, given the small stakes in removing any given rating, largely cost-free.
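Here is a rough sketch of what such an audit might involve; the data model, group labels, and flagging threshold are my illustrative assumptions, and a real system would need a proper significance test and attention to small samples:

```python
# Hypothetical audit of one customer's rating history. The data model, the
# group labels, and the 0.5-star gap threshold are illustrative assumptions;
# a real audit would use a proper statistical test and watch for small
# samples and confounders.
from statistics import mean

# (worker_group, this_customer_rating, avg_rating_from_other_customers)
reviews = [
    ("A", 5, 4.9), ("A", 4, 4.7), ("A", 5, 4.8),
    ("B", 2, 4.8), ("B", 3, 4.6), ("B", 2, 4.7),
]

def rating_gap(reviews, group):
    """How far below other customers this customer rates workers in `group`."""
    gaps = [others - own for g, own, others in reviews if g == group]
    return mean(gaps) if gaps else 0.0

GAP_THRESHOLD = 0.5  # assumed cutoff for flagging a reviewer's ratings of a group

for group in sorted({g for g, _, _ in reviews}):
    gap = rating_gap(reviews, group)
    print(group, round(gap, 2), "flag" if gap > GAP_THRESHOLD else "ok")
```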

I was, however, left with some puzzling questions that might be addressed in a later article. For example, does vicarious liability apply even if the auditing and other mechanisms don’t identify bias in a particular review but the worker somehow proves it to have operated? And, speaking of worker identification of a biased review, how is that to happen if, as is usually true, employers shield the identity of reviewers from their workers?

Bottom line: a very provocative read, which I liked a lot.

Cite as: Charles A. Sullivan, When Customers Become Bosses, JOTWELL (November 9, 2022) (reviewing Keith Cunningham-Parmeter, Discrimination by Algorithm: Employer Accountability for Biased Customer Reviews, 70 UCLA L. Rev. __ (forthcoming 2023), available at SSRN), https://worklaw.jotwell.com/when-customers-become-bosses/.