Machine learning systems make more and more decisions about people -- from everyday ones, like what you see in your news feed, to life-changing ones, like whether to send Child Protective Services to a family.

Often people disagree with the decisions these systems make. In this workshop, we will explore how to design systems so that users (or experts or regulators) who disagree with decisions can shape those decisions and the decision making process.

We begin with one illustration, but contestability is an issue in many domains, as we describe in more detail below.

An Illustration: Recidivism Prediction

One example of a case where machine learning decision making systems have come under scrutiny is predicting criminal recidivism, that is, whether someone accused of a crime is likely to commit another. These prediction systems are used in many states to decide whether to release defendants on bail, and even in sentencing decisions.

ProPublica has reported extensively on this issue and found that even defendants often disagree with their assessments (even when the scores benefit them):

Sometimes, the scores make little sense even to defendants.

James Rivelli, a 54-year old Hollywood, Florida, man, was arrested two years ago for shoplifting seven boxes of Crest Whitestrips from a CVS drugstore. Despite a criminal record that included aggravated assault, multiple thefts and felony drug trafficking, the Northpointe algorithm classified him as being at a low risk of reoffending.

“I am surprised it is so low,” Rivelli said when told by a reporter he had been rated a 3 out of a possible 10.

In addition to challenges around their accuracy and the limited context the scores incorporate, one of the major problems with these machine learning predictions is that defendants have little ability to contest the assessments shared with judges:

Defendants rarely have an opportunity to challenge their assessments. The results are usually shared with the defendant’s attorney, but the calculations that transformed the underlying data into a score are rarely revealed.

The companies that manufacture the recidivism prediction systems claim that the calculations are proprietary information.

This case demonstrates the complexities of these decision-making systems: the need to explain decisions (whether to defendants themselves or to experts such as judges and lawyers), the need to protect intellectual property and prevent gaming of the system, and, most fundamentally, the need to uphold human rights.

The Breadth of the Issue

While recidivism prediction is an extremely important decision, opportunities for contestability arise throughout daily interactions with automated decision-making systems.

Many such examples appear in the existing literature, but one of the goals of this workshop is to capture additional domains where it is important to design for contestability.

The CSCW 2019 Workshop

In this workshop, we look forward to exploring all of the issues highlighted in the example above, and more.

We encourage everyone to join us! Practitioners of machine learning, ethicists, designers, and professionals in domains with established contestability practices (such as law, credit scoring, or insurance) are all welcome -- one of the primary goals of the workshop is to draw together a community of interest with as many perspectives as possible.

Submissions

To bring together as diverse a group as possible, participants are encouraged to submit any form of document that conveys their current thinking on this topic: case studies, position papers, design fictions, and so on.

Submissions should be no longer than 10,000 words, excluding references; we recommend aiming for approximately 4-6 pages. You may also submit previously published work, but should include a cover letter of roughly 1,000 words explaining the relevance of the work to the themes of the workshop.

Submissions are reviewed single-blind; that is, submissions must include the authors' names and affiliations. The workshop's organizing committee will review the submissions, and accepted papers will be presented at the workshop. At least one author of each accepted paper must register for and attend the workshop.

Email submissions directly to kvaccaro@illinois.edu with the subject line “Contestability Workshop Submission” on or before September 20, 2019.


Organizers

Kristen Vaccaro PhD candidate in Computer Science at the University of Illinois at Urbana-Champaign. Her research focuses on designing algorithmic decision making systems for user agency and control, with two primary mechanisms of interest: control settings and contestability. Her recent work has explored designing for contestability via participatory design workshops, as well as by running large-scale online surveys that assess the effect of common contestability designs.

Karrie Karahalios Professor of Computer Science at the University of Illinois at Urbana-Champaign, the director of the Social Spaces Group, and the co-director of the Center for People and Infrastructures. Her work focuses on the signals that people emit and perceive in social computer-mediated communication. More recently, she has explored how algorithmic curation alters these signals and people's perception of communication. Karahalios studies existing systems and builds infrastructures for new communication systems (that move control to people, allow for inferences of bias and fairness, and evaluate algorithm explainability).

Deirdre K. Mulligan Associate Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty on the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan's research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems.

Daniel Kluttz Postdoctoral Scholar at the UC Berkeley School of Information's Algorithmic Fairness and Opacity Working Group (AFOG). Drawing from intellectual traditions in organizational theory, law and society, and technology studies, Kluttz's research is oriented around two broad lines of inquiry: 1) the formal and informal governance of economic and technological innovations, and 2) the organizational and legal environments surrounding such innovations. He holds a PhD in sociology from UC Berkeley and a JD from the UNC-Chapel Hill School of Law.

Tad Hirsch Professor of Art + Design at Northeastern University, where he conducts research and creative practice at the intersection of design, engineering, and social justice. He is currently developing automated assessment and training tools for addiction counseling and mental health; prior work has tackled such thorny issues as human trafficking, environmental justice, and public protest. His pioneering work on "Designing Contestability" identified contestability as a new principle for designing systems that evaluate human behavior.