Machine learning systems are making more and more decisions about people -- from everyday decisions, like what appears in your news feed, to life-changing ones, like whether to send Child Protective Services to a family.

Often, people disagree with the decisions these systems make. In this workshop, we will explore how to design systems so that users (or experts or regulators) who disagree with a decision can shape both that decision and the decision-making process.

We highlight a few important aspects of this design challenge below.

The Breadth of the Issue

We have considered a number of examples and case studies in developing this workshop, and we use a few of them to highlight different aspects of the problem.


These examples are drawn from the existing literature, but one of the goals of this workshop is to capture additional domains where it is important to design for contestability.

The CSCW 2019 Workshop

In this workshop, we look forward to exploring all of the issues highlighted in these examples, and more.

We encourage everyone to join us! Practitioners of machine learning, ethicists, designers, professionals in domains with established contestability practices (such as law, credit scoring, or insurance) -- one of the primary goals of the workshop is to draw together a community of interest with as many perspectives as possible.

Submissions

To achieve as diverse a group as possible, participants are encouraged to submit any form of document that conveys their current thinking on this topic: case studies, position papers, design fictions, and so on.

Submissions should be no longer than 10,000 words, excluding references; we recommend authors aim for approximately 4-6 pages. You may also submit previously published work, but should include a cover letter of roughly 1,000 words identifying the relevance of the work to the themes of the workshop.

Submissions are single-blind reviewed; i.e., submissions must include the authors' names and affiliations. The workshop's organizing committee will review the submissions, and accepted papers will be presented at the workshop. At least one author of each accepted paper must register for and attend the workshop.

Email submissions directly to kvaccaro@illinois.edu with the subject line “Contestability Workshop Submission”. Due to conflicts with the ACM CHI deadline, the submission deadline has been extended from September 20, 2019 to October 4, 2019.

Download the conference submission

Organizers

Kristen Vaccaro is a PhD candidate in Computer Science at the University of Illinois Urbana-Champaign. Her research focuses on designing algorithmic decision-making systems for user agency and control, with two primary mechanisms of interest: control settings and contestability. Her recent work has explored designing for contestability via participatory design workshops, as well as by running large-scale online surveys that assess the effects of common contestability designs.

Karrie Karahalios is a Professor of Computer Science at the University of Illinois at Urbana-Champaign, the director of the Social Spaces Group, and the co-director of the Center for People and Infrastructures. Her work focuses on the signals that people emit and perceive in social, computer-mediated communication. More recently, she has explored how algorithmic curation alters these signals and people's perception of communication. Karahalios studies existing communication systems and builds infrastructures for new ones that move control to people, allow for inferences of bias and fairness, and evaluate algorithm explainability.

Deirdre K. Mulligan is an Associate Professor in the School of Information at UC Berkeley, a faculty director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan's research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems.

Daniel Kluttz is a Postdoctoral Scholar at the UC Berkeley School of Information's Algorithmic Fairness and Opacity Working Group (AFOG). Drawing on intellectual traditions in organizational theory, law and society, and technology studies, Kluttz's research pursues two broad lines of inquiry: (1) the formal and informal governance of economic and technological innovations, and (2) the organizational and legal environments surrounding such innovations. He holds a PhD in sociology from UC Berkeley and a JD from the UNC-Chapel Hill School of Law.

Tad Hirsch is a Professor of Art + Design at Northeastern University, where he conducts research and creative practice at the intersection of design, engineering, and social justice. He is currently developing automated assessment and training tools for addiction counseling and mental health; his prior work has tackled such thorny issues as human trafficking, environmental justice, and public protest. His pioneering work on "Designing Contestability" identified contestability as a new principle for designing systems that evaluate human behavior.

This work was supported by the National Science Foundation (NSF Award 1564041) and Capital One.