This page lays out the broader themes for the workshop. More detail can be found in the conference submission.

Themes

The concept of contestability in technology systems has deep roots in human-computer interaction. Mixed-initiative systems were designed so that users could compose final outcomes through negotiation with a system, and even the earliest experiments in expert systems revealed that experts must be able to "correct one or more of [the deductive steps and/or facts used] if necessary." This history of designing for a conversation between a user and a system has become ever more important as machine decisions have taken on greater impact.

Defining Contestability

Is contestability merely the ability to contest a decision? Or is it a deeper system property -- the ability to interrogate, investigate, and scrutinize the system throughout the process of reaching a joint decision between human and algorithm? The former requires surfacing information to the user; the latter must also support interaction and co-construction of the final decision.

In this workshop, we aim to explore how these different conceptions of contestation and contestability shape system design. For example, a system designed for contestation might assume that the user has little to no authority and can only contest a decision after the fact, whereas a system designed for contestability might assume the user has a real voice in the decision and allow contestation during decision making itself. The timing, expected interaction, and relative authority vary across these processes in ways that matter. And while most work in HCI has focused on expert users, contestability can also be designed into end-user systems.

One of the major goals of this workshop is to explore how HCI and CSCW researchers can draw on experiences in other domains to understand the design of contestability as part of the algorithmic experience. Both within computing (e.g., the history of expert systems) and in other domains (e.g., law, education, insurance), how has contestability been defined and designed? What insights can be transferred, and what needs to be explored anew? And what would it mean to move from contesting decisions to achieving a deeper contestability? Finally, we hope to develop an understanding of how contestability offers value that differs from, or can add to, other popular approaches to automated decision making (e.g., transparency or explanations).

Foundations

What are the ethical foundations of contestability? And how does it connect to (or create tensions with) notions of fairness, accountability, and trustworthiness? While feeling that one's voice has been heard addresses a fundamental need for perceptions of fairness, with particularly strong effects for marginalized or disempowered populations, current designs for contestability do not always achieve this result. Is this because they are designed for contestation rather than for deeper notions of contestability? Or is it simply due to failures in system design? In addition, when contestability within the decision-making process leads to inconsistent outcomes across users, does that challenge notions of fairness overall? In this workshop, we hope to engage deeply with questions ranging from the practical implementation of machine learning decision-making systems to the ethical frameworks that underlie them.

Goals

What are the possible goals for contestability in algorithmic systems? In many existing systems, the target for contestation or appeals is changing an individual output. Yet even this modest goal may not succeed for users; recent work on content moderation systems has found that users experience the appeal process as "speaking into a void." At the same time, given the high impact of algorithmic decision making, interest has also grown in contesting decision-making processes at scale, often in the form of auditing these systems and detecting bias in them (for example, collectively auditing Twitter's content moderation around harassment). Thus, alternative goals for contesting algorithmic systems may include changing fundamental aspects of decision-making processes or systematically increasing user trust.

Audiences

Who are the audiences for contestability? How and why does contestability differ when the audience is experts, lay users, or even regulators? Are there other audiences that should be taken into account? Decisions about how contestability is defined signal important judgments about relative authority, but defining the audience and goals of contestability for decision-making systems also shapes the timing and degree of interaction. That is, regulators might interact with a decision-making system at different times and in different ways than lay users, even if all designs aimed for contestability.

Agenda

The workshop will be a one-day event held on Saturday. More details on the agenda are coming soon.