Course Description

Human decision making is increasingly being displaced by predictive algorithms. Judges sentence defendants based on statistical risk scores; regulators take enforcement actions based on predicted violations; advertisers target materials based on demographic attributes; and employers evaluate applicants and employees based on machine-learned models. One concern with the rise of such algorithmic decision making is that it may replicate or exacerbate human bias. This course surveys the legal and ethical principles for assessing the equity of algorithms, describes statistical techniques for designing fairer systems, and considers how anti-discrimination law and the design of algorithms may need to evolve to account for machine bias. Concepts will be developed in part through guided in-class coding exercises. Prerequisite: CS 106A or equivalent programming experience.
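To give a flavor of the in-class coding exercises, below is a minimal sketch of one common equity check: comparing false positive rates across groups for a thresholded risk score. This sketch is illustrative only, not course material; the column names, toy data, and 0.5 threshold are assumptions.

    # Illustrative sketch (not course material): auditing a binary risk
    # classifier for differences in error rates across groups.
    # The columns ("group", "label", "risk_score") and the 0.5 threshold
    # are hypothetical.
    import pandas as pd

    def false_positive_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
        """Return the false positive rate of thresholded risk scores, by group."""
        negatives = df[df["label"] == 0]                      # true negatives only
        predicted_positive = negatives["risk_score"] >= threshold
        return predicted_positive.groupby(negatives["group"]).mean()

    # Example usage with toy data:
    toy = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b"],
        "label": [0, 0, 1, 0, 0, 1],
        "risk_score": [0.7, 0.2, 0.9, 0.4, 0.3, 0.8],
    })
    print(false_positive_rates(toy))

Large gaps between the group-level rates returned by a check like this are one (contested) signal of disparate impact, a theme the course takes up in both its statistical and legal readings.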

Admission is by consent of instructor and is limited to approximately 20 students. If you're interested in taking the class, please complete the course application by Friday, March 20, 2020. Decisions will be announced by March 27, 2020.

Instructors
Sharad Goel
Jerry Lin (TA)
Schedule
Tuesdays, 4:30–7:20 PM
Evaluation
Grades are based on class attendance, participation, short reflection papers, and a final project. You are required to submit reflection papers for two class sessions of your choosing; each should generally be 2-3 pages (double-spaced). These papers may address any aspect of the material discussed in class (and may draw on outside sources), but should analyze and critique the issues rather than merely summarize them. Reflection papers are due one week after the corresponding class session.
Final projects
Final projects should be conducted in interdisciplinary teams of 3-5 students. The expectation is a research paper that: (a) describes the emerging use of algorithms in a new domain and analyzes the potential benefits and harms; (b) analyzes the legal and policy implications of new uses of algorithms (e.g., whether current anti-discrimination law will need to adjust to these developments); and/or (c) develops computational methods for assessing the equity of algorithms in a given domain. These projects are opportunities to examine both a wider range of techniques (e.g., image recognition) and a wider range of laws (e.g., Title VI of the Civil Rights Act of 1964) than those covered in class.