Course Description

Human decision making is increasingly being displaced by predictive algorithms. Judges sentence defendants based on statistical risk scores; regulators take enforcement actions based on predicted violations; advertisers target materials based on demographic attributes; and employers evaluate applicants and employees based on machine-learned models. A predominant concern with the rise of such algorithmic decision making is that it may replicate or exacerbate human bias. Algorithms might discriminate, for instance, based on race or gender. This course surveys the legal and ethical principles for assessing the equity of algorithms, describes techniques for designing fair systems, and considers how anti-discrimination law and the design of algorithms may need to evolve to account for machine bias.

Concepts will be developed in part through guided in-class coding exercises. Admission is by consent of instructor and is limited to 20 students. Grading is based on reflection papers, class participation, and a final project. Prerequisite: CS 106A or equivalent knowledge of coding.
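
For a flavor of those exercises, here is a minimal sketch (illustrative only, not drawn from the course materials) of one common fairness diagnostic: the gap in the rates at which a score threshold flags members of two groups. The group labels, score distributions, and threshold are all hypothetical.

```python
# Illustrative sketch only -- not actual course material.
# Compares the rate at which a threshold rule on risk scores
# flags members of two synthetic demographic groups.

import random

random.seed(0)

# Synthetic (group, risk_score) pairs with a deliberate score gap.
records = [("A", random.gauss(0.45, 0.15)) for _ in range(500)] + \
          [("B", random.gauss(0.55, 0.15)) for _ in range(500)]

THRESHOLD = 0.5  # flag anyone whose score exceeds this cutoff

def flag_rate(group):
    """Fraction of a group's members whose score exceeds the threshold."""
    scores = [s for g, s in records if g == group]
    return sum(s > THRESHOLD for s in scores) / len(scores)

rate_a, rate_b = flag_rate("A"), flag_rate("B")
print(f"Group A flagged: {rate_a:.1%}")
print(f"Group B flagged: {rate_b:.1%}")
print(f"Demographic parity gap: {abs(rate_a - rate_b):.1%}")
```

A gap like this is only one of several competing fairness measures; much of the course concerns when such a disparity does, and does not, signal a problem.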

Instructors
Sharad Goel
Daniel Ho
Jerry Lin (TA)
Schedule
Tuesdays, 4:30–7:15 PM, in Neukom 102.
Evaluation
Grades are based on class attendance, participation, short reflection papers, and a final project. You are required to submit reflection papers for four class sessions of your choosing; each should generally be 2–3 pages (double-spaced). These papers should analyze, critique, or otherwise engage with any aspect of the reading (and may bring in outside materials), but should not merely summarize it. Reflection papers are due by 5 PM the day before the corresponding class session.
Final projects
Final projects should be completed in interdisciplinary teams of 3–5 students. The expectation is a research paper that either (a) describes the emerging use of algorithms in a new domain and analyzes the potential for bias, (b) analyzes the legal and policy implications of such uses (e.g., whether current anti-discrimination law will need to adjust to these developments), and/or (c) adapts methods for testing for bias and measuring fairness to a given domain. These projects are an opportunity to examine both a wider range of methods (e.g., image recognition) and a wider range of laws (e.g., Title VI of the Civil Rights Act of 1964); a minimal illustration of one such bias test appears below.
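
As one concrete example of a legal bias test a project might adapt, the sketch below applies the EEOC's "four-fifths rule," a rough screen for disparate impact in selection procedures. The applicant and selection counts are hypothetical, and real analyses would pair this screen with statistical tests.

```python
# Illustrative sketch only: the EEOC's four-fifths rule, a rough
# screen for disparate impact in selection procedures.
# All counts below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

# Hypothetical applicant pools and selection counts by group.
rate_majority = selection_rate(selected=60, applicants=100)
rate_minority = selection_rate(selected=30, applicants=100)

# The rule flags a ratio of selection rates below 4/5 (0.8).
impact_ratio = rate_minority / rate_majority
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: potential disparate impact.")
else:
    print("Passes the four-fifths screen.")
```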