How a new program at Stanford is embedding ethics into computer science
Shortly after Kathleen Creel started her position at Stanford as the inaugural Embedded EthiCS fellow some two years ago, a colleague sent her a 1989 newspaper clipping about the launch of Stanford’s first computer ethics course to show her how the university has long been committed to what Creel was tasked with: helping Stanford students understand the moral and ethical dimensions of technology.
While much has changed since the article was first published in the San Jose Mercury News, many of the issues that reporter Tom Philp discussed with renowned Stanford computer scientist Terry Winograd in the article remain relevant.
Describing some of the topics Stanford students would deliberate in Winograd’s course during a time Philp described as “rapidly changing,” he wrote: “Should students freely share copyrighted software? Should they be concerned if their work has military applications? Should they submit a project on deadline if they are concerned that potential bugs could ruin people’s work?”
Three decades later, Winograd’s course on computer ethics has evolved, but now it is joined by a host of other efforts to expand ethics curricula at Stanford. Indeed, one of the main themes of the university’s Long Range Vision is embedding ethics across research and education. In 2020, the university launched the Ethics, Society, and Technology (EST) Hub, whose goal is to help ensure that technological advances born at Stanford address the full range of ethical and societal implications.
That same year, the EST Hub, in collaboration with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the McCoy Family Center for Ethics in Society, and the Computer Science Department, created the Embedded EthiCS program, which embeds ethics modules into core computer science courses. Creel is the program’s first fellow.
Stanford University, situated in the heart of Silicon Valley and deeply intertwined with the region’s technological innovations and their global impact, is a vital place for future engineers and technologists to think through their societal responsibilities, Creel said.
“I think teaching ethics specifically at Stanford is very important because many Stanford students go on to be very influential in the world of tech,” said Creel, whose own research explores the moral, political, and epistemic implications of how machine learning is used in the world.
“If we can make any difference in the culture of tech, Stanford is a good place to be doing it,” she said.
Establishing an ethical mindset
Creel is both a computer scientist and a philosopher. After double-majoring in both fields at Williams College in Massachusetts, she worked as a software engineer at MIT Lincoln Laboratory on a large-scale satellite project. There, she found herself asking profound, philosophical questions about the dependence on technology in high-stakes situations, particularly as AI-based systems have evolved to inform people’s decision-making. She wondered: how do people know they can trust these tools, and what information do they need in order to believe they can be a reliable addition to, or substitute for, human judgment?
Creel decided to confront these questions head-on in graduate school, and in 2020, she earned her PhD in history and philosophy of science at the University of Pittsburgh.
During her time at Stanford, Creel has collaborated with faculty and lecturers across Stanford’s Computer Science department to identify opportunities for students to think through the social consequences of technology – even if only for a few minutes at a time.
Rather than confining ethics to a standalone seminar or a dedicated class topic presented at the beginning or end of a course, the Embedded EthiCS program aims to intersperse ethics throughout the quarter by integrating it into core course assignments, class discussions, and lectures.
“The objective is to weave ethics into the curriculum organically so that it feels like a natural part of their practice,” said Creel, who has worked with professors on nine computer science courses, including CS106A: Programming Methodology; CS106B: Programming Abstractions; CS107: Computer Organization and Systems; CS109: Introduction to Probability for Computer Scientists; CS221: Artificial Intelligence: Principles and Techniques; CS161: Design and Analysis of Algorithms; and CS47B: Design for Behavior Change.
During her fellowship, Creel gave engaging lectures about specific ethical issues and worked with professors to develop new coursework that demonstrates how the choices students will make as engineers carry broader implications for society.
One of the instructors Creel worked with was Nick Troccoli, a lecturer in the Computer Science Department. Troccoli teaches CS107: Computer Organization and Systems, the third course in Stanford’s introductory programming sequence, which focuses mostly on how computer systems execute programs. Although some initially wondered how ethics would fit into such a technical curriculum, Creel and Troccoli, along with course assistant Brynne Hurst, found clear hooks for ethics discussions in assignments, lectures, and labs throughout the course.
For example, they refreshed a classic assignment about how to figure out a program’s behavior without seeing its code (“reverse engineering”). Students were asked to imagine they were security researchers hired by a bank to discover how a data breach had occurred, and how the hacked information could be combined with other publicly available information to uncover bank customers’ secrets.
Creel talked about how anonymized datasets can be reverse engineered to reveal identifying information and why that is a problem. She introduced the students to different models of privacy, including differential privacy, a technique that protects individuals in a dataset by adding carefully calibrated statistical noise to query results, so that no single person’s information can be confidently inferred.
Students were then tasked with recommending ways to further anonymize or obfuscate the data to prevent such breaches.
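As a rough illustration of the idea (not taken from the course materials), the short Python sketch below answers a counting query about a hypothetical customer table by adding Laplace noise calibrated to a privacy parameter epsilon, so the published number reveals little about whether any particular customer appears in the table. The function name and figures are invented for illustration.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Return a differentially private version of a counting query.

    A count changes by at most 1 when any single person is added to or
    removed from the dataset (sensitivity 1), so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical query: how many customers in the breached table share a ZIP code.
exact = 42
for epsilon in (0.1, 1.0, 10.0):
    # Smaller epsilon -> more noise -> stronger privacy, less accuracy.
    print(f"epsilon={epsilon}: reported count = {laplace_count(exact, epsilon):.1f}")
```

Smaller values of epsilon add more noise and give stronger privacy at the cost of accuracy, which is the kind of trade-off students were asked to weigh in their recommendations.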
“Katie helped students understand what potential scenarios may arise as a result of programming and how ethics can be a tool to allow you to better understand those kinds of issues,” Troccoli said.
Another instructor Creel worked with was Assistant Professor Aviad Rubinstein, who teaches CS161: Design and Analysis of Algorithms.
Creel and Rubinstein, joined by research assistant Ananya Karthik and course assistant Golrokh Emami, came up with an assignment in which students were asked to create an algorithm to help a popular distributor decide where to locate its warehouses and which customers would receive one-day versus two-day delivery.
Students worked through the many variables that determine warehouse location, such as balancing cost against existing customer demand and driver route efficiency. If the algorithm prioritized only these features, closer examination revealed that historically redlined Black American neighborhoods would be excluded from one-day delivery.
Students were then asked to develop another algorithm that addressed the delivery disparity while balancing even coverage against cost.
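The tension in the assignment can be illustrated with a small, hypothetical sketch (Python, not the actual course assignment). In the toy data below, two neighborhoods have low recorded demand; a brute-force site selection that minimizes demand-weighted cost alone leaves them outside the one-day delivery zone, while adding a coverage requirement changes which warehouses are chosen. All site names, costs, and demand figures are invented for illustration.

```python
import itertools

# Hypothetical toy data: three candidate warehouse sites and four neighborhoods.
# C and D have low recorded demand (e.g., neighborhoods under-served historically).
demand = {"A": 500, "B": 400, "C": 20, "D": 20}      # orders per week
cost = {                                              # delivery cost from site to neighborhood
    "site1": {"A": 1, "B": 2, "C": 9, "D": 9},
    "site2": {"A": 2, "B": 1, "C": 8, "D": 9},
    "site3": {"A": 9, "B": 9, "C": 2, "D": 2},
}
ONE_DAY = 3  # a neighborhood gets one-day delivery if some chosen site serves it at cost <= 3

def total_cost(sites):
    """Demand-weighted cost when each neighborhood is served by its cheapest chosen site."""
    return sum(demand[n] * min(cost[s][n] for s in sites) for n in demand)

def one_day_coverage(sites):
    """Neighborhoods that would receive one-day delivery under this choice of sites."""
    return {n for n in demand if min(cost[s][n] for s in sites) <= ONE_DAY}

def pick_sites(k, require_full_coverage):
    """Brute-force the cheapest k sites, optionally requiring one-day delivery everywhere."""
    best = None
    for combo in itertools.combinations(cost, k):
        if require_full_coverage and one_day_coverage(combo) != set(demand):
            continue
        if best is None or total_cost(combo) < total_cost(best):
            best = combo
    return best

for fair in (False, True):
    sites = pick_sites(k=2, require_full_coverage=fair)
    print(f"coverage required: {fair} -> sites {sites}, "
          f"one-day delivery for {sorted(one_day_coverage(sites))}")
```

In this toy setup, the cost-only choice picks the two sites nearest the high-demand neighborhoods, while the coverage-constrained choice accepts a modestly higher cost so that every neighborhood qualifies for one-day delivery.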
The goal of the exercise was to show students that as engineers, they are also decision-makers whose choices carry real-world consequences that can affect equity and inclusion in communities across the country. Students were also asked to share what those concepts mean to them.
“The hope is to show them this is a problem they might genuinely face and that they might use algorithms to solve, and that ethics will guide them in making this choice,” Creel said. “Using the tools that we’ve taught them in the ethics curriculum, they will now be able to understand that choosing an algorithm is indeed a moral choice that they are making, not only a technical one.”
Developing moral courage
Some students have shared with Creel how they themselves have been subject to algorithmic biases.
For example, when the pandemic shuttered high schools across the country, some school districts turned to online proctoring services to help them deliver exams remotely. These services automate the supervision of students and their surroundings while they take a test.
However, these AI-driven services have come under criticism, particularly around issues concerning privacy and racial bias. For example, the scanning software sometimes fails to detect students with darker skin, Creel said.
Sometimes the system simply glitches and the AI flags a student even though no offense has taken place. But because of the proprietary nature of the technology, how the algorithm reached its decision is not always apparent.
“Students really understand how if these services were more transparent, they could have pointed to something that could prove why an automated flag that may have gone up was wrong,” said Creel.
Overall, Creel said, students have been eager to develop the skillset to help them discuss and deliberate on the ethical dilemmas they could encounter in their professional careers.
“I think they are very aware that they, as young engineers, could be in a situation where someone above them asks them to do something that they don’t think is right,” she added. “They want tools to figure out what is right, and I think they also want help building the moral courage to figure out how to say no and to interact in an environment where they may not have a lot of power. For many of them, it feels very important and existential.”
Creel is now transitioning from her role at Stanford to Northeastern University, where she will hold a joint appointment as an assistant professor of philosophy and computer science.