CSCI 174: Fall 2024

AI, Ethics, & Society

Lecture Discussion Questions: Coded Bias

The following discussion questions are based on today's readings / videos:

General Impressions

  1. What information or interviews stood out to you while watching the film? Why?
  2. What were some of the AI examples discussed in the film and how did they work? Have you encountered any of these algorithms? If so, which one(s)?

1. Predictive Algorithms

Cathy O’Neil defines algorithms as: “Using historical information to make a prediction about the future.”

  1. When is this a reasonable thing to do?
  2. When is this an unreasonable thing to do?
  3. What are some examples of systems that use dubious input variables to predict certain kinds of outcomes?
  4. Studies found racial bias in algorithms used in courts for sentencing and in hospitals for healthcare recommendations, even though race was not an input to the models. How does this happen?! What is causing the biased results?
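One common answer to question 4 is proxy variables: a model never sees race, but it does see a feature (neighborhood, zip code, prior contact with a system) that is strongly correlated with race, and the historical outcomes it learns from were themselves biased. The sketch below is a minimal, purely synthetic illustration of that mechanism; the group names, the 90% proxy correlation, and the historical approval rates are all made-up assumptions, not data from any real system.

```python
import random

random.seed(0)

def make_person():
    """Synthetic record: a protected group, a correlated proxy feature,
    and a historically biased outcome. All numbers are illustrative."""
    group = random.choice(["A", "B"])
    # The proxy (e.g., neighborhood) matches the group 90% of the time.
    neighborhood = group if random.random() < 0.9 else ("B" if group == "A" else "A")
    # Historical outcomes were biased: group B was approved far less often.
    approved = random.random() < (0.7 if group == "A" else 0.3)
    return {"group": group, "neighborhood": neighborhood, "approved": approved}

data = [make_person() for _ in range(10_000)]

# "Model": predict the historical approval rate for each neighborhood.
# Group membership is never given to the model -- only the proxy.
rates = {}
for n in ("A", "B"):
    rows = [p for p in data if p["neighborhood"] == n]
    rates[n] = sum(p["approved"] for p in rows) / len(rows)

def predict(person):
    return rates[person["neighborhood"]]

# The model's scores still differ sharply by group, because the proxy
# effectively encodes group membership.
def mean_score(g):
    scores = [predict(p) for p in data if p["group"] == g]
    return sum(scores) / len(scores)

score_a = mean_score("A")
score_b = mean_score("B")
print(f"mean score, group A: {score_a:.2f}")
print(f"mean score, group B: {score_b:.2f}")
```

Even though the "race" column was dropped, the model reproduces the historical disparity almost entirely, which is the pattern the studies in the film describe.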

2. Intelligence

Early AI developers measured the intelligence of the technology by its ability to play games, such as chess. As Meredith Broussard (from Coded Bias) notes:

“The people who were at the Dartmouth Math Department 1956 got to decide what the field was (100 people in the whole world). One faction decided that intelligence could be demonstrated by the ability to play games. And specifically, the ability to play chess….Of course intelligence is so much more than that. There are many different kinds of intelligence. Our ideas about tech and society that we think are normal are actually ideas that come from a very small and homogeneous group of people.”

  1. Why might this definition of intelligence be limiting?
  2. Are there other forms of intelligence that you think are important to take into account when building AI systems?

3. Who benefits, who is harmed?

  1. Who is harmed by AI bias? How does power factor into who is harmed and who benefits from AI products?
  2. What civil rights are at stake when it comes to automated decision making? What protections do we need to safeguard as AI continues to develop?

4. AI and the Media

  1. What are some popular cultural depictions of artificial intelligence you have seen in film or on television? How do you think popular culture and/or the media have influenced American perceptions of AI?
  2. Have you ever talked about algorithms with friends, family, or co-workers? How would you describe an algorithm to someone who is unfamiliar with the technology?

5. Surveillance

  1. Have you ever had the experience of being surveilled? If so, how did it make you feel?
  2. Do you think police or immigration enforcers should be allowed to search databases that store your driver’s license photo or passport photo? Why or why not?

6. Cultural Biases

  1. Joy Buolamwini describes Amazon’s response to her research on bias in its products as “a continuation of the experiences I’ve had as a woman of color in tech. Expect to be discredited. Expect your research to be dismissed.” Is this an experience you can relate to in your work or in school? Why or why not?
  2. What effect do you think a more inclusive workforce would have on the tech industry? Do you think tech products would be less biased? Why or why not?

7. “We need an FDA for algorithms”

Joy Buolamwini says in the film, “Because of the power of these tools, left unregulated there’s really no kind of recourse if they’re abused. We need laws.”

  1. Do you agree that AI should be regulated by the government? Why or why not?
  2. If AI is regulated, how do you think it might be regulated? Try asking ChatGPT!

Credit

Some of these discussion questions were taken and modified from the Coded Bias discussion guide.