
Section 13.3 Impact of program design

Usually, when we think about program design, we are thinking about how to make our programs correct and efficient. But in an age of rapid development in technology and automation, the work of software developers has real impacts on people's lives, and we need to think about the effects the software we build will have on people and society.
Programmers also have to be aware of legal and ethical issues in how software is developed. Much of modern software is developed using code written by others and published as open source, which is free to use. But even freely available open source software comes with licenses that developers must abide by if they want to use it.
Professional software developers and computer scientists have codes of conduct that they can follow, such as the ACM Code of Ethics, which includes guidelines like "Avoid harm" and "Respect privacy." But in order to live up to codes like these, we need to understand the ways software can have unintended harmful effects.

Subsection 13.3.1 Software reliability

Sometimes the harm is caused by unreliable software. Software can work in many circumstances and still sometimes fail in catastrophic ways. System reliability refers to a program being able to perform its tasks as expected under stated conditions without failure. Responsible programmers make an effort to maximize system reliability by testing the program under a variety of conditions.
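For example, a responsible programmer might test a method with typical input, boundary input, and unexpected input before trusting it. Here is a minimal sketch in Java; the average method and the test values are hypothetical, made up just to illustrate testing under a variety of conditions.

    public class ReliabilityExample
    {
        // Returns the average of the values, or 0.0 if the array is empty or null.
        // Handling the empty case keeps the method from failing (dividing by zero)
        // under an unexpected condition.
        public static double average(int[] values)
        {
            if (values == null || values.length == 0)
            {
                return 0.0;
            }
            int sum = 0;
            for (int v : values)
            {
                sum += v;
            }
            return (double) sum / values.length;
        }

        public static void main(String[] args)
        {
            // Test under a variety of conditions: typical, boundary, and unusual input.
            System.out.println(average(new int[] {80, 90, 100})); // typical case: 90.0
            System.out.println(average(new int[] {75}));          // single value: 75.0
            System.out.println(average(new int[] {}));            // empty array: 0.0, no crash
        }
    }

Testing only the typical case would miss the empty-array condition, and untested conditions like this are exactly where failures appear after software is released.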
But even programs that work perfectly, according to their intended design, can have unintended harmful effects that we need to think about.

Subsection 13.3.2 AI and machine learning

The fields of AI (Artificial Intelligence) and machine learning increasingly pose ethical questions in our world. For example, self-driving cars that use machine learning to follow lanes and avoid collisions could make our world much safer. Self-driving cars do not get distracted by text messages and do not drink and drive. However, what if the car needs to make an ethical decision about avoiding a collision with a pedestrian by causing an accident that may also lead to the loss of life? Who makes these decisions? The software? The programmers? If you were a programmer for a self-driving car, how would you approach such decisions? Here's a great Wired article or BBC video about self-driving cars and ethical problems.
Watch the following video about the impacts of AI:
Here are some other interesting videos about the impacts of AI in different domains:

Subsection 13.3.3 Algorithmic bias

AI and machine learning also raise ethical questions about data collection, privacy, and resource use. The machine learning algorithms used to create software like ChatGPT require massive amounts of data to learn from and also use massive amounts of energy, both to train and to run. But where does this data come from? Often the data is collected from the internet, and the internet is full of biases. For example, if you search for professions like "programmer", "doctor", or "CEO" in https://images.google.com/, you will probably see mostly images of white men. This reflects a bias in our world that AIs may learn.
An AI could then generate text or images that are biased against historically underrepresented groups. For example, in 2014 a prominent tech company started building an automated hiring tool, a resume-filtering AI trained on its current employees' resumes, and ended up with an AI system that was biased against women (https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against). This is a problem because the AI learns from biased data and then reproduces the same bias in outcomes that could affect people's lives.
Algorithmic bias describes systemic and repeated errors in a program that create unfair outcomes for a specific group of users. Bias in data can lead to unfair and unethical outcomes. For instance, facial recognition software has been shown to have higher error rates for people with darker skin tones. This is because the data used to train these algorithms often contains fewer examples of people with darker skin tones. As a result, the software is less accurate for these individuals, which can lead to discriminatory practices.
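One simple way to see how a data set could produce this kind of bias is to count how many examples it contains from each group before training on it. The sketch below is a hypothetical Java example; the group labels and counts are made up for illustration and are not from any real data set.

    public class DatasetBalanceCheck
    {
        public static void main(String[] args)
        {
            // Hypothetical group label for each image in a small training set.
            String[] groups = {"lighter", "lighter", "lighter", "lighter",
                               "lighter", "lighter", "lighter", "darker"};

            int lighterCount = 0;
            int darkerCount = 0;
            for (String g : groups)
            {
                if (g.equals("lighter"))
                {
                    lighterCount++;
                }
                else
                {
                    darkerCount++;
                }
            }

            // A large imbalance is a warning sign: a model trained on this data
            // will likely be less accurate for the underrepresented group.
            System.out.println("Lighter skin tone examples: " + lighterCount);
            System.out.println("Darker skin tone examples: " + darkerCount);
        }
    }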
Watch the following Gender Shades video about gender and race bias in face recognition algorithms by computer scientist Joy Buolamwini (MIT Media Lab and Algorithmic Justice League).

Activity 13.3.1.

Explain the importance of recognizing data quality and potential issues such as data bias when using a data set in AI/ML applications. (A data set or dataset is a collection or set of data.) We encourage you to work on this activity in pairs or groups.
Here are some other interesting videos to watch about bias in algorithms:
Programmers should be aware of the data set collection method and the potential for bias before using the data to extrapolate new information or draw conclusions. Some data sets are incomplete or contain inaccurate data. Using such data in the development or use of a program can cause the program to work incorrectly or inefficiently. The contents of a data set might also be related to a specific question or topic and might not be appropriate for giving correct answers or extrapolating information for a different question or topic.
It is important for programmers and data scientists to take steps to mitigate bias in data collection and use. This can include using diverse and representative datasets, regularly testing algorithms for bias, and being transparent about the limitations of the software. Here are some steps that can be taken to address bias in machine learning:
  • Use diverse and representative data sets to train algorithms.
  • Regularly test algorithms for bias and accuracy (see the sketch after this list).
  • Be transparent about the limitations and potential biases of the software.
  • Involve diverse teams in the development and testing of algorithms.
  • Implement ethical guidelines and standards for the use of AI and machine learning.
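The following sketch illustrates the second step, regularly testing an algorithm for bias and accuracy, by comparing a model's accuracy on two groups of test examples. It is a hypothetical Java example; the results, group names, and the 0.1 threshold are made up for illustration.

    public class BiasCheck
    {
        // Returns the fraction of test results that were correct.
        public static double accuracy(boolean[] correct)
        {
            int right = 0;
            for (boolean c : correct)
            {
                if (c)
                {
                    right++;
                }
            }
            return (double) right / correct.length;
        }

        public static void main(String[] args)
        {
            // Hypothetical test results: true means the model was correct on that example.
            boolean[] groupA = {true, true, true, true, true, true, true, false};
            boolean[] groupB = {true, false, true, false, true, false, false, false};

            double accA = accuracy(groupA);
            double accB = accuracy(groupB);
            System.out.println("Group A accuracy: " + accA);
            System.out.println("Group B accuracy: " + accB);

            // A large gap between groups is evidence of algorithmic bias that
            // should be investigated before the software is used on real people.
            if (Math.abs(accA - accB) > 0.1)
            {
                System.out.println("Warning: accuracy differs noticeably between groups.");
            }
        }
    }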

Project 13.3.2.

Go to https://datasetsearch.research.google.com/ and search for a dataset for "face recognition" or another topic you are interested in. Find an appropriate ("good") dataset and an inappropriate ("bad") dataset for your topic and put links to them below. Explain why the datasets are appropriate or inappropriate; for example, the inappropriate dataset might be too small, incomplete, or biased. Explain how the choice of your dataset could affect the results of a program that uses the data. We encourage you to work on this activity in pairs or groups.

Subsection 13.3.4 Data privacy

Your phone keeps a lot of information about you, including where you have been, what you buy, what games you play, etc. Here's a video about the massive amounts of data our smartphones and computers collect about us. If you have used your phone to give you directions to go somewhere, it probably tracks your location. Follow the directions in https://www.lifewire.com/location-history-google-maps-iphone-1683392 to see if your location history settings are on. You can also turn off location tracking, but tracking is useful when you want directions, and it's free. Do the benefits of apps that provide driving directions outweigh the lack of privacy for you? In what situations would it be beneficial or harmful for the app to track your location?
Figure 13.3.1.
As users, we often don't realize how much personal data we are giving away. If you use a computer or phone, your personal privacy is at risk. As computer programmers, we must be aware of the risks to data privacy when our code collects and stores personal data on computer systems. Programmers should attempt to safeguard the personal privacy of the user. Legally and ethically, we must ask the users for permission to access and store their data. And if there are data breaches where the data is stolen, we must inform the affected users. The laws are slowly catching up to our technology, and many countries and states are passing laws to protect data privacy.
Computer use and programs have beneficial and/or harmful impacts on personal security. Software apps for maps and driving directions are very useful, but they have impacts on personal security and privacy if they keep track of your location. This information could be beneficial, for example if you are lost and need to be found, but could be harmful and unsafe if someone unauthorized gains access to your location.
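One small way a programmer can reduce these risks is to store less detail than the device provides, for example by rounding a location before saving it. The sketch below is a hypothetical Java illustration (the coordinates and the amount of rounding are made up), not a complete privacy solution.

    public class LocationPrivacyExample
    {
        // Rounds a GPS coordinate to 2 decimal places (roughly a 1 km area)
        // so the stored value reveals a neighborhood, not an exact address.
        public static double coarsen(double coordinate)
        {
            return Math.round(coordinate * 100) / 100.0;
        }

        public static void main(String[] args)
        {
            double latitude = 33.748995;    // hypothetical precise reading
            double longitude = -84.387982;

            // Store only the coarsened values instead of the exact location.
            System.out.println("Stored latitude: " + coarsen(latitude));
            System.out.println("Stored longitude: " + coarsen(longitude));
        }
    }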

Activity 13.3.3.

Explore a popular app or web site and its data collecting practices. Explain the risks to privacy from collecting and storing personal data on computer systems. Discuss the ethical and legal problems that may arise and how programmers can try to avoid them. We encourage you to work on this activity in pairs or groups.
Here are some interesting video resources about data collection and data privacy:

Subsection 13.3.5 Groupwork: Impacts of CS

In groups, choose a software application that has social and ethical implications. Discuss the beneficial and harmful effects of this software application. Discuss the ethical problems that may arise and how programmers can try to avoid them. If you chose an AI application, make sure you narrow it to a particular domain or use. Prepare a presentation for your class.

Project 13.3.4.

Describe the software application that you have chosen and discuss its beneficial and harmful effects.

Subsection 13.3.6 Summary

  • (AP 3.2.A.1) System reliability refers to the program being able to perform its tasks as expected under stated conditions without failure. Programmers should make an effort to maximize system reliability by testing the program with a variety of conditions.
  • (AP 3.2.A.2) The creation of programs has impacts on society, the economy, and culture. These impacts can be both beneficial and harmful. Programs meant to fill a need or solve a problem can have unintended harmful effects beyond their intended use.
  • (AP 3.1.A.3) Legal issues and intellectual property concerns arise when creating programs. Programmers often reuse code written by others and published as open source and free to use. Incorporation of code that is not published as open source requires the programmer to obtain permission and often purchase the code before integrating it into their program.
  • (AP 4.1.A.1) When using a computer, personal privacy is at risk. When developing new programs, programmers should attempt to safeguard the personal privacy of the user.
  • Computer use and the creation of programs have an impact on personal security and data privacy. These impacts can be beneficial and/or harmful.
  • (AP 4.1.B.1) Algorithmic bias describes systemic and repeated errors in a program that create unfair outcomes for a specific group of users.
  • (AP 4.1.B.2) Programmers should be aware of the data set collection method and the potential for bias before using the data to extrapolate new information or draw conclusions.
  • (AP 4.1.B.3) Some data sets are incomplete or contain inaccurate data. Using such data in the development or use of a program can cause the program to work incorrectly or inefficiently.
  • (AP 4.1.C.1) Contents of a data set might be related to a specific question or topic and might not be appropriate to give correct answers or extrapolate information for a different question or topic.