What are the ethical considerations regarding the privacy and control of consumer information and big data, especially in the aftermath of recent large-scale data breaches? This course provides a framework to analyze these concerns as you examine the ethical and privacy implications of collecting and managing big data.
Explore the broader impact of the data science field on modern society and the principles of fairness, accountability, and transparency as you gain a deeper understanding of the importance of a shared set of ethical values. You will examine the need for voluntary disclosure when leveraging metadata to inform basic algorithms and complex artificial intelligence systems. You will also learn best practices for responsible data management, the significance of the Fair Information Practice Principles (FIPPs), and the laws concerning the "right to be forgotten."
This course will help you answer questions such as who owns data, how we value privacy, how we obtain informed consent, and what it means to be fair.
Data scientists, and anyone beginning to use or expand their use of data, will benefit from this course. No prior knowledge is needed.
What are Ethics?
Module 1 of this course establishes a basic foundation in the simple utilitarian ethics we will use throughout the course. The lecture material and the quiz questions are designed to get most people to agree about right and wrong using the utilitarian framework taught here. If you bring your own moral sense to bear, or think hard about possible counter-arguments, you may well arrive at a different conclusion. But that discussion is not what this course is about, so resist the temptation, and we can jointly lay a common foundation for the rest of the course.
Graded: Module 1 Quiz
History, Concept of Informed Consent
Early experiments on human subjects were conducted by scientists intent on advancing medicine for the benefit of all humanity, but with disregard for the welfare of the individual subjects. Often these experiments were performed by white scientists on Black subjects. In this module we will discuss the laws that govern the principle of informed consent. We will also discuss why informed consent does not work well for retrospective studies or for the customers of electronic businesses.
Graded: Module 2 Quiz
Data Ownership
Who owns data about you? We'll explore that question in this module. Examples of personal data include copyrights for biographies, ownership of photos posted online, reviews on Yelp and TripAdvisor, public data capture, and the sale of data. We'll also explore the limits on the recording and use of data.
Graded: Module 3 Quiz
Privacy
Privacy is a basic human need. Privacy means the ability to control information about yourself, not necessarily the ability to hide things. We have seen the rise of different value systems with regard to privacy: kids today are more likely to share personal information on social media, for example. But while values are changing, this doesn't remove the fundamental need to be able to control personal information. In this module we'll examine the relationship between the services we are provided and the data we provide in exchange: for example, the location of a cell phone. We'll also compare and contrast "data" with "metadata".
Graded: Module 4 Quiz
Anonymity
Certain transactions can be performed anonymously, but many cannot, including those involving the physical delivery of a product. Two examples related to anonymous transactions we'll look at are blockchains and Bitcoin. We'll also look at some of the drawbacks that come with anonymity.
Graded: Module 5 Quiz
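To make the blockchain idea concrete before the module: at its core, a blockchain is a list of records in which each block includes a hash of its predecessor, so past records cannot be altered without detection. The sketch below is a toy illustration of that hash-chaining idea; it is not part of the course materials, and the data strings and function names are invented for this example.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash depends on its data and the previous block's hash."""
    block = {"data": data, "prev_hash": prev_hash}
    body = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify each block's own hash and its link to the predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]},
                          sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(body).hexdigest():
            return False  # block contents were altered after hashing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

# Build a three-block chain, then tamper with the middle block.
chain = [make_block("genesis", "0")]
chain.append(make_block("alice->bob: 5", chain[-1]["hash"]))
chain.append(make_block("bob->carol: 2", chain[-1]["hash"]))
print(chain_is_valid(chain))   # True
chain[1]["data"] = "alice->bob: 500"
print(chain_is_valid(chain))   # False: tampering breaks the hash check
```

Real blockchains add consensus, signatures, and proof-of-work on top of this chaining, but the tamper-evidence property shown here is the foundation.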
Data Validity
Data validity is not a new concern. All too often, we see the inappropriate use of data science methods leading to erroneous conclusions. This module points out common errors, in language suited for a student with limited exposure to statistics. We'll focus on the notion of a representative sample: opinionated customers, for example, are not necessarily representative of all customers.
Graded: Module 6 Quiz
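The representative-sample pitfall above can be simulated in a few lines. This is a hypothetical sketch, not course material: the ratings, review probabilities, and names are invented. The assumption is simply that unhappy customers are more likely to leave a review, which makes the average review score understate true satisfaction.

```python
import random

random.seed(0)

# Hypothetical population of customer ratings, skewed toward satisfied (4s and 5s).
population = [random.choice([1, 2, 3, 4, 4, 4, 5, 5]) for _ in range(10_000)]

def review_probability(rating):
    # Assumed behavior: unhappy customers (rating <= 2) review five times as often.
    return 0.25 if rating <= 2 else 0.05

# Only a biased subset of the population actually posts a review.
reviews = [r for r in population if random.random() < review_probability(r)]

true_mean = sum(population) / len(population)
review_mean = sum(reviews) / len(reviews)
print(f"true mean rating: {true_mean:.2f}")
print(f"review mean:      {review_mean:.2f}")  # noticeably lower than the true mean
```

The reviews are a self-selected, not a representative, sample; conclusions drawn from them about "all customers" would be wrong even though every individual review is genuine.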
Algorithmic Fairness
What could be fairer than a data-driven analysis? Surely a dumb computer cannot harbor prejudice or stereotypes. Yet while the analysis technique itself may be completely neutral, the assumptions, the model, the training data, and other boundary conditions are all set by humans, who may reflect their biases in the result, possibly without even intending to do so. Only recently have people begun to think about how algorithmic decisions can be unfair. Consider this article, published in the New York Times. This module discusses this cutting-edge issue.
Graded: Module 7 Quiz
Societal Consequences
In Module 8, we consider societal consequences of data science that we should be concerned about even if there are no issues with fairness, validity, anonymity, privacy, ownership, or human-subjects research. These "systemic" concerns are often the hardest to address, yet they are just as important as the other issues discussed before. For example, we consider ossification: the tendency of algorithmic methods to learn and codify the current state of the world and thereby make it harder to change. Information asymmetry has long been exploited for the advantage of some, to the disadvantage of others. Information technology makes the spread of information easier and hence generally decreases asymmetry. However, Big Data sets and sophisticated analyses increase asymmetry in favor of those with the ability to acquire and access them.
Graded: Module 8 Quiz
Code of Ethics
Finally, in Module 9, we tie all the issues we have considered together into a simple, two-point code of ethics for the practitioner.
Graded: Module 9 Quiz
Graded: Data Ethics Case Study
This module contains lists of attributions for the external audio-visual resources used throughout the course.