The CEP’s second biennial workshop on ethics and policy will bring together scholars from across North America to discuss philosophical issues in research ethics. The goal of this workshop is to promote more philosophically rigorous work in the field of research ethics and to bring more philosophers into the research ethics community. Conference Website · Draft Program · Conference Photos
Abstract: The empirical literature on human trust in artificial agents such as robots is perplexing. People seem to overtrust such agents in some circumstances and undertrust them in other circumstances. Moreover, what trust is and how it is measured shows a great deal of variability. To help advance our knowledge in this domain, I offer two proposals. First, I argue that trust is multi-dimensional and that humans can have familiar kinds of trust in a robot (i.e., in its reliability and capacity) but that the more interesting kinds of trust are of a moral kind (i.e., sincerity and ethical integrity). I show that these distinct dimensions of trust can be reliably measured and thus offer a fresh start in understanding when people will trust robots and other artificial agents. Second, if some dimensions of trust involve moral capacities, then we need to ask if and how robots can have moral capacities. To this end I offer theoretical arguments and empirical evidence to propose that moral competence consists primarily of a massive web of norms, decisions in light of these norms, judgments when such norms are violated, and a vocabulary to communicate about these norm violations. I argue that future robots can in principle exhibit these capacities, and if they do so reliably, they will deserve human trust.
Abstract: As the Internet grows more sophisticated, it is creating new threats to democracy. Social media companies such as Facebook can sort us ever more efficiently into groups of the like-minded, creating echo chambers that amplify our views. It's no accident that on some occasions, people of different political views cannot even understand each other. In this lecture, Sunstein describes how the online world creates "cybercascades," exploits "confirmation bias," and assists "polarization entrepreneurs." And he explains why online fragmentation endangers the shared conversations, experiences, and understandings that are the lifeblood of democracy. In response, Sunstein proposes practical and legal changes to make the Internet friendlier to democratic deliberation.
Abstract: Modern companies are implicitly trusted - by consumers as well as by regulators. Consumers must trust the companies on which they depend for increasingly complex products and services because they have lost the "means or skill enough to investigate for [themselves] the soundness of a product" (to quote a famous opinion that set the stage for modern products liability) - and because regulators often find themselves in the same position. Because trust is the implicit foundation of these relationships, the predicate concept of trustworthiness demands more legal substance, structure, and support. Using the example of the companies that are developing and may soon deploy automated motor vehicles, I propose a theory of the "trustworthy company" - a company that is worthy of the trust on which it too depends. I then consider how trustworthiness might be recognized and reinforced through both administrative law and common law.
Abstract: What happens when a Predator drone has as much autonomy as a self-driving car? Should machines be given the power to make life and death decisions in war? Would doing so cross a fundamental moral line? Militaries around the globe are racing to build increasingly autonomous systems, but a growing chorus of voices is raising alarm about the consequences of delegating lethal force decisions to machines.
Paul Scharre, Senior Fellow at the Center for a New American Security, is the author of the forthcoming book Army of None: Autonomous Weapons and the Future of War. He is a former Pentagon official who led the team that drafted the official Defense Department policy guidance on autonomous weapons, DoD Directive 3000.09. He is also a former Army Ranger who served multiple tours in Iraq and Afghanistan.
Considered a public health problem, gun violence is a threat to every dimension of health: it undermines physical, mental, and social well-being. Concern for the health and well-being of individuals and communities demands drawing attention to the causes and magnitude of this health risk. Yet media attention frequently exacerbates some risks to physical and mental health. While homicide in many communities is a relatively neglected sociocultural phenomenon and health risk, mass shooting events capture public attention through 24-hour news cycles and social media platforms. Coverage of these events often implies that there can be only two explanations: extremism or illness. Media coverage frequently fuels the stigma of mental illness and false perceptions that people with mental illness are dangerous. Coverage also leads to copycat violence, clustering of violent events, and tactical mimicry by people considering such attacks. Through a series of presentations and panel discussions, assembled experts will explore best practices for media coverage of gun violence.
Abstract: In an influential paper, published in 1980, Langdon Winner asked “Do artifacts have politics?” and concluded “yes!” In this presentation, which draws on my research on the ethics of autonomous vehicles, military robotics, sex robots, and aged care robots, I will explore how robots have politics. I will argue that the embodied and interactive nature of robots means that they have more politics than other sorts of artifacts. Robots have more — and more complex — “affordances” than other technologies. Robots will embody and reflect the intentions of their designers in ways that are very unlikely to be transparent to those who use or encounter them. The choices made by engineers will often have consequences for the options available to the users of robots and will in turn shape relationships between users and those around them. The power this grants designers is itself politically significant. Because, increasingly, robots will occupy the same environments as human beings, and play important social and economic roles in those environments, human-robot relations will become crucial sites of political contestation. The social policy choices necessary to realise the benefits of robots in many domains will inevitably also be political choices, with implications for relationships between stakeholders. Humanoid robots, and their behaviour, will have representational content, with implications for the ways in which people understand and treat each other. More generally, to the extent to which we anticipate that the introduction of widespread automation will produce a Fourth Industrial Revolution, it is vital that we ask who is making this revolution, as well as who will flourish — and who will suffer — if it occurs.
Keynote address by Hank Greely. Conference Website
Abstract: After the September 11th terrorist attacks, New York City’s chief medical examiner promised that he and his staff would spare no expense in trying to identify every victim and human body part larger than a thumbnail and return them to their families. Sixteen years later, 1,641 of the 2,753 victims killed in Manhattan have been identified. In this talk, I will discuss these efforts, as well as the profound impact that human remains had on the redevelopment of the World Trade Center site and the creation of the memorial there. I will demonstrate that the forensic recovery effort cannot be understood simply on scientific grounds because it was at its heart a political and moral statement. I will examine the challenges of dealing with politically significant deaths for families of the victims, for those charged with memorializing them, and for government officials managing the recovery effort. I will also explore ongoing legal and cultural disputes about who ought to have a say in memorialization efforts and disposition of unidentified remains—or to put it another way, who owns the dead. I will conclude by arguing that the medical examiner’s promise has had profound impacts, both positive and negative, on families and the recovery effort.
Abstract: A state’s territorial right has two dimensions. There is the local dimension, which pertains to a state’s jurisdictional authority. This is the right of the state to subject individuals within its territory to its laws. The other is the international dimension, which has to do with the right of a state to a specific geographical space (within which it gets to exercise its jurisdictional authority) that other states have to acknowledge and respect. I will call this the international territorial right of states. Recent theories of territoriality mostly hold that a state’s international right flows from its jurisdictional authority. That is, these theories take it that when a state is justified in exercising authority over individuals within a given territory, other states come under an obligation to respect its exclusive claim to that territory. But I suggest that this privileging of the local dimension over the international gets the reasoning backwards. To the contrary, a state must first have an acknowledged international right to a territory before it can have an exclusive dominion in which to exert its jurisdictional right. More substantively, I argue that this international right is not a pre-institutional right, but a right that is based in international convention or institutions. That the international order is institutional in this very fundamental way has implications for global justice. Among other things, it will instigate a more cosmopolitan understanding of egalitarian justice and immigration.
Abstract: This lecture will navigate three areas of interest across civic life: (1) the Millennial citizen, (2) the space of old and new media, and (3) the character of our political discourse. From the formation of broadcasting to the emergence of social media, I will consider a blueprint for a "civic press." We’ll grapple with questions for the new (45th) U.S. president and think about fresh ways young people can frame public policy, while reflecting on the 2016 campaign and how to improve the political process.
In celebration of the 20th Anniversary of the publication of Alan Wertheimer's seminal work Exploitation, the Center for Ethics & Policy at Carnegie Mellon University is hosting its inaugural workshop on the theme of "Exploitation and Coercion". Discussion will focus on the theoretical underpinnings of moral claims about exploitation and coercion and the moral force of such claims, as well as implications for important current topics in applied ethics and policy. We are pleased to welcome Richard Arneson as our keynote speaker. Conference Website
Abstract: This paper addresses the ethical questions surrounding disclosures of experiences of sexual violence, organizing those questions into three distinct modes. First, I argue that survivors of sexual assault do not have an ethical obligation to report that assault to university or law enforcement officials, and therefore should not be ethically compelled to do so. Second, I argue that in the face of increased legal pressure to meet their legal obligations under Title IX, many US institutions of higher education have adopted mandatory reporting policies that require all faculty and staff (and in some cases, students) to report any knowledge of incidents of sexual harassment and/or assault to a university official. I argue that such mandatory reporting policies are misguided, not required by law, and detrimental to survivors. Finally, I turn my attention to the ethical (not legal) responsibilities of a person who is entrusted with a narrative of sexual assault. I argue that such confidants should not advocate for any particular course of action (either reporting or not reporting) but should assist in the intersubjective, embodied process of reconstituting a survivor’s bodily autonomy.
Sept. 9: Alan Wertheimer, Exploitation ch. 7 and excerpts from Rethinking the Ethics of Clinical Research, pp. 255-262 & 287-290
Sept. 16: Alan Wertheimer, Coercion chs. 12-13
Sept. 23: Alan Wertheimer, Coercion ch. 14
Sept. 30: Erik Malmqvist, "International Clinical Research and the Problem of Benefiting from Injustice" and "Better to Exploit than to Neglect? International Clinical Research and the Non-Worseness Claim"
Oct. 7: Powell & Zwolinski, "The Economic Case Against Sweatshop Labor: A Critical Assessment" and Joshua Preiss, "Global Labor Justice and the Limits of Economic Analysis"
Oct. 14: Richard Arneson, "Exploitation, Domination, Competitive Markets, and Unfair Division"
Abstract: A global industry of medical “tourism” has flourished in recent years. A prominent example is cross-border surrogacy arrangements, in which women who cannot carry a pregnancy travel to low-resource countries where typically poor women are paid to be gestational surrogates. In most cases an embryo that has been fertilized in vitro using the visiting woman’s egg and her partner’s sperm is implanted in the womb of the surrogate. However, surrogacy arrangements have also taken place for gay male couples as well as single women. This practice has been criticized as exploitation of poor women by wealthier couples who come from mostly industrialized countries. Yet the women who serve as gestational surrogates are paid more than they could possibly earn in other types of work. Critics argue that large sums of money “coerce” poor women into an activity that places them at some risk and involves significant inconvenience. Are women who serve as surrogates--or their families--actually better off as a result of their serving in this capacity? The ethics of cross-border surrogacy is hotly debated, with additional questions arising about the legal status of the resulting child.
Abstract: Safeguarding others' privacy is widely understood to be a responsibility of government, business, and individuals. (Apple seems to think protecting device owners’ privacy is a corporate responsibility.) Do individuals also have a moral obligation to protect their own privacy? Moreover, could protecting one's own privacy be called for by important moral virtues, as well as obligations or duties? The "virtue" of fairness and the "duty" or "obligation" of respect for persons arguably ground other-regarding responsibilities of confidentiality and data security. But is anyone ethically required (and not just prudentially advised) to protect his or her own privacy? If so, how might a requirement to protect one's own privacy and related ethical virtues properly influence everyday choices, public policy, or the law? Shop offline with cash? Don’t use an iPhone? Avoid open windows? I want to test the idea of an ethical mandate to protect one's own privacy in the world that includes the Internet of Things, while identifying the practical and philosophical problems that bear adversely on the case.
Abstract: Informed consent is a core ethical and legal requirement in medical care and clinical research, based on a principle of respect for individuals’ right to shape their goals and choices consistent with their values and interests. Yet there is disparity between the practice of informed consent and its theoretical ideal, and an under-appreciation of how informed consent varies by context. A growing body of research has documented gaps in patient and research participant understanding, and an overemphasis on the written consent document. I will present data on studies we and others have done in an effort to improve research participant understanding of study information. I will also consider the advantages and drawbacks of different proposed models of consent, such as dynamic consent, opt-out consent, and broad consent. Opportunities for strengthening the concepts and practices of informed consent will be discussed.
Abstract: Reaction to the now infamous Facebook-Cornell “mood contagion” experiment was swift and fierce. Criticism by both the public and some prominent ethicists centered on the fact that user-subjects had not consented to participate. But discussion paid scant attention to the experiment’s relationship to Facebook’s underlying practice and its risks. Prior academic studies (most of them small and observational) had suggested two contradictory hypotheses about the mental health risks of Facebook use to its 1.35 billion users: that exposure to friends’ positive posts is psychologically risky (through a social comparison mechanism) and that exposure to negative posts is psychologically risky (through an emotional contagion mechanism). The company alone was in a position to rigorously determine the effects of its product through experimental mechanisms. But the kind of explicit, fully informed consent that we normally demand that researchers (and, less often, clinicians) obtain would have badly biased the results. Not since the Tuskegee study, the 1972 revelation of which served as the primary catalyst for the current ethical and legal framework for governing human subjects research, has the public expressed so much sustained alarm over human subjects research. Moreover, Facebook’s conundrum shares many features faced by practitioners and administrators working in modern healthcare systems. The (comparative) effects on patients of many medical and healthcare delivery practices are uncertain, imperiling patient welfare and potentially squandering scarce resources. Healthcare systems are in a unique position to rigorously field test the consequences of their services, yet obtaining explicit informed consent for participation in learning activities (whether “research” or QI/QA) is often infeasible. How we frame the Facebook experiment thus has consequences for other important research. 
In this talk, I will argue that criticisms of the Facebook experiment — that the company exploited its position of power over users, treated them as mere means to corporate ends, and deprived them of information necessary for them to make a considered judgment about what was in their best interests — should be inverted: Those in control of large systems affecting numerous people (like Facebook and healthcare system administrators) may abuse their power, treat patients and users as mere means to their ends, and deprive those parties of information necessary to exercise their autonomy when they fail to collect data on the effects of their products or services, giving rise in some cases to an ethical duty to experiment, sometimes without fully informed (or any) consent.
Abstract: Clinical research that examines the safe and effective treatment of diseases, disorders, and conditions affecting children offers one of the best prospects for improving the medical treatment of children. But the inclusion of children in research raises difficult ethical questions, among them: To how much risk should we expose children who cannot provide informed consent for their research participation? Most ethicists agree that children may be exposed to some research risks purely in the interests of obtaining medical knowledge that aims to benefit future generations. But the degree of risk that should be permitted and the reasons for which it should be permitted are controversial.
Various thresholds have been proposed to constrain research risks that do not offer children the prospect of direct medical benefit. These proposals include limiting research risks to (1) the risks of routine medical examinations (CIOMS 2002; Kopelman 2004), (2) the risks of participation in charitable activities (Wendler 2010), (3) the risks of family life (Ackerman 1980; Nelson and Ross 2005), and (4) the risks-of-daily-life (Freedman, Fuks, and Weijer 1993; McMillan and Hope 2004). I examine which, if any, of these thresholds is defensible. I argue that the risks-of-daily-life threshold is defensible, but not for the reasons currently offered. I raise a problem with the current justification of the risks-of-daily-life threshold, and I propose a new justification. I argue that the risks of daily life are justifiable because they are part of a reasonable trade-off between personal safety and our ability to pursue meaningful lives.
Abstract: Clinical research conducted in low- and middle-income countries (LMICs) is playing an ever-expanding role in the research and development of new biomedical interventions. Given large disparities in wealth and access to healthcare services between higher and lower income settings, research ethicists have greater concerns about exploitation in clinical research conducted in LMICs than in research conducted in high-income settings. Moreover, there are legitimate concerns about what is called the 10/90 gap: somewhere around 90% of global research resources are devoted to research and development of interventions targeting the healthcare needs and desires of the wealthiest 10% of the global population. Put another way: only around 10% of global health research resources are devoted to addressing the health deficits affecting 90% of the world’s population. Such concerns have led multiple international bodies and domestic advisory groups to propose specific constraints on research conducted in LMIC settings, with one common recommendation being that research conducted in LMICs but externally or jointly sponsored ought to be responsive to host community health needs.
Although responsiveness is an oft-repeated ethical requirement for international research, there exists ongoing disagreement about both the content of responsiveness as well as its usefulness as a guideline governing international clinical research. In this paper, I propose a framework intended to clarify the responsiveness requirement. I begin by motivating the paper with a couple of examples and presenting some of the shortcomings of existing interpretations of responsiveness. I suggest that one helpful way of characterizing the normative content of the requirement is as a demand that the knowledge sought in clinical research be socially valuable to those populations within which, and upon whom, such research is conducted. I then borrow from decision theory a framework for the assessment of the value of information, and go on to outline how this approach can be utilized in the prospective assessment of a clinical trial’s responsiveness to host community needs. I consider what data would be necessary as inputs to fully operationalize this framework, in the process demonstrating how it avoids each of the objections raised to competing conceptions of responsiveness. Next, I discuss some of the hurdles to the full operationalization of the framework, and indicate how it can nevertheless operate as a threshold condition for the ethical permissibility of research conducted in LMICs. Finally, I conclude by briefly indicating some other issues in research ethics upon which this approach may shed light.
Abstract: Despite women’s increased labor force participation, household divisions of labor remain highly unequal, with women in every industrialized country continuing to perform the vast majority of unpaid housework and childcare. This persistent gendered division of labor is remediable. Properly implemented, “gender egalitarian” political interventions such as work time regulation, subsidized dependent care provisions, and paid family leave initiatives can induce families to share paid work, unpaid work, and leisure time more equally than they currently do. In the long run, these interventions can effectively reform the norms and institutions that currently sustain the gendered division of labor.
Gender egalitarian political interventions face a formidable justificatory hurdle, however. By subsidizing gender egalitarian lifestyles, these interventions appear to violate a basic liberal requirement for legitimacy: that political interventions be publicly defensible within the justificatory community of reasonable citizens. In order for interventions to be defensible in this way, the reasons justifying intervention must be neutral among the conceptions of the good that citizens may reasonably embrace. By this standard, interventions aimed at influencing families’ allocations of work appear illegitimate. They apparently fail to abide by the neutrality constraint on legitimate exercises of political power, because many citizens consciously enact and even celebrate gender inegalitarian domestic arrangements. Thus, the value of gender egalitarianism seems not to be a value that can be recognized as such by all reasonable citizens; it therefore cannot be invoked to justify exercises of political power like gender egalitarian interventions without violating the constraint of neutrality. Some proponents of gender egalitarian interventions have devised elegant arguments for the conclusion that these interventions can be defended without violating the constraint of neutrality, and are thus legitimate after all. My project in this paper is to critique one widely-deployed strategy for defending gender egalitarian political interventions. According to this strategy, the gendered division of labor constitutes or causes unjust distributions of goods, and gender egalitarian interventions can be neutrally justified as necessary means to remedy those injustices. Whether or not such interventions can ultimately be shown to be legitimate, I raise doubts that the strategy I consider meets this burden. But the problems that beset this strategy are illuminating, and point the way toward a more promising approach. 
In closing, I briefly sketch my own positive view regarding how gender egalitarian interventions can be defended as legitimate exercises of political power that abide fully by the constraint of neutrality.
Abstract: Although overt discrimination on the basis of race, gender, and other such categories has been in decline for decades, a large body of social scientific research has shown that people are subject to “implicit” biases of which they remain unaware, and which influence their judgment and behavior in ways they would not endorse. I argue that the problem of moral responsibility for implicit bias requires attending to the distinction between responsibility as attributability and responsibility as accountability, which license what I call “appraisal-based” and “non-appraising” responses. We are morally responsible for our actions in the attributability sense only when they reflect the practical identities that define us as moral agents, while we are responsible in the accountability sense when it is appropriate for others to enforce certain expectations and demands on those actions. I contend that we may sometimes lack attributability for actions caused by implicit bias, but that even in those cases we are still accountable for them. Hence, we should eschew responses such as blame and punishment in favor of non-appraising responses that assign burdens for dealing with the consequences of the action without any assessment of the person. I provide further moral-theoretical and psychological grounds for the distinction between appraisal-based and non-appraising responses, arguing that the latter not only does greater justice to our moral experience and agency, but will also be more practically effective in bringing about positive change.
Weekly meetings to discuss Ken Binmore's seminal work in evolutionary game theory and ethics, Natural Justice.
Sept. 9 - 30: Henry Richardson's Moral Entanglements: The Ancillary Care Obligations of Medical Researchers
Oct. 14 - Nov. 4: Debra Satz's Why Some Things Should Not Be For Sale
Nov. 11: Nicholas Vrousalis, "Exploitation, Vulnerability, and Social Domination"
Nov. 25: Jeremy Snyder, "Needs Exploitation"
Dec. 2: David Sobel, "The Impotence of the Demandingness Objection"
Abstract: Both empirical data and philosophical considerations suggest that brain scans used as evidence in the courtroom may be biasing or misleading. However, recent studies suggest this view is mistaken. In this talk I explain the reasons for the expectation that neuroimages may be misleading, and review the studies that contradict it. I offer an explanation for the totality of the seemingly contradictory evidence, and argue that this has implications for the admissibility of neuroimaging in the courtroom.
Abstract: In this paper I set out an argument, invoking human rights, in defense of the duties to mitigate and provide adaptation to climate change. I look at five challenges to the human rights argument, three of which have been pressed in the literature on conceptual grounds, and two of which I develop on normative grounds. I present what I think are satisfactory responses to the three conceptual challenges but I argue that the normative challenges are more compelling. The human rights argument does not help us to understand well our duties to future generations to mitigate and provide adaptation for climate change. The problems with the human rights argument suggest that a more promising approach is to understand these duties as matters of intergenerational distributive justice.
Abstract: The ex ante Pareto principle requires that if a first alternative has greater expected value for each person than a second alternative, the first alternative ought to be preferred. We examine cases in which a first alternative has greater expected value for each person, but we know that under this alternative one person will, ex post, end up worse off than others. We argue that, in such cases, the ex ante Pareto principle is of doubtful validity, because it relies on incomplete information about what is in the interests of each person. We argue that, whenever possible, it is better to rank alternatives as we *would* rank them if we had full information about how individuals will be affected.
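The tension the abstract describes can be made concrete with a small numerical sketch. All payoffs below are hypothetical, chosen only to illustrate the structure of such cases; they are not drawn from the talk:

```python
# Two people, two alternatives (all numbers are illustrative assumptions).
# Alternative A: a fair coin decides who gets the good outcome.
# Alternative B: a guaranteed equal outcome for both.
outcomes_A = [  # (probability, payoff_person1, payoff_person2)
    (0.5, 10, 2),
    (0.5, 2, 10),
]
outcomes_B = [(1.0, 5, 5)]

def expected_values(lottery):
    """Per-person expected value of a lottery over (p, u1, u2) outcomes."""
    ev1 = sum(p * u1 for p, u1, _ in lottery)
    ev2 = sum(p * u2 for p, _, u2 in lottery)
    return ev1, ev2

ev_A = expected_values(outcomes_A)  # (6.0, 6.0)
ev_B = expected_values(outcomes_B)  # (5.0, 5.0)

# Ex ante, A gives each person a higher expected value (6 > 5),
# so the ex ante Pareto principle says to prefer A ...
assert all(a > b for a, b in zip(ev_A, ev_B))

# ... yet under A we know in advance that, ex post, one person
# will end up with 2 while the other gets 10 — someone is certain
# to be worse off than under B's guaranteed 5.
worst_payoff_A = min(min(u1, u2) for _, u1, u2 in outcomes_A)
```

In every realization of A, the worse-off person's payoff (2) falls below B's guaranteed payoff (5), which is exactly the kind of foreseeable ex post inequality the paper uses to question the principle.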