Legal authorities adopted the reasonable person standard to determine whether defendants were negligent. During the 1970s, judges began using the standard to evaluate negligence claims brought by injured patients who alleged that doctors had failed to obtain informed consent to the procedures that harmed them. Judges declared that the traditional standard for disclosure (what a reasonable medical professional would disclose) was insufficiently respectful of the patient’s right to decide. Instead, professionals should disclose what reasonable patients would need and want to know about the options. The revised Common Rule adopts the reasonable person standard to guide research disclosure.
Some members of the research community contend that the standard is confusing and ill-suited to the research oversight system. But the revised rule is not as radical as it might seem. In its influential Belmont Report, the National Commission recommended application of a “reasonable volunteer standard” to guide IRBs evaluating research disclosures. Evidence also suggests that IRBs often invoke the reasonable person standard in deliberations about consent forms. Past application of the standard, however, has been informal and uneven.
Robust application of the reasonable person standard will require researchers and IRBs to learn more about what ordinary people want and need to know about the studies they are invited to join.
The US Department of Defense has, for at least 20 years, held the stated intention to enhance active military personnel (“warfighters”); this intention has become more acute in the face of declining recruitment, an aging fighting force, and emerging strategic challenges. However, developing and testing enhancements is clouded by their ethically contested status, the long history of abuse by military medical researchers, and new legislation, in the guise of “health security,” that has enabled the Department of Defense to apply medical interventions without appropriate oversight.
The aim of this paper is to reconcile existing legal and regulatory frameworks on military biomedical research with ethical concerns about military enhancements. In what follows, I first outline the justification for military enhancements. I then briefly address definitional issues over what constitutes enhancement, before turning to the existing research ethics regulations governing military biomedical research.
I then argue that the two common justifications for rapid military innovation in science and technology, including enhancement, fail. These justifications are a) satisfying a compelling military need, and b) achieving strategic dominance. I then consider an objection that turns on the idea that we need not have these justifications if warfighters are willing to adopt enhancement, and argue that laissez-faire approaches to enhancement fail in the context of the military due to pressing and historically significant concerns about coercion and exploitation. I conclude with what I refer to as the “least worst” justification: given the rise of untested enhancements in civilian and military life, we have good reason to validate potential enhancements, even if they do not satisfy justifications a) or b) above.
The reigning paradigm of rational drug discovery in medical research attempts to exploit biological theories and pathophysiological explanations to identify novel drug targets and therapeutic strategies. Given that there are limited human and material resources available for testing experimental therapeutics, this theory- and explanation-driven strategy of drug development seems to make good sense: it narrows the number of plausible drug candidates to be put through rigorous and expensive testing, potentially improves the success rate of clinical translation, and provides some theoretical basis for minimizing risks to patient-subjects. Yet because many theories in medicine are either incomplete (at best) or false (at worst), relying on theoretical explanations can have some puzzling and troubling consequences. For example, new drugs may be vetted in clinical trials and achieve regulatory approval despite a faulty explanation for why they are effective. If physicians rely heavily on this explanation to make treatment decisions, it can lead to systematic misdiagnoses and patient harm.
Commitment to a faulty explanation or theory can also lead to excessive risks and harms for research subjects. In an empirically driven research program, which places little or no weight on the explanation for an intervention’s effectiveness or lack thereof, successive negative results are typically sufficient to cancel the program: there is simply no rationale for conducting trials other than promising empirical results. But in a theoretically driven research program, underdetermination may shield the driving biomedical explanation and rationale, and this can result in wasted research resources and avoidable burdens on research subjects.
In what follows, I will argue that these problematic features of biomedical explanations can be largely resolved by re-conceptualizing the epistemology of rational drug discovery in terms of heuristics. That is, instead of treating biomedical explanations as true or false (or reliable or faulty), we should think of them as (clusters of) simplifying assumptions, which are useful and reliable within a limited domain. This means that the task of research systems is not to verify or falsify an explanation (or the explanatory theory), but rather to define the initial boundaries around the heuristic’s domain of utility.
Research ethics and oversight presume that all relevant ethical issues in research involving human participants can be identified and addressed by careful review of individual study protocols or their components. This presumption is false. In this paper we introduce the concept of the trial portfolio, a series of trials interrelated by their hypotheses. We demonstrate that trial portfolios represent a distinct unit of knowledge production, and that decisions affecting their composition, coordination, and expansion play a critical role in determining whether unnecessary risk has been eliminated from research and whether remaining risks are justified by a reasonable expectation of direct medical benefit to participants or by the expected social value of the information that emerges from that set of investigations. Together, these considerations also affect the fairness with which the costs and burdens of medical uncertainty are distributed across health care and research systems. If research ethics and oversight are to effectively discharge their mission of advancing science, protecting trial participants, and ensuring an equitable distribution of research-related risks and benefits, they will have to develop mechanisms for addressing the ethical and scientific issues that emerge from the way trial portfolios are constructed.
In a number of different ways, ethics in research involving humans appears to be eroding. The evidence is scattered, the examples diverse, and the players varied, but taken together these developments are cause for concern. Chief among them is an erosion of the bedrock requirements for obtaining informed consent from research subjects. A separate but related development is the denial that individuals involved in some way in what is obviously research ought to be considered research subjects. Still another is investigators’ and IRBs’ insistence that some studies involve only minimal risks, a contention open to serious question. Ethical quandaries have also arisen from the introduction of alternative research designs, which are not in themselves unethical but nevertheless create difficulties in obtaining informed consent and disputes over who the subjects are. Examples exist in which research purported to involve usual-care interventions turned out to involve at least one unusual-care arm, thereby subjecting participants to greater risks than were alleged by investigators, assessed by IRBs, and described in the consent documents. In response to any one of these developments, skeptics are wont to ask: how widespread is this? The reply is to acknowledge that although each individual example on its own may not be cause for concern, together these phenomena give reason to worry about ethics in today’s research landscape.
Research approaches that integrate learning into ongoing clinical activities offer the potential to accelerate knowledge generation aimed at improving the health of individuals and populations. Yet integrating research and clinical activities raises difficult ethical and regulatory challenges, including whether and what form of consent is needed, and who should solicit that consent. In recent years, a series of empirical studies have explored these issues; however, questions remain about the appropriate role of these data. In this manuscript, I examine how these empirical data might inform normative and policy reflection on approaches to consent and disclosure for these new research designs. I propose that streamlined approaches with verbal consent might be widely acceptable to prospective patient-participants for some activities that integrate research and care, and that the reticence toward involving treating physicians in consent for research may not be merited. I conclude with suggestions for additional empirical and conceptual work to guide policy decisions regarding research-care integration.
The creation of guidelines has long been a popular means of approaching research ethics. Yet despite proliferating codes of ethics for fields ranging from research with the recently dead to artificial intelligence research, it is not always clear exactly what these guidelines are meant to do. Paradigmatic cases suggest three roles for ethical guidelines: an analytical framework for resolving ethical problems, a group of principles to aid deliberation, and a set of prescriptive expectations. Yet these guidelines may do exactly the opposite of what they are intended to do: rather than promoting successful moral reasoning for ethical research practices, they rationalize problematic ones. This article highlights the details that make the difference between justified moral reasoning in research and rationalization of unethical research, arguing that guidelines may do more harm than good by providing reasons for rationalization without simultaneously developing the capacity for reflection on the distortive effects of researchers’ internal desires and external motivations. It then offers several alternative means of supporting ethical research practices.
Cluster randomized trials (CRTs) are increasingly used in health research to evaluate public health, knowledge translation, and health services interventions. Because of features of their design, CRTs pose challenges to the interpretation of standard research ethics guidelines and regulations. As a result, researchers and research ethics committees need clear guidance on the ethics of CRTs. In 2012, members of our research group published the first international ethical guidance for CRTs, the Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials (Weijer et al., 2012). The Ottawa Statement has been widely cited and has influenced policy in the United Kingdom, the United States of America, and internationally (SACHRP, 2014; CIOMS, 2016).
Notwithstanding the impact of the Ottawa Statement, debate continues about key ethical concepts and appropriate guidance (Macklin, 2014; McRae et al., 2016; Macklin, 2016). In the pages of the Journal of Clinical Epidemiology, van der Graaf and colleagues reflect on the Ottawa Statement and propose three revisions (van der Graaf et al., 2015). First, patients ought to be considered research participants when “they are indirectly affected as the result of an intervention targeted at a [health provider].” Second, health providers have a “different moral status than ordinary research participants, which implies a higher threshold for withdrawal.” Third, aspects of the CRT, including randomization, should not be revealed in the consent process when “disclosure of the randomization process would affect the validity of [the] CRT.” In this commentary, we respond to each of these proposals.
It has long been taken for granted that clinical research with human subjects is ethical only if it can produce socially valuable knowledge. Recently, this social value requirement (SVR) has come under scrutiny, with prominent ethicists arguing that the SVR cannot be substantiated as an ethical limit on clinical research and others attempting to offer new support for it.
I argue that both criticisms and existing defenses of the SVR are predicated on what I call the “transactional model of stakeholder obligations.” I problematize this framework and go on to defend an alternative that I call the basic structure model. The basic structure model is grounded in the claim that clinical research plays a direct role in establishing the justice or injustice of our social organization, and thus ought to be governed more explicitly by justice-based considerations. I then show how this model provides a more stable foundation for the SVR.