Our book approaches medical decision making primarily from the standpoint of the community of clinicians, behavioral scientists, and theorists who focus on the question of “how should decisions be made in order to provide the patient with the greatest health benefit?” Another group of thinkers, including clinicians, philosophers, lawyers, and patient advocates, has asked an equally important question: “how should decisions be made in order to preserve the ethical values that mean most to us as individuals and societies?”
While decision scientists have traditionally emphasized utilitarian approaches to decision making, bioethicists have questioned whether one should focus on the consequences of decisions or on their intrinsic features (as in deontological approaches to ethics). They have also asked such questions as whether morality is a feature of actions or a set of virtues, how moral claims should be justified, how to balance contextually specific decisions with universal principles, and how health care ethics relates to ethics in other endeavors (Khushf 2004).
Like decision scientists, ethicists, particularly those who recognize multiple ethical principles, virtues, or rights, regularly consider situations in which all of their goals cannot be fully met. Decision science approaches this problem through prioritization and trade-off of goals. Ethics is more likely to use a process of specification, in which each principle is made more contextual and concrete in order to provide guidance for the particular decision in question (Beauchamp and DeGrazia 2004). Indeed, some philosophers advocate case-based approaches to bioethics, just as the judicial system in most Western nations interprets the law through the analysis of cases and the development of precedent (Boyle 2004).
Although it would be natural to expect these two perspectives to converge and these two groups of theorists to share ideas, historically these questions have been answered through entirely separate academic endeavors, and have resulted in separate communities of scholarship (Stiggelbout, Elstein et al. 2006).
Two recent studies have compared the thinking of decision scientists and bioethicists. In a survey by Ubel et al. (1996), 568 prospective jurors, 74 members of the American Association of Bioethics, and 73 members of the Society for Medical Decision Making were asked to choose between providing a more effective screening program to half a population (resulting in a greater number of deaths prevented overall) or a less effective program to the complete population (resulting in fewer deaths prevented overall). Decision scientists were less likely than others to favor the less effective (but arguably more equitable) program.[i]
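The utilitarian arithmetic behind this choice can be made concrete with a small sketch. The population size and effectiveness figures below are invented for illustration only; they are not taken from the Ubel study.

```python
# Hypothetical illustration of the screening trade-off surveyed by Ubel et al.
# All numbers are invented for the sketch, not drawn from the study.

population = 10_000

# Program A: more effective screening, offered to only half the population.
covered_a = population // 2
effectiveness_a = 0.010  # assumed deaths prevented per person screened
prevented_a = covered_a * effectiveness_a

# Program B: less effective screening, offered to the entire population.
covered_b = population
effectiveness_b = 0.004  # assumed deaths prevented per person screened
prevented_b = covered_b * effectiveness_b

print(f"Program A (half covered): {prevented_a:.0f} deaths prevented")
print(f"Program B (all covered):  {prevented_b:.0f} deaths prevented")
```

Under these assumed figures, Program A prevents more deaths overall (50 versus 40) even though it leaves half the population unscreened; a strictly outcome-focused analysis favors A, while an equity-focused analysis may still favor B.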
A more extensive study by Stiggelbout et al. (2006) surveyed 327 members of the American Society for Bioethics and Humanities and 77 members of the Society for Medical Decision Making, presenting each participant with two clinical cases, one concerning refusal of a beneficial treatment, and one concerning surrogate decision making about futile care at the end of life. The authors found substantial similarities in the opinions of the two groups, although decision scientists were more likely to emphasize outcomes and trade-offs and ethicists were more likely to emphasize patient autonomy. Notably, both groups called for greater exploration of patient goals and values, an ongoing theme in this book.
As the Stiggelbout study suggests, the concerns of bioethics and decision science are often in alignment; the ethical principle of autonomy, for example, is often naturally reflected in the importance of properly eliciting and considering patient preferences in decision analysis. Bioethical principles can also be naturally incorporated into decision making as personal or societal constraints on possible choice alternatives: in a society that espouses the natural right of people to be free from becoming unwilling subjects of medical procedures, a decision alternative that results in forcing a family member to undergo genetic screening might be simply unacceptable.