Viewing: Blog Posts Tagged with: Medical Ethics, Most Recent at Top
Results 1 - 16 of 16
1. Uterus transplants: challenges and potential

The birth of a healthy child in Sweden in October 2014 after a uterus transplant from a living donor marked the advent of a new technique to help women with absent or non-functional uteruses to bear genetic offspring. The Cleveland Clinic has now led American doctors into this space, performing the first US uterine transplant in February 2016.

2. Why we need “mystery shoppers” directly observing health care

Considering the well-documented problems of medical error, it’s remarkable that health care itself is rarely directly observed. Of course there is much scrutiny of the data that is generated during the health care encounter, but that is not the same thing. For instance, while quality measures track data on how well blood pressure is managed, there are no measures of whether blood pressure is actually measured accurately.

3. The traumatising language of risk in mental health nursing

Despite progress in the care and treatment of mental health problems, violence directed at self or others remains high in many parts of the world. Consequently, there is increasing attention to risk assessment in mental health. But is this doing more harm than good?

4. Most powerful lesson from Ebola: We do not learn our lessons

‘Ebola is a wake-up call.’ This is a common sentiment expressed by those who have reflected on the ongoing Ebola outbreak in West Africa. It is a reaction to the nearly 30,000 cases and over 11,000 deaths that have occurred since the first cases of the outbreak were reported in March 2014.

5. Are drug companies experimenting on us too much?

For years, my cholesterol level remained high, regardless of what I ate. I gave up all butter, cheese, red meat, and fried food. But every time I visited my doctor, he still shook his head sadly, as he looked at my lab results. Then, anti-cholesterol medications became available, and I started one.

6. Ethics at the chocolate factory

Two women are being trained for work on a factory assembly line. As products arrive on a conveyor belt, their task is to wrap each product and place it back on the belt. Their supervisor warns them that failing to wrap even one product is a firing offense, but once they get started, the work seems easy.

7. Facing the challenges of palliative care: continuity

The last two decades have witnessed truly remarkable growth in the field of palliative care. Such growth is challenging, and brings both uncertainties and optimism about the future. In this three-part blog, we’ll take a look at some of the complex issues of continuity, development and evolution in palliative medicine.

8. Adderall and desperation

“Butler Library smells like Adderall and desperation.”

That note from a blogger at Columbia University isn’t exactly scientific. But it speaks to the atmosphere that settles in around exam time here, and at other competitive universities. For some portion of the students whose exams I’m grading this week, study drugs, stimulants, and cognitive enhancement are as much a part of finals as all-nighters and bluebooks. Exactly how many completed exams are coming to me via Adderall or Provigil is impossible to pin down. But we do know that studies have found past-year, nonprescribed stimulant use rates as high as 35% among students. We know, according to HHS, that full-time students use nonprescribed Adderall at twice the rate of non-students. We can suspect, too, that academics aren’t so different in this regard from their students. In an unscientific poll, 20% of the readers of Nature acknowledged off-label use of cognitive enhancement drugs (CEDs).

If this sounds like the windup to a drug-panic piece, it’s not. The use of cognitive enhancement drugs concerns me much less than the silence surrounding their use. At universities like Columbia, cognitive enhancement exists in something of an ethical gray zone: technically against rules that are mostly unenforced; an open conversation topic among students in the library at 2 a.m., but a blank spot in “official” academic culture. That blank in itself is worth our concern. CEDs aren’t going away–but more openness about their use could teach us something valuable about the kind of work we do here, and anywhere else focus-boosting pills are popped.

In fact, much of the anti-cognitive enhancement drug literature dwells on the ethics of work, on the question of how much credit we can and should take for our “enhanced” accomplishments. (In focusing on these arguments, I’m setting to one side any health concerns raised by off-label drug use. I’m doing that not because those concerns are unimportant, but because the most challenging bioethics writing on the topic is less about one drug or another than about the promises and limits of cognitive enhancement in general–up to and including drugs that haven’t been invented yet.) In Beyond Therapy, the influential 2003 report on enhancement technologies from the President’s Council on Bioethics, the central argument against CED use had to do with the kind of work we can honestly claim as our own: “The attainment of [excellence] by means of drugs…looks to many people (including some Members of this Council) to be ‘cheating’ or ‘cheap.’” Work done under the influence of CEDs “seems less real, less one’s own, less worthy of our admiration.”

Is that a persuasive argument for keeping cognitive enhancement drug use in the closet, or even for taking stronger steps to ban it on campus? I’m not so sure it is. This kind of anti-enhancement case rests on an assumption about authorship, which I call the individual view. It claims that the dignity and authenticity of our accomplishments lie largely in our ability to claim individual credit for our work. In a word, it’s producer-focused, not product-focused.

That’s a reasonable way to think about authorship–but much of the weight of the anti-cognitive enhancement drug case rests on the presumption that it’s the only way to think about authorship. In fact, there’s another view that’s just as viable: call it the collaborative view. It’s an impersonal way of seeing accomplishment; it’s a product-focused view; it’s less concerned with allocating ownership of our accomplishments and it’s less likely to emphasize originality as the most important mark of quality. It is founded on the understanding that all work, even the most seemingly original, is subject to influences and takes place in a social context.

You can’t tell the history of accomplishments in the arts and sciences without considering those who thought about their work in this way. We can see it in the “thefts” of content that led passages from Plutarch, via Shakespeare, to T.S. Eliot’s poetry, or in the constant musical borrowing that shapes jazz or blues or classical music. We can see it in the medieval architects and writers who, as C.S. Lewis observed, practiced a kind of “shared authorship,” layering changes one on top of the other until they produced cathedrals or manuscripts that are the product of dozens of anonymous hands. We can see it again in the words of writers like Mark Twain, who forcefully argued that “substantially all ideas are second hand,” or Eliot, who advised critics that “to divert interest from the poet to the poetry is a laudable aim.” We can even see it in the history of our language. Consider the evolution of words like genius (from the classical idea of a guardian spirit, to a special ability, to a talented person himself or herself), invent (from a literal meaning of “to find” to a secondary meaning of “to create”), and talent (from a valuable coin to an internal gift). As Owen Barfield has argued, these changes are marks of the way our understanding of accomplishment has become “internalized.” Where earlier writers tended to imagine inspiration as a process that happens from without, we’re more likely to see it as something that happens from within.

The collaborative view is valuable even for those of us who aren’t, say, producing historically great art. It might relieve us of the anxiety that the work we produce is a commentary on our personal worth. It’s well-tailored to the creative borrowing and sampling that define the “remix culture” celebrated by writers like Lawrence Lessig. And it is, I think, a tonic against the kind of “callous meritocracy” that John Rawls cogently warned us about.

Female college student stressed and overwhelmed and trying to study at the school library. © Antonio_Diaz via iStock.

That’s not to suggest that the collaborative view is the one true perspective on accomplishment. I’d call it one of a range of possible emphases that have struggled or prospered with the times. But if that’s the case, then we’re free to think more critically about the view of work we want to emphasize at any given time.

What does any of this have to do with cognitive enhancement? The collaborative view I’ve outlined and a culture of open cognitive enhancement share some important links. It’s certainly not true that one has to use CEDs to take that view, but there are strong reasons why an honest and thoughtful CED user ought to do so.

Consider the case of a journalist like David Plotz, who kept a running diary of his two-day experiment with Provigil: “Today I am the picture of vivacity. I am working about twice as fast as usual. I have a desperate urge to write…. These have been the two most productive days I’ve had in years.”

How might such a writer account for the boost in his performance? Would he chalk it up to his inherent skill or effort, or to the temporary influence of a drug? If someone singled out his enhanced work for praise, would he be right in taking all the credit for himself and leaving none for the enhancement?

I don’t think he would be. There is a dishonesty in failing to acknowledge the enhancement, because that failure willingly creates a false assumption: it allows us to believe that the marginal improvement in performance reflects on the writer’s efforts, growing skill, or some other personal quality, when the truth seems to be otherwise. In other words, I don’t think enhancement is dishonest in itself–it’s failing to acknowledge enhancement that’s dishonest.

There’s nothing objectionable in collaborative work, forthrightly acknowledged. When we take an impersonal view of our work, we share credit and openly recognize our influences. And we can take a similar attitude to work done under the influence of cognitive enhancement drugs. When we speak of creative influences and working “under the influence” of CEDs, I think we’re exposing a similarity that runs deeper than a pun. Of course, one does not literally “collaborate” with a drug. But whether we acknowledge influences that shape our work or acknowledge the influence of a drug that helped us accomplish that work by improving our performance, we are forgoing full, personal credit. We are directing observers toward the quality of the work, rather than toward what the work may say about our personal qualities. We are, in a sense, making less of a “property claim” on the work. Given the history of innovators who willingly made this more modest claim, and given the benefits of the collaborative view that I’ve discussed, I don’t think that’s such bad news.

But could a culture of open cognitive enhancement drug use really one day change the way we think about work? There are no guarantees, to be sure. When I read first-person accounts of CED use, I’m struck by the way users perceive fast, temporary, and often surprising gains in focus, processing speed, and articulateness. With that strong subjective experience comes the experience of leaving, and returning to, an “unenhanced” state. The contrast seems visceral and difficult to overlook; the marginal gains in performance seem especially difficult to take credit for. The subjective experience of CED use looks like short-term growth in our abilities, arising from an external source, to which we cannot permanently lay claim. For just that reason, I have trouble agreeing with those, like Michael Sandel, who associate cognitive enhancement with “hubris.” Why not humility instead? Of course, I don’t claim that CEDs will inspire the same reflections in all of their users. It’s certainly possible to be unreflective about the implications of CED use. I only argue that it’s a little harder to be unreflective.

But that reflectiveness, in turn, requires openness about the enhancement already going on. As long as students fear job-market ramifications for talking on the record about their cognitive enhancement drug use, I wouldn’t nominate them as martyrs to the cause. But why not start with professors and academics–with, say, those 20% of respondents to the Nature poll? What’s tenure for anyway?

We simply can’t separate enhancement, of any kind, from the ends we ask of it and the work we do with it. So I sympathize with the New Yorker’s Margaret Talbot when she writes that “every era, it seems, has its own defining drug. Neuroenhancers are perfectly suited for the anxiety of white-collar competition in a floundering economy…. They facilitate a pinched, unromantic, grindingly efficient form of productivity.” Yet that’s giving the drug too much credit. I’d look instead to the culture that surrounds it. Our culture of cognitive enhancement is furtive, embarrassed, dedicated to one-upping one another on exams or on the tenure track. But a healthier culture of enhancement is conceivable, and it begins with a greater measure of honesty. Adderall and desperation don’t have to be synonymous, but as long as they are, I’d blame the desperation, not the drug.

9. Traveling patients, traveling disease: Ebola is just the tip of the iceberg

Many in the media and academia (myself included) have been discussing the Ebola crisis, and more specifically, the issues that arise as Ebola has traveled with infected patients and health care workers to the United States and infected other US citizens.

These discussions have been fascinating and frightening, but the terrifying truth is that Ebola is just the tip of the iceberg. Diseases have long traveled with patients, and as the phenomena of medical tourism and the more general globalization of health care grow, these problems are likely to grow as well.

Medical tourists are very good targets of opportunity for pathogens. Many are traveling with compromised or suppressed immune systems for treatment in destination countries with relatively high infection rates, where they risk exposure to multi-drug-resistant pathogens.

Doctors typically distinguish commensals—the bugs we normally carry on our skin, mouth, digestive tracts, etc.—from pathogens, the harmful bacteria that cause disease through infection. But what is commensal for a person in India might be an exotic pathogen for a US population. Medical tourist patients transport their commensals and pathogens to the hospital environments of the destination countries to which they travel, and are exposed to the commensals and pathogens of the hospitals and the population at large in the destination country. These transmissions tax the health care system and the knowledge of physicians in the home country, to whom the new microbe may be unknown, making diagnosis and treatment more difficult.

Air travel can involve each of the four classical modes of disease transmission: contact (e.g. body-to-body or touching an armrest), common vehicle (e.g. via food or water), vector (e.g. via insects or vermin), and airborne (although more recent planes are equipped with high efficiency particulate air (HEPA) filters reducing transmission risk, older planes are not).

We have seen several diseases travel in this way. During the Severe Acute Respiratory Syndrome (SARS) outbreak of 2003, a three-hour flight from Hong Kong to Beijing carrying one SARS-infected passenger led to sixteen other passengers subsequently being confirmed as cases of SARS, eight of whom had been sitting in the three rows in front of that passenger.

In January 2008, a new type of enzyme was detected in bacteria found in a fifty-nine-year-old man with a urinary tract infection being treated in Sweden. The man, Swedish but of Indian origin, had in the previous month undergone surgeries at two hospitals in India. The enzyme, labeled “New Delhi metallo-beta-lactamase-1” (NDM-1), was able to disarm a wide range of antibiotics, including one that was the last line of defense against common respiratory and urinary tract infections.

In 2009, a study found that twenty-nine UK patients had tested positive for bacteria carrying NDM-1 and that seventeen of the twenty-nine (about 60%) had traveled to India or Pakistan in the year before. A majority of those seventeen had received medical treatment while abroad in those countries, some for accidents or illness while traveling and others for medical tourism, either for kidney and bone marrow transplants or for cosmetic surgery.

High-income countries face significant problems with these infections. A 2002 study estimated that 1.7 million patients (ninety-nine thousand of whom died as a result) developed health care-acquired infections in the United States that year. In Europe these infections have been estimated to cause thirty-seven thousand deaths a year and add US $9.4 billion in direct costs.

What can be done? Although in theory airline or national travel rules can prevent infected patients from boarding planes, detecting these infections in passengers is very difficult for the airline or immigration officials, and concerns about privacy of patients may chill some interventions. A 2007 case of a man who flew from the United States to Europe with extensively resistant tuberculosis and who ultimately circumvented authorities who tried to stop him on return by flying to Montreal, Canada and renting a car, shows some of the limits on these restrictions.

Part of the solution is technological. The HEPA filters discussed above on newer model planes reduce the risk substantially, and we can hope for more breakthroughs.

Part of the solution is better regulating the use of antibiotics: overuse of antibiotics when not effective or necessary, underuse of antibiotics when they are needed, failure to complete a full course of antibiotics, counterfeit drugs, and excessive antibiotic use in food animals. This is not a magic bullet, however, and we see problems even in countries with prescription systems such as the United States.

We also need much better transparency and reaction time. Some countries reacted quickly to the report of the NDM-1 cases discussed above in issuing travel warnings and informing home country physicians, while others did not.

Finally, as became evident with Ebola, we need better protocols in place to screen returning medical tourism patients and to engage in infection control when needed.

Headline image credit: Ebola virus virion by CDC microbiologist Cynthia Goldsmith. Public domain via Wikimedia Commons.

10. Eleanor Roosevelt’s last days

When Eleanor Roosevelt died on this day (7 November) in 1962, she was widely regarded as “the greatest woman in the world.” Not only was she the longest-tenured First Lady of the United States, but also a teacher, author, journalist, diplomat, and talk-show host. She became a major participant in the intense debates over civil rights, economic justice, multiculturalism, and human rights that remain central to policymaking today. As her husband’s most visible surrogate and collaborator, she became the surviving partner who carried their progressive reform agenda deep into the post-war era, helping millions of needy Americans gain a foothold in the middle class, dismantling Jim Crow laws in the South, and transforming the United States from an isolationist into an internationalist power. In spite of her celebrity, or more likely because of it, she had to endure a prolonged period of intense suffering and humiliation before dying, due in large part to her end-of-life care.

Roosevelt’s terminal agonies began in April 1960 when, at 75 years of age, she consulted her personal physician, David Gurewitsch, for increasing fatigue. On detecting mild anemia and an abnormal bone marrow, he diagnosed “aplastic anemia” and warned Roosevelt that transfusions could bring temporary relief, but that sooner or later her marrow would break down completely and internal hemorrhaging would result. Roosevelt responded simply that she was “too busy to be sick.”

For a variety of arcane reasons, Roosevelt’s hematological disorder would be given a different name today – myelodysplastic disorder – and most likely treated with a bone marrow transplant. Unfortunately, in 1962 there was no effective treatment for Roosevelt’s hematologic disorder, and over the ensuing two years, Gurewitsch’s grim prognosis proved correct. Though she entered Columbia-Presbyterian Hospital in New York City repeatedly for tests and treatments, her “aplastic anemia” progressively worsened. Premarin produced only vaginal bleeding necessitating dilatation and curettage; transfusions brought temporary relief of her fatigue, but at the expense of severe bouts of chills and fever. Repeated courses of prednisone produced only the complications of a weakened immune system. By September 1962, deathly pale, covered with bruises and passing tarry stools, Roosevelt begged Gurewitsch in vain to let her die. She began spitting out pills or hiding them under her tongue, refused further tests, and demanded to go home. Eight days after she left the hospital, the TB bacillus was cultured from her bone marrow.

Eleanor Roosevelt with grandchildren Buzzie and Sistie Dall. Harris & Ewing, photographer, 1934. Public domain via Library of Congress.

Gurewitsch was elated. The new finding, he proclaimed, had increased Roosevelt’s chances of survival “by 5000%.” Roosevelt’s family, however, was unimpressed and insisted that their mother’s suffering had gone on long enough. Undeterred, Gurewitsch doubled the dose of TB medications, gave additional transfusions, and ordered tracheal suctioning and a urinary catheter inserted.

In spite of these measures, Roosevelt’s condition continued to deteriorate. Late in the afternoon of 7 November 1962 she ceased breathing. Attempts at closed chest resuscitation with mouth-to-mouth breathing and intra-cardiac adrenalin were unsuccessful.

Years later, when reflecting upon these events, Gurewitsch opined: “He had not done well by [Roosevelt] toward the end. She had told him that if her illness flared up again and fatally that she did not want to linger on and expected him to save her from the protracted, helpless, dragging out of suffering. But he could not do it.” He said, “When the time came, his duty as a doctor prevented him.”

The ethical standards of morally optimal care for the dying that we hold dear today had not yet been articulated when Roosevelt became ill and died. Most of them were violated (albeit unknowingly) by Roosevelt’s physicians in their desperate efforts to halt the progression of her hematological disorder: that of non-maleficence (i.e., avoiding harm), by pushing prednisone after it was having no apparent therapeutic effect; that of beneficence (i.e., limiting interventions to those that are beneficial), by performing cardiopulmonary resuscitation in the absence of any reasonable prospect of a favorable outcome; and that of futility (avoiding futile interventions), by continuing transfusions, performing tracheal suctioning, and (some might even argue) beginning anti-tuberculosis therapy after it was clear that Roosevelt’s condition was terminal.

Roosevelt’s physicians also unknowingly violated the principle of respect for persons, by ignoring her repeated pleas to discontinue treatment. However, physician-patient relationships were more paternalistic then, and in 1962 many, if not most, physicians likely would have done as Gurewitsch did, believing as he did that their “duty as doctors” compelled them to preserve life at all cost.

Current bioethical concepts and attitudes would dictate a different, presumably more humane, end-of-life care for Eleanor Roosevelt from that received under the direction of Dr. David Gurewitsch. While arguments can be made about whether any ethical principles are timeless, Gurewitsch’s own retrospective angst over his treatment of Roosevelt, coupled with ancient precedents proscribing futile and/or maleficent interventions, and an already growing awareness of the importance of respect for patients’ wishes in the early part of the 20th century, suggest that even by 1962 standards, Roosevelt’s end-of-life care was misguided. Nevertheless, in criticizing Gurewitsch for his failure “to save [Roosevelt] from the protracted, helpless, dragging out of suffering,” one has to wonder if and when a present-day personal physician of a patient as prominent as Roosevelt would have the fortitude to inform her that nothing more can be done to halt the progression of the disorder that is slowly carrying her to her grave. One wonders further if and when that same personal physician would have the fortitude to inform a deeply concerned public that no further treatment will be given, because in his professional opinion, his famous patient’s condition is terminal and further interventions will only prolong her suffering.

Evidence that recent changes in the bioethics of dying have had an impact on the end-of-life care of famous patients is mixed. Former President Richard Nixon and another famous former First Lady, Jacqueline Kennedy Onassis, both had living wills and died peacefully after forgoing potentially life-prolonging interventions. The deaths of Nelson Mandela and Ariel Sharon were different. Though 95 years of age and clearly over-mastered by a severe lung infection as early as June 2013, Mandela was maintained on life support in a vegetative state for another six months before finally dying in December of that year. Sharon’s dying was even more protracted, thanks to the aggressive end-of-life care provided by Israeli physicians. After a massive hemorrhagic stroke destroyed his cognitive abilities in 2006, a series of surgeries and on-going medical care kept Sharon alive until renal failure finally ended his suffering in January 2014. Thus, although bioethical concepts and attitudes regarding end-of-life care have undergone radical changes since 1962, these contrasting cases suggest that those caring for world leaders at the end of their lives today are sometimes as incapable as Roosevelt’s physicians were a half century ago of saving their patients from the protracted suffering and indignities of a lingering death.

11. Illuminating the drama of DNA: creating a stage for inquiry

Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.

One creative approach we developed is narrative genomics, which uses drama with provocative characters and dialogue as an interdisciplinary pedagogical tool to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on the challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next-generation technologies, and illuminates bioethical issues, providing a stage on which to reflect on the controversies together and to temper the sometimes contentious debates that ensue.

As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelope genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.

From left to right, the structures of A-, B-, and Z-DNA by Zephyris (Richard Wheeler). CC-BY-SA-3.0 from Wikimedia Commons.

Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.

Building upon the legacy of using case vignettes as a clinical teaching modality, and inspired by “readers’ theater”, “narrative medicine,” and “narrative ethics” as approaches that helped us expand the analyses to the implications of genomic technologies, our experience suggests similar value for bioethics education within the translational research and public policy domain. While drama has often been utilized in academic and medical settings to facilitate empathy and spotlight ethical and legal controversies such as end-of-life issues and health law, to date there appear to be few dramatizations focusing on next-generation sequencing (NGS) in genomic research and medicine.

We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.

Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as to provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.

In a similar vein, bioethical controversies can also be brought to life with this approach, in which bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.

Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.

Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.

Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.

12. The truth about evidence

Rated by the British Medical Journal as one of the top 15 breakthroughs in medicine over the last 150 years, evidence-based medicine (EBM) is an idea that has become highly influential in both clinical practice and health policy-making. EBM promotes a seemingly irrefutable principle: that decision-making in medical practice should be based, as much as possible, on the most up-to-date research findings. Nowhere has this idea been more welcome than in psychiatry, a field that continues to be dogged by a legacy of controversial clinical interventions. Many mental health experts believe that following the rules of EBM is the best way of safeguarding patients from unproven fads or dangerous interventions. If something is effective or ineffective, EBM will tell us.

But it turns out that ensuring medical practice is based on solid evidence is not as straightforward as it sounds. After all, evidence does not emerge from thin air. There are finite resources for research, which means that there is always someone deciding what topics should be researched, whose studies merit funding, and which results will be published. These kinds of decisions are not neutral. They reflect the beliefs and values of policymakers, funders, researchers, and journal editors about what is important. And determining what is important depends on one’s goals: improving clinical practice to be sure, but also reaping profits, promoting one’s preferred hypotheses, and advancing one’s career. In other words, what counts as evidence is partly determined by values and interests.

Teenage Girl Visits Doctor’s Office Suffering With Depression via iStock. ©monkeybusinessimages.

Let’s take a concrete example from psychiatry. The two most common types of psychiatric interventions are medications and psychotherapy. As in all areas of medicine, manufacturers of psychiatric drugs play a very significant role in the funding of clinical research, more significant in dollar amount than government funding bodies. Pharmaceutical companies develop drugs in order to sell them and make profits, and they want to do so in a manner that maximizes revenue. Research into drug treatments has a natural sponsor — the companies who stand to profit from their sales. Meanwhile, psychotherapy has no such natural sponsor. There are researchers who are interested in psychotherapy and do obtain funding in order to study it. However, the body of research data supporting the use of pharmaceuticals is simply much larger and continues to grow faster than the body of data concerning psychotherapy. If one were to prioritize treatments that were evidence-based, one would have no choice but to privilege medications. In this way the values of the marketplace become incorporated into research, into evidence, and eventually into clinical practice.

The idea that values affect what counts as evidence is a particularly challenging problem for psychiatry because it has always suffered from the criticism that it is not sufficiently scientific. A broken leg is a fact, but whether someone is normal or abnormal is seen as a value judgement. There is a hope amongst proponents of evidence-based psychiatry that EBM can take this subjective component out of psychiatry, but it cannot. Showing that a drug, like an antidepressant, can make a person feel less sad does not take away the judgement that there is something wrong with being sad in the first place. The thorniest ethical problems in psychiatry surround clinical cases in which psychiatrists and/or families want to impose treatment on mentally ill persons in hopes of achieving a certain mental state that the patient himself does not want. At the heart of this dispute is whose version of a good life ought to prevail. Evidence doesn’t resolve this debate. Even worse, it might end up hiding it. After all, evidence that a treatment works for certain symptoms — like hallucinations — focuses our attention on getting rid of those symptoms rather than helping people in other ways, such as finding ways to learn to live with them.

The original authors of EBM worried that clinicians’ values and their exercise of judgment in clinical decision-making actually led to bad decisions and harmed patients. They wanted to get rid of judgment and values as much as possible and let scientific data guide practice instead. But this is not possible. No research is done without values, no data becomes evidence without judgments. The challenge for psychiatry is to be as open as possible about how values are intertwined with evidence. Frank discussion of the many ethical, cultural, and economic factors that inform psychiatry enriches rather than diminishes the field.

Heading image: Lexapro pills by Tom Varco. CC-BY-SA-3.0 via Wikimedia Commons.

13. Ethical issues in managing the current Ebola crisis

Until the current epidemic, Ebola was largely regarded as not a Western problem. Although fearsome, Ebola seemed contained to remote corners of Africa, far from major international airports. We are now learning the hard way that Ebola is not—and indeed was never—just someone else’s problem. Yes, this outbreak is different: it originated in West Africa, at the border of three countries, where the transportation infrastructure was better developed, and was well under way before it was recognized. But we should have understood that we are “all in this together” for Ebola, as for any infectious disease.

Understanding that we were profoundly wrong about Ebola can help us to see ethical considerations that should shape how we go forward. Here, I have space just to outline two: reciprocity and fairness.

In the aftermath of the global SARS epidemic that spread to Canada, the Joint Centre for Bioethics at the University of Toronto produced a touchstone document for pandemic planning, Stand on Guard for Thee, which highlights reciprocity as a value. When health care workers take risks to protect us all, we owe them special concern if they are harmed. Dr. Bruce Ribner, speaking on ABC, described Emory University Hospital as willing to take two US health care workers who became infected abroad because they believed these workers deserved the best available treatment for the risks they took for humanitarian ends. Calls to ban the return of US workers—or treatment in the United States of other infected front-line workers—forget that contagious diseases do not occur in a vacuum. Even Ann Coulter recognized, in her own unwitting way, that we owe support to first responders for the burdens they undertake for us all when she excoriated Dr. Kent Brantly for humanitarian work abroad rather than in the United States.

We too often fail to recognize that all the health care and public health workers at risk in the Ebola epidemic—and many have died—are owed duties of special concern. Yet unlike health care workers at Emory, health care workers on the front lines in Africa must make do with limited equipment under circumstances in which it is very difficult for them to be safe, according to a recent Wall Street Journal article. As we go forward we must remember the importance of providing adequately for these workers and for workers in the next predictable epidemics — not just for Americans who are able to return to the US for care. Supporting these workers means providing immediate care for those who fall ill, as well as ongoing care for them and their families if they die or are no longer able to work. But this is not all; health care workers on the front lines can be supported by efforts to minimize disease spread—for example conducting burials to minimize risks of infection from the dead—as well as by unceasing attention to the development of public health infrastructures so that risks can be swiftly identified and contained and care can be delivered as safely as possible.

Ebola in West Africa. Three humanitarian experts and six specialists in dangerous infectious diseases of the European Mobile Lab project have been deployed on the ground, with a mobile laboratory unit to help accelerate diagnoses. © EMLab, European Commission DG ECHO, EU Humanitarian Aid and Civil Protection. CC BY-ND 2.0 via European Commission DG ECHO Flickr.

Fairness requires treating others as we would like to be treated ourselves. A way of thinking about what is fair is to ask what we would want done if we did not know our position under the circumstances at hand. In a classic of political philosophy, A Theory of Justice, John Rawls suggested the thought experiment of asking what principles of justice we would be willing to accept for a society in which we were to live, if we didn’t know anything about ourselves except that we would be somewhere in that society. Infectious disease confronts us all with an actual possibility of the Rawlsian thought experiment. We are all enmeshed in a web of infectious organisms, potential vectors to one another and hence potential victims, too. We never know at any given point in time whether we will be victim, vector, or both. It’s as though we were all on a giant airplane, not knowing who might cough, or spit, or bleed, what to whom, and when. So we need to ask what would be fair under these brute facts of human interconnectedness.

At a minimum, we need to ask what would be fair about the allocation of Ebola treatments, both before and if they become validated and more widely available. Ethical issues such as informed consent and exploitation of vulnerable populations in testing of experimental medicines certainly matter but should not obscure that fairness does, too, whether we view the medications as experimental or last-ditch treatment. Should limited supplies be administered to the worst off? Are these the sickest, most impoverished, or those subjected to the greatest risks, especially risks of injustice? Or, should limited supplies be directed where they might do the most good—where health care workers are deeply fearful and abandoning patients, or where we need to encourage people who have been exposed to be monitored and isolated if needed?

These questions of fairness occur in the broader context of medicine development and distribution. ZMAPP (the experimental monoclonal antibody administered on a compassionate use basis to the two Americans) was jointly developed by the US government, the Public Health Agency of Canada, and a few very small companies. Ebola has not drawn a great deal of drug development attention; indeed, infectious diseases more generally have not drawn their fair share of attention from Big Pharma, at least as measured by the global burden of disease.

WHO has declared the Ebola epidemic an international emergency and is convening ethics experts to consider such questions as whether and how the experimental treatment administered to the two Americans should be made available to others. I expect that the values of reciprocity and fairness will surface in these discussions. Let us hope they do, and that their import is remembered beyond the immediate emergency.

Headline Image credit: Ebola virus virion. Created by CDC microbiologist Cynthia Goldsmith, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Centers for Disease Control and Prevention’s Public Health Image Library, #10816 . Public domain via Wikimedia Commons.

14. Morality, science, and Belgium’s child euthanasia law

By Tony Hope


Science and morality are often seen as poles apart. Doesn’t science deal with facts, and morality with, well, opinions? Isn’t science about empirical evidence, and morality about philosophy? In my view this is wrong. Science and morality are neighbours. Both are rational enterprises. Both require a combination of conceptual analysis, and empirical evidence. Many, perhaps most moral disagreements hinge on disagreements over evidence and facts, rather than disagreements over moral principle.

Consider the recent child euthanasia law in Belgium that allows a child to be killed – as a mercy killing – if: (a) the child has a serious and incurable condition with death expected to occur within a brief period; (b) the child is experiencing constant and unbearable suffering; (c) the child requests the euthanasia and has the capacity of discernment – the capacity to understand what he or she is requesting; and, (d) the parents agree to the child’s request for euthanasia. The law excludes children with psychiatric disorders. No one other than the child can make the request.

Is this law immoral? Thought experiments can be useful in testing moral principles. These are like the carefully controlled experiments that have been so useful in science. A lorry driver is trapped in the cab. The lorry is on fire. The driver is on the verge of being burned to death. His life cannot be saved. You are standing by. You have a gun and are an excellent shot and know where to shoot to kill instantaneously. The bullet will be able to penetrate the cab window. The driver begs you to shoot him to avoid a horribly painful death.

Would it be right to carry out the mercy killing? Setting aside legal considerations, I believe that it would be. It seems wrong to allow the driver to suffer horribly for the sake of preserving a moral ideal against killing.

Thought experiments are often criticised for being unrealistic. But this can be a strength. The point of the experiment is to test a principle, and the ways in which it is unrealistic can help identify the factual aspects that are morally relevant. If you and I agree that it would be right to kill the lorry driver then any disagreement over the Belgian law cannot be because of a fundamental disagreement over mercy killing. It is likely to be a disagreement over empirical facts or about how facts integrate with moral principles.

There is a lot of discussion of the Belgian law on the internet. Most of it against. What are the arguments?

Some allow rhetoric to ride roughshod over reason. Take this, for example: “I’m sure the Belgian parliament would agree that minors should not have access to alcohol, should not have access to pornography, should not have access to tobacco, but yet minors for some reason they feel should have access to three grams of phenobarbitone in their veins – it just doesn’t make sense.”

But alcohol, pornography and tobacco are all considered to be against the best interests of children. There is, however, a very significant reason for the ‘three grams of phenobarbitone’: it prevents unnecessary suffering for a dying child. There may be good arguments against euthanasia but using unexamined and poor analogies is just sloppy thinking.

I have more sympathy for personal experience. A mother of two terminally ill daughters wrote in the Catholic Herald: “Through all of their suffering and pain the girls continued to love life and to make the most of it…. I would have done anything out of love for them, but I would never have considered euthanasia.”

But this moving anecdote is no argument against the Belgian law. Indeed, under that law the mother’s refusal of euthanasia would be decisive. It is one thing for a parent to say, “I do not believe that euthanasia is in my child’s best interests”; it is quite another to say that any parent who thinks euthanasia is in their child’s best interests must be wrong.

To understand a moral position it is useful to state the moral principles and the empirical assumptions on which it is based. So I will state mine.

Moral Principles

  1. A mercy killing can be in a person’s best interests.
  2. A person’s competent wishes should have very great weight in what is done to her.
  3. Parents’ views as to what is right for their children should normally be given significant moral weight.
  4. Mercy killing, in the situation where a person is suffering and faces a short life anyway, and where the person is requesting it, can be the right thing to do.

Empirical assumptions

  1. There are some situations in which children with a terminal illness suffer so much that it is in their interests to be dead.
  2. There are some situations in which the child’s suffering cannot be sufficiently alleviated short of keeping the child permanently unconscious.
  3. A law can be formulated with sufficient safeguards to prevent euthanasia from being carried out in situations when it is not justified.


This last empirical claim is the most difficult to assess. Opponents of child euthanasia may believe such safeguards are not possible: that it is better not to risk sliding down the slippery slope. But the ‘slippery slope argument’ is morally problematic: it is an argument against doing the right thing on some occasions (carrying out a mercy killing when that is right) because of the danger of doing the wrong thing on other occasions (carrying out a killing when that is wrong). I prefer to focus on safeguards against slipping. But empirical evidence could lead me to change my views on child euthanasia. My guess is that for many people who are against the new Belgian law, it is the fear of the slippery slope that is ultimately crucial. Much moral disagreement, when carefully considered, comes down to disagreement over facts. Scientific evidence is a key component of moral argument.

Tony Hope is Emeritus Professor of Medical Ethics at the University of Oxford and the author of Medical Ethics: A Very Short Introduction.

Image credit: Legality of Euthanasia throughout the world, by Jrockley. Public domain via Wikimedia Commons.

15. Unfit for the future: The urgent need for moral enhancement

By Julian Savulescu and Ingmar Persson


First published in Philosophy Now Issue 91, July/Aug 2012.

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?

With great power comes great responsibility. However, evolutionary pressures have not developed for us a psychology that enables us to cope with the moral problems our new power creates. Our political and economic systems only exacerbate this. Industrialisation and mechanisation have enabled us to exploit natural resources so efficiently that we have over-stressed two-thirds of the most important eco-systems.

A basic fact about the human condition is that it is easier for us to harm each other than to benefit each other. It is easier for us to kill than it is for us to save a life; easier to injure than to cure. Scientific developments have enhanced our capacity to benefit, but they have enhanced our ability to harm still further. As a result, our power to harm is overwhelming. We are capable of forever putting an end to all higher life on this planet. Our success in learning to manipulate the world around us has left us facing two major threats: climate change – along with the attendant problems caused by increasingly scarce natural resources – and war, using immensely powerful weapons. What is to be done to counter these threats?

Our Natural Moral Psychology
Our sense of morality developed around the imbalance between our capacities to harm and to benefit on the small scale, in groups the size of a small village or a nomadic tribe – no bigger than a hundred and fifty or so people. To take the most basic example, we naturally feel bad when we cause harm to others within our social groups. And commonsense morality links responsibility directly to causation: the more we feel we caused an outcome, the more we feel responsible for it. So causing a harm feels worse than neglecting to create a benefit. The set of rights that we have developed from this basic rule includes rights not to be harmed, but not rights to receive benefits. And we typically extend these rights only to our small group of family and close acquaintances. When we lived in small groups, these rights were sufficient to prevent us harming one another. But in the age of the global society and of weapons with global reach, they cannot protect us well enough.

There are three other aspects of our evolved psychology which have similarly emerged from the imbalance between the ease of harming and the difficulty of benefiting, and which likewise have been protective in the past, but leave us open now to unprecedented risk:

  1. Our vulnerability to harm has left us loss-averse, preferring to protect against losses rather than to seek benefits of a similar magnitude.
  2. We naturally focus on the immediate future, and on our immediate circle of friends. We discount the distant future in making judgements, and can only empathise with a few individuals based on their proximity or similarity to us, rather than, say, on the basis of their situations. So our ability to cooperate, applying our notions of fairness and justice, is limited to a small circle of family and friends. Strangers, or out-group members, in contrast, are generally mistrusted, their tragedies downplayed, and their offences magnified.
  3. We feel responsible if we have individually caused a bad outcome, but less responsible if we are part of a large group causing the same outcome and our own actions can’t be singled out.


Case Study: Climate Change and the Tragedy of the Commons
There is a well-known cooperation or coordination problem called ‘the tragedy of the commons’. In its original terms, it asks whether a group of village herdsmen sharing common pasture can trust each other to the extent that it will be rational for each of them to reduce the grazing of their own cattle when necessary to prevent over-grazing. One herdsman alone cannot achieve the necessary saving if the others continue to over-exploit the resource. If they simply use up the resource he has saved, he has lost his own chance to graze but has gained no long term security, so it is not rational for him to self-sacrifice. It is rational for an individual to reduce his own herd’s grazing only if he can trust a sufficient number of other herdsmen to do the same. Consequently, if the herdsmen do not trust each other, most of them will fail to reduce their grazing, with the result that they will all starve.
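
As a rough illustration (not part of the original article), the herdsmen's predicament can be written down as a toy payoff model. The numbers below are purely hypothetical and chosen only to show the structure: over-grazing pays each individual more whatever the others do, yet universal over-grazing leaves everyone worse off than universal restraint.

```python
# Toy model of the tragedy of the commons (illustrative numbers only).
# Each of N herdsmen chooses to RESTRAIN (graze 1 unit) or OVER-GRAZE (graze 2 units);
# the value of every unit grazed falls as total grazing on the commons rises.

N = 10  # herdsmen sharing the pasture (hypothetical)

def value_per_unit(total_grazing):
    """Pasture quality declines linearly with total grazing."""
    return max(0.0, 1.0 - 0.04 * total_grazing)

def payoff(my_units, others_units):
    return my_units * value_per_unit(my_units + others_units)

for k in range(N):  # k = number of the other N-1 herdsmen who over-graze
    others = 2 * k + (N - 1 - k)
    print(f"{k} others over-graze: restrain={payoff(1, others):.2f}, "
          f"over-graze={payoff(2, others):.2f}")

# Whatever the others do, over-grazing pays the individual more, yet if all
# over-graze each ends up with 0.40, versus 0.60 if all had restrained.
```

Nothing hangs on these particular numbers; the point is only that individual rationality and collective welfare come apart, which is what scales up so badly in the climate case discussed next.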

The tragedy of the commons can serve as a simplified small-scale model of our current environmental problems, which are caused by billions of polluters, each of whom contributes some individually-undetectable amount of carbon dioxide to the atmosphere. Unfortunately, in such a model, the larger the number of participants the more inevitable the tragedy, since the larger the group, the less concern and trust the participants have for one another. Also, it is harder to detect free-riders in a larger group, and humans are prone to free ride, benefiting from the sacrifice of others while refusing to sacrifice themselves. Moreover, individual damage is likely to become imperceptible, preventing effective shaming mechanisms and reducing individual guilt.

Anthropogenic climate change and environmental destruction have additional complicating factors. Although there is a large body of scientific work showing that the human emission of greenhouse gases contributes to global climate change, it is still possible to entertain doubts about the exact scale of the effects we are causing – for example, whether our actions will make the global temperature increase by 2°C or whether it will go higher, even to 4°C – and how harmful such a climate change will be.

In addition, our bias towards the near future leaves us less able to adequately appreciate the graver effects of our actions, as they will occur in the more remote future. The damage we’re responsible for today will probably not begin to bite until the end of the present century. We will not benefit from even drastic action now, and nor will our children. Similarly, although the affluent countries are responsible for the greatest emissions, it is in general destitute countries in the South that will suffer most from their harmful effects (although Australia and the south-west of the United States will also have their fair share of droughts). Our limited and parochial altruism is not strong enough to provide a reason for us to give up our consumerist life-styles for the sake of our distant descendants, or our distant contemporaries in far-away places.

Given the psychological obstacles preventing us from voluntarily dealing with climate change, effective changes would need to be enforced by legislation. However, politicians in democracies are unlikely to propose such legislation. Effective measures will need to be tough, and so are unlikely to win a political leader a second term in office. Can voters be persuaded to sacrifice their own comfort and convenience to protect the interests of people who are not even born yet, or to protect species of animals they have never even heard of? Will democracy ever be able to free itself from powerful industrial interests? Democracy is likely to fail. Developed countries have the technology and wealth to deal with climate change, but we do not have the political will.

If we keep believing that responsibility is directly linked to causation, that we are more responsible for the results of our actions than the results of our omissions, and that if we share responsibility for an outcome with others our individual responsibility is lowered or removed, then we will not be able to solve modern problems like climate change, where each person’s actions contribute imperceptibly but inevitably. If we reject these beliefs, we will see that we in the rich, developed countries are more responsible for the misery occurring in destitute, developing countries than we are spontaneously inclined to think. But will our attitudes change?

Moral Bioenhancement
Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species. We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities. We have provided ourselves with the tools to end worthwhile life on Earth forever. Nuclear war, with the weapons already in existence today, could achieve this alone. If we must possess such a formidable power, it should be entrusted only to those who are both morally enlightened and adequately informed.

Objection 1: Too Little, Too Late?
We already have the weapons, and we are already on the path to disastrous climate change, so perhaps there is not enough time for this enhancement to take place. Moral educators have existed within societies across the world for thousands of years – Buddha, Confucius and Socrates, to name only three – yet we still lack the basic ethical skills we need to ensure our own survival is not jeopardised. As for moral bioenhancement, it remains a field in its infancy.

We do not dispute this. The relevant research is still at an early stage, and there is no guarantee that it will deliver in time, or at all. Our claim is merely that the requisite moral enhancement is theoretically possible – in other words, that we are not biologically or genetically doomed to cause our own destruction – and that we should do what we can to achieve it.

Objection 2: The Bootstrapping Problem
We face an uncomfortable dilemma as we seek out and implement such enhancements: they will have to be developed and selected by the very people who are in need of them, and as with all science, moral bioenhancement technologies will be open to abuse, misuse or even a simple lack of funding or resources.

The risks of misapplying any powerful technology are serious. Good moral reasoning was often overruled in small communities with simple technology, but now a failure of morality to guide us could have cataclysmic consequences. A turning point was reached in the middle of the last century with the invention of the atomic bomb. For the first time, continued technological progress was no longer clearly to the overall advantage of humanity. That is not to say we should therefore halt all scientific endeavour. It is possible for humankind to improve morally to the extent that we can use our new and overwhelming powers of action for the better. The very progress of science and technology increases this possibility by promising to supply new instruments of moral enhancement, which could be applied alongside traditional moral education.

Objection 3: Liberal Democracy – a Panacea?
In recent years we have put a lot of faith in the power of democracy. Some have even argued that democracy will bring an ‘end’ to history, in the sense that it will end social and political development by reaching its summit. Surely democratic decision-making, drawing on the best available scientific evidence, will enable government action to avoid the looming threats to our future, without any need for moral enhancement?

In fact, as things stand today, it seems more likely that democracy will bring history to an end in a different sense: through a failure to mitigate human-induced climate change and environmental degradation. This prospect is bad enough, but increasing scarcity of natural resources brings an increased risk of wars, which, with our weapons of mass destruction, makes complete destruction only too plausible.

Sometimes an appeal is made to the so-called ‘jury theorem’ to support the prospect of democracy reaching the right decisions: even if voters are on average only slightly more likely to get a choice right than wrong – suppose they are right 51% of the time – then, where there is a sufficiently large number of voters, a majority of them (i.e., at least 51% of the electorate) is almost certain to make the right choice.

However, if the evolutionary biases we have already mentioned – our parochial altruism and bias towards the near future – influence our attitudes to climatic and environmental policies, then there is good reason to believe that voters are more likely to get it wrong than right. The jury theorem then means it’s almost certain that a majority will opt for the wrong policies! Nor should we take it for granted that the right climatic and environmental policy will always appear in manifestoes. Powerful business interests and mass media control might block effective environmental policy in a market economy.
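
A quick numerical illustration of this point (our sketch, not part of the original article): the jury theorem cuts both ways. The 51% figure is taken from the text above; the mirror-image 49% for the pessimistic case, and the electorate sizes, are chosen purely for illustration.

```python
# Probability that a strict majority of n independent voters chooses correctly,
# when each voter is right with probability p (computed in log-space for stability).
from math import lgamma, log, exp

def prob_majority_correct(n, p):
    def log_pmf(k):  # log of the binomial probability of exactly k correct votes
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))
    return sum(exp(log_pmf(k)) for k in range(n // 2 + 1, n + 1))

for n in (101, 1001, 10001):  # odd electorate sizes, purely illustrative
    print(n, round(prob_majority_correct(n, 0.51), 3),
             round(prob_majority_correct(n, 0.49), 3))

# With p = 0.51 the majority verdict tends towards certainty of being right as n grows;
# with p = 0.49 it tends just as surely towards being wrong -- the theorem cuts both ways.
```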

Conclusion
Modern technology provides us with many means to cause our downfall, and our natural moral psychology does not provide us with the means to prevent it. The moral enhancement of humankind is necessary for there to be a way out of this predicament. If we are to avoid catastrophe by misguided employment of our power, we need to be morally motivated to a higher degree (as well as adequately informed about relevant facts). A stronger focus on moral education could go some way to achieving this, but as already remarked, this method has had only modest success during the last couple of millennia. Our growing knowledge of biology, especially genetics and neurobiology, could deliver additional moral enhancement, such as drugs or genetic modifications, or devices to augment moral education.

The development and application of such techniques is risky – it is after all humans in their current morally-inept state who must apply them – but we think that our present situation is so desperate that this course of action must be investigated.

We have radically transformed our social and natural environments by technology, while our moral dispositions have remained virtually unchanged. We must now consider applying technology to our own nature, supporting our efforts to cope with the external environment that we have created.

Biomedical means of moral enhancement may turn out to be no more effective than traditional means of moral education or social reform, but they should not be rejected out of hand. Advances are already being made in this area. However, it is too early to predict how, or even if, any moral bioenhancement scheme will be achieved. Our ambition is not to launch a definitive and detailed solution to climate change or other mega-problems. Perhaps there is no realistic solution. Our ambition at this point is simply to put moral enhancement in general, and moral bioenhancement in particular, on the table. Last century we spent vast amounts of resources increasing our ability to cause great harm. It would be sad if, in this century, we rejected opportunities to increase our capacity to create benefits, or at least to prevent such harm.

© Prof. Julian Savulescu and Prof. Ingmar Persson 2012

Julian Savulescu is a Professor of Philosophy at Oxford University and Ingmar Persson is a Professor of Philosophy at the University of Gothenburg. This article is drawn from their book Unfit for the Future: The Urgent Need for Moral Enhancement (Oxford University Press, 2012).


0 Comments on Unfit for the future: The urgent need for moral enhancement as of 1/1/1900
Add a Comment
16. The Mammography Furor: Why Both Opponents and Proponents of Screening Are Wrong


Robert M. Veatch is Professor of Medical Ethics at The Kennedy Institute of Ethics at Georgetown University. He received the career distinguished achievement award from Georgetown University in 2005 and has received honorary doctorates from Creighton and Union College. In his new book, Patient, Heal Thyself: How the “New Medicine” Puts the Patient in Charge, he sheds light on a fundamental change sweeping through the American health care system, a change that puts the patient in charge of treatment to an unprecedented extent. In the original article below, Veatch looks at the recent debate over mammograms.

Controversy has erupted over recommendations of a government-sponsored task force that are widely interpreted as opposing mammography for women ages 40-50 without special risk factors. This reverses an earlier recommendation favoring such screening. In response, a number of critics, including Bernadine Healy, the former head of the National Institutes of Health, and spokespersons for the American Cancer Society and the American College of Radiology, have challenged the recommendation, claiming that cutting out the screening will cost people’s lives. They insist that 40- to 50-year-olds should still be screened routinely.

Strange as it may seem, both of these positions are wrong. Both the defenders of the task force recommendations and the critics make the mistake of assuming that the data from medical science can tell a person what the correct decision is regarding a medical choice such as breast cancer screening. I am a defender of what I call the “new medicine,” the medicine in which it is up to the patient to make the value choices related to her medical treatment. In principle, decisions such as those addressed by the mammography task force and its critics cannot be derived from the facts alone. Each person must evaluate the possible outcomes based on his or her own beliefs and values. This is true not only for areas of obvious value judgment such as abortion and withdrawing life-support during terminal illness, but literally for every medical choice, no matter how mundane.

In the case of mammography screening for breast cancer, remarkable agreement exists on the medical facts. Mammography catches cancers that cannot be found by other techniques such as breast self-exam. People’s lives are saved by mammography. The problem is that many more lives can be saved by screening older women, in part because the incidence of cancer is greater. The task force expresses the benefit in terms of the number of people who would need to be screened to extend one life. For women 40 to 49, 1904 would have to be screened; for women 50-59, only 1339. Thus the absolute risk reduction from screening is greater for the older women. In an article published in last week’s Annals of Internal Medicine alongside the task force report, the same idea is expressed in terms of the percentage reduction in breast cancer deaths from screening compared to no screening. For women
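
For readers who want the arithmetic spelled out, here is a minimal sketch (ours, not Veatch's) converting the quoted numbers needed to screen into approximate absolute risk reductions, on the usual assumption that the absolute risk reduction is the reciprocal of the number needed to screen.

```python
# Hypothetical illustration: convert the quoted numbers needed to screen (NNS)
# into absolute risk reductions (ARR), assuming ARR = 1 / NNS.
nns = {"40-49": 1904, "50-59": 1339}  # figures quoted above

for age_group, n in nns.items():
    arr = 1 / n
    print(f"Ages {age_group}: NNS = {n}, ARR ≈ {arr:.4%}")  # roughly 0.05% vs 0.07%
```

Whether a risk reduction of this size is worth the costs and harms of screening is precisely the kind of value judgement that, on Veatch's view, the data alone cannot settle.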

0 Comments on The Mammography Furor: Why Both Opponents and Proponents of Screening Are Wrong as of 1/1/1900
Add a Comment