The birth of a healthy child in Sweden in October 2014, after a uterus transplant from a living donor, marked the advent of a new technique to help women with absent or non-functional uteruses bear genetic offspring. The Cleveland Clinic has now led American doctors into this space, performing the first US uterine transplant in February 2016.
Considering the well-documented problems of medical error, it’s remarkable how rarely medical care is actually observed. Of course there is much scrutiny of the data generated during the health care encounter, but that is not the same thing. For instance, while quality measures track data on how well blood pressure is managed, there are no measures of whether blood pressure is actually measured accurately.
Despite progress in the care and treatment of mental health problems, violence directed at self or others remains high in many parts of the world. Consequently, there is increasing attention to risk assessment in mental health. But is this doing more harm than good?
‘Ebola is a wake-up call.’ This is a common sentiment expressed by those who have reflected on the ongoing Ebola outbreak in West Africa. It is a reaction to the nearly 30,000 cases and over 11,000 deaths that have occurred since the first cases of the outbreak were reported in March 2014.
For years, my cholesterol level remained high, regardless of what I ate. I gave up all butter, cheese, red meat, and fried food. But every time I visited my doctor, he still shook his head sadly, as he looked at my lab results. Then, anti-cholesterol medications became available, and I started one.
Two women are being trained for work on a factory assembly line. As products arrive on a conveyor belt, their task is to wrap each product and place it back on the belt. Their supervisor warns them that failing to wrap even one product is a firing offense, but once they get started, the work seems easy.
The last two decades have witnessed truly remarkable growth in the field of palliative care. Such growth is challenging, and brings both uncertainties and optimism about the future. In this three-part blog, we’ll take a look at some of the complex issues of continuity, development and evolution in palliative medicine.
“Butler Library smells like Adderall and desperation.”
That note from a blogger at Columbia University isn’t exactly scientific. But it speaks to the atmosphere that settles in around exam time here, and at other competitive universities. For some portion of the students whose exams I’m grading this week, study drugs, stimulants, and cognitive enhancement are as much a part of finals as all-nighters and bluebooks. Exactly how many completed exams are coming to me via Adderall or Provigil is impossible to pin down. But we do know that studies have found past-year, nonprescribed stimulant use rates as high as 35% among students. We know, according to HHS, that full-time students use nonprescribed Adderall at twice the rate of non-students. We can suspect, too, that academics aren’t so different in this regard from their students. In an unscientific poll, 20% of the readers of Nature acknowledged off-label use of cognitive enhancement drugs (CEDs).
If this sounds like the windup to a drug-panic piece, it’s not. The use of cognitive enhancement drugs concerns me much less than the silence surrounding their use. At universities like Columbia, cognitive enhancement exists in something of an ethical gray zone: technically against rules that are mostly unenforced; an open conversation topic among students in the library at 2 a.m., but a blank spot in “official” academic culture. That blank in itself is worth our concern. CEDs aren’t going away–but more openness about their use could teach us something valuable about the kind of work we do here, and anywhere else focus-boosting pills are popped.
In fact, much of the anti-cognitive enhancement drug literature dwells on the ethics of work, on the question of how much credit we can and should take for our “enhanced” accomplishments. (In focusing on these arguments, I’m setting to one side any health concerns raised by off-label drug use. I’m doing that not because those concerns are unimportant, but because the most challenging bioethics writing on the topic is less about one drug or another than about the promises and limits of cognitive enhancement in general–up to and including drugs that haven’t been invented yet.) In Beyond Therapy, the influential 2003 report on enhancement technologies from the President’s Council on Bioethics, the central argument against CED use had to do with the kind of work we can honestly claim as our own: “The attainment of [excellence] by means of drugs…looks to many people (including some Members of this Council) to be ‘cheating’ or ‘cheap.’” Work done under the influence of CEDs “seems less real, less one’s own, less worthy of our admiration.”
Is that a persuasive argument for keeping cognitive enhancement drug use in the closet, or even for taking stronger steps to ban it on campus? I’m not so sure it is. This kind of anti-enhancement case rests on an assumption about authorship, which I call the individual view. It claims that the dignity and authenticity of our accomplishments lie largely in our ability to claim individual credit for our work. In a word, it’s producer-focused, not product-focused.
That’s a reasonable way to think about authorship–but much of the weight of the anti-cognitive enhancement drug case rests on the presumption that it’s the only way to think about authorship. In fact, there’s another view that’s just as viable: call it the collaborative view. It’s an impersonal way of seeing accomplishment; it’s a product-focused view; it’s less concerned with allocating ownership of our accomplishments and it’s less likely to emphasize originality as the most important mark of quality. It is founded on the understanding that all work, even the most seemingly original, is subject to influences and takes place in a social context.
You can’t tell the history of accomplishments in the arts and sciences without considering those who thought about their work in this way. We can see it in the “thefts” of content that led passages from Plutarch, via Shakespeare, to T.S. Eliot’s poetry, or in the constant musical borrowing that shapes jazz or blues or classical music. We can see it in the medieval architects and writers who, as C.S. Lewis observed, practiced a kind of “shared authorship,” layering changes one on top of the other until they produced cathedrals or manuscripts that are the product of dozens of anonymous hands. We can see it again in the words of writers like Mark Twain, who forcefully argued that “substantially all ideas are second hand,” or Eliot, who advised critics that “to divert interest from the poet to the poetry is a laudable aim.” We can even see it in the history of our language. Consider the evolution of words like genius (from the classical idea of a guardian spirit, to a special ability, to a talented person himself or herself), invent (from a literal meaning of “to find” to a secondary meaning of “to create”), and talent (from a valuable coin to an internal gift). As Owen Barfield has argued, these changes are marks of the way our understanding of accomplishment has become “internalized.” Where earlier writers tended to imagine inspiration as a process that happens from without, we’re more likely to see it as something that happens from within.
The collaborative view is valuable even for those of us who aren’t, say, producing historically-great art. It might relieve us of the anxiety that the work we produce is a commentary on our personal worth. It’s well-tailored to the creative borrowing and sampling that define the “remix culture” celebrated by writers like Lawrence Lessig. And it is, I think, a tonic against the kind of “callous meritocracy” that John Rawls cogently warned us about.
That’s not to suggest that the collaborative view is the one true perspective on accomplishment. I’d call it one of a range of possible emphases that have struggled or prospered with the times. But if that’s the case, then we’re free to think more critically about the view of work we want to emphasize at any given time.
What does any of this have to do with cognitive enhancement? The collaborative view I’ve outlined and a culture of open cognitive enhancement share some important links. It’s certainly not true that one has to use CEDs to take that view, but there are strong reasons why an honest and thoughtful CED user ought to do so.
Consider the case of a journalist like David Plotz, who kept a running diary of his two-day experiment with Provigil: “Today I am the picture of vivacity. I am working about twice as fast as usual. I have a desperate urge to write…. These have been the two most productive days I’ve had in years.”
How might such a writer account for the boost in his performance? Would he chalk it up to his inherent skill or effort, or to the temporary influence of a drug? If someone singled out his enhanced work for praise, would he be right in taking all the credit for himself and leaving none for the enhancement?
I don’t think he would be. There is a dishonesty in failing to acknowledge the enhancement, because that failure willingly creates a false assumption: it allows us to believe that the marginal improvement in performance reflects on the writer’s efforts, growing skill, or some other personal quality, when the truth seems to be otherwise. In other words, I don’t think enhancement is dishonest in itself–it’s failing to acknowledge enhancement that’s dishonest.
There’s nothing objectionable in collaborative work, forthrightly acknowledged. When we take an impersonal view of our work, we share credit and openly recognize our influences. And we can take a similar attitude to work done under the influence of cognitive enhancement drugs. When we speak of creative influences and working “under the influence” of CEDs, I think we’re exposing a similarity that runs deeper than a pun. Of course, one does not literally “collaborate” with a drug. But whether we acknowledge influences that shape our work or acknowledge the influence of a drug that helped us accomplish that work by improving our performance, we are forgoing full, personal credit. We are directing observers toward the quality of the work, rather than toward what the work may say about our personal qualities. We are, in a sense, making less of a “property claim” on the work. Given the history of innovators who willingly made this more modest claim, and given the benefits of the collaborative view that I’ve discussed, I don’t think that’s such bad news.
But could a culture of open cognitive enhancement drug use really one day change the way we think about work? There are no guarantees, to be sure. When I read first-person accounts of CED use, I’m struck by the way users perceive fast, temporary, and often surprising gains in focus, processing speed, and articulateness. With that strong subjective experience comes the experience of leaving, and returning to, an “unenhanced” state. The contrast seems visceral and difficult to overlook; the marginal gains in performance seem especially difficult to take credit for. The subjective experience of CED use looks like short-term growth in our abilities, arising from an external source, to which we cannot permanently lay claim. For just that reason, I have trouble agreeing with those, like Michael Sandel, who associate cognitive enhancement with “hubris.” Why not humility instead? Of course, I don’t claim that CEDs will inspire the same reflections in all of their users. It’s certainly possible to be unreflective about the implications of CED use. I only argue that it’s a little harder to be unreflective.
But that reflectiveness, in turn, requires openness about the enhancement already going on. As long as students fear job-market ramifications for talking on the record about their cognitive enhancement drug use, I wouldn’t nominate them as martyrs to the cause. But why not start with professors and academics–with, say, those 20% of respondents to the Nature poll? What’s tenure for anyway?
We simply can’t separate enhancement, of any kind, from the ends we ask of it and the work we do with it. So I sympathize with the New Yorker’s Margaret Talbot when she writes that “every era, it seems, has its own defining drug. Neuroenhancers are perfectly suited for the anxiety of white-collar competition in a floundering economy…. They facilitate a pinched, unromantic, grindingly efficient form of productivity.” Yet that’s giving the drug too much credit. I’d look instead to the culture that surrounds it. Our culture of cognitive enhancement is furtive, embarrassed, dedicated to one-upping one another on exams or on the tenure track. But a healthier culture of enhancement is conceivable, and it begins with a greater measure of honesty. Adderall and desperation don’t have to be synonymous, but as long as they are, I’d blame the desperation, not the drug.
Many in the media and academia (myself included) have been discussing the Ebola crisis, and more specifically, the issues that arise as Ebola has traveled with infected patients and health care workers to the United States and infected other US citizens.
These discussions have been fascinating and frightening, but the terrifying truth is that Ebola is just the tip of the iceberg. Diseases have long traveled with patients, and as the phenomena of medical tourism and the more general globalization of health care grow, these problems are likely to grow as well.
Medical tourists are very good targets of opportunity for pathogens. Many travel with compromised or suppressed immune systems to destination countries for treatment, in settings with relatively high infection rates, including the risk of exposure to multi-drug–resistant pathogens.
Doctors typically distinguish commensals—the bugs we normally carry on our skin, mouth, digestive tracts, etc.—from pathogens, the harmful bacteria that cause disease through infection. But what is a commensal for a person in India might be an exotic pathogen for a US population. Medical tourist patients transport their commensals and pathogens to the hospital environments of the destination countries to which they travel, and are in turn exposed to the commensals and pathogens of those hospitals and of the destination country’s population at large. These transmissions tax the health care system and the knowledge of physicians in the home country, to whom the new microbe may be unknown, making diagnosis and treatment more difficult.
Air travel can involve each of the four classical modes of disease transmission: contact (e.g. body-to-body or touching an armrest), common vehicle (e.g. via food or water), vector (e.g. via insects or vermin), and airborne (although more recent planes are equipped with high efficiency particulate air (HEPA) filters reducing transmission risk, older planes are not).
We have seen several diseases travel in this way. During the Severe Acute Respiratory Syndrome (SARS) outbreak of 2003, a three-hour flight from Hong Kong to Beijing carrying one SARS-infected passenger led to sixteen other passengers being subsequently confirmed as SARS cases, eight of whom had been sitting in the three rows in front of the infected passenger.
In January 2008, a new type of enzyme was detected in bacteria found in a fifty-nine-year-old man with a urinary tract infection being treated in Sweden. The man, Swedish but of Indian origin, had in the previous month undergone surgeries at two hospitals in India. The enzyme, labeled “New Delhi metallo-beta-lactamase-1” (NDM-1), was able to disarm many antibiotics, including one that was the last line of defense against common respiratory and urinary tract infections.
In 2009, a study found that twenty-nine UK patients had tested positive for bacteria carrying NDM-1 and that seventeen of the twenty-nine (about 60%) had traveled to India or Pakistan in the year before. A majority of those seventeen had received medical treatment while abroad in those countries, some for accidents or illness while traveling and others for medical tourism, either for kidney and bone marrow transplants or for cosmetic surgery.
High-income countries face significant problems with these infections. A 2002 study estimated that 1.7 million patients developed health care-acquired infections in the United States that year, ninety-nine thousand of whom died as a result. In Europe these infections have been estimated to cause thirty-seven thousand deaths a year and to add US $9.4 billion in direct costs.
What can be done? Although in theory airline or national travel rules can prevent infected patients from boarding planes, detecting these infections in passengers is very difficult for the airline or immigration officials, and concerns about privacy of patients may chill some interventions. A 2007 case of a man who flew from the United States to Europe with extensively resistant tuberculosis and who ultimately circumvented authorities who tried to stop him on return by flying to Montreal, Canada and renting a car, shows some of the limits on these restrictions.
Part of the solution is technological. The HEPA filters discussed above on newer model planes reduce the risk substantially, and we can hope for more breakthroughs.
Part of the solution is better regulation of antibiotic use, addressing overuse of antibiotics when they are not effective or necessary, underuse when they are needed, failure to complete a full course, counterfeit drugs, and excessive antibiotic use in food animals. This is not a magic bullet, however, and we see problems even in countries with prescription systems such as the United States.
We also need much better transparency and reaction time. Some countries reacted quickly to the report of the NDM-1 cases discussed above in issuing travel warnings and informing home country physicians, while others did not.
Finally, as became evident with Ebola, we need better protocols in place to screen returning medical tourism patients and to engage in infection control when needed.
Headline image credit: Ebola virus virion by CDC microbiologist Cynthia Goldsmith. Public domain via Wikimedia Commons.
When Eleanor Roosevelt died on this day (7 November) in 1962, she was widely regarded as “the greatest woman in the world.” Not only was she the longest-tenured First Lady of the United States, but also a teacher, author, journalist, diplomat, and talk-show host. She became a major participant in the intense debates over civil rights, economic justice, multiculturalism, and human rights that remain central to policymaking today. As her husband’s most visible surrogate and collaborator, she became the surviving partner who carried their progressive reform agenda deep into the post-war era, helping millions of needy Americans gain a foothold in the middle class, dismantling Jim Crow laws in the South, and transforming the United States from an isolationist into an internationalist power. In spite of her celebrity, or more likely because of it, she had to endure a prolonged period of intense suffering and humiliation before dying, due in large part to her end-of-life care.
Roosevelt’s terminal agonies began in April 1960 when, at 75 years of age, she consulted her personal physician, David Gurewitsch, for increasing fatigue. On detecting mild anemia and an abnormal bone marrow, he diagnosed “aplastic anemia” and warned Roosevelt that transfusions could bring temporary relief, but that sooner or later her marrow would break down completely and internal hemorrhaging would result. Roosevelt responded simply that she was “too busy to be sick.”
For a variety of arcane reasons, Roosevelt’s hematological disorder would be given a different name today – myelodysplastic disorder – and most likely treated with a bone marrow transplant. Unfortunately, in 1962 there was no effective treatment for Roosevelt’s hematological disorder, and over the ensuing two years, Gurewitsch’s grim prognosis proved correct. Though she entered Columbia-Presbyterian Hospital in New York City repeatedly for tests and treatments, her “aplastic anemia” progressively worsened. Premarin produced only vaginal bleeding necessitating dilatation and curettage; transfusions gave temporary relief of her fatigue, but at the expense of severe bouts of chills and fever. Repeated courses of prednisone produced only the complications of a weakened immune system. By September 1962, deathly pale, covered with bruises, and passing tarry stools, Roosevelt begged Gurewitsch in vain to let her die. She began spitting out pills or hiding them under her tongue, refused further tests, and demanded to go home. Eight days after leaving the hospital, the TB bacillus was cultured from her bone marrow.
Gurewitsch was elated. The new finding, he proclaimed, had increased Roosevelt’s chances of survival “by 5000%.” Roosevelt’s family, however, was unimpressed and insisted that their mother’s suffering had gone on long enough. Undeterred, Gurewitsch doubled the dose of TB medications, gave additional transfusions, and ordered tracheal suctioning and a urinary catheter inserted.
In spite of these measures, Roosevelt’s condition continued to deteriorate. Late in the afternoon of 7 November 1962 she ceased breathing. Attempts at closed chest resuscitation with mouth-to-mouth breathing and intra-cardiac adrenalin were unsuccessful.
Years later, when reflecting upon these events, Gurewitsch opined: “He had not done well by [Roosevelt] toward the end. She had told him that if her illness flared up again and fatally that she did not want to linger on and expected him to save her from the protracted, helpless, dragging out of suffering. But he could not do it.” He said, “When the time came, his duty as a doctor prevented him.”
The ethical standards of morally optimal care for the dying we hold dear today had not yet been articulated when Roosevelt became ill and died. Most of them were violated (albeit unknowingly) by Roosevelt’s physicians in their desperate efforts to halt the progression of her hematological disorder: that of non-maleficence (i.e., avoiding harm), by pushing prednisone after it was having no apparent therapeutic effect; that of beneficence (i.e., limiting interventions to those that are beneficial), by performing cardiopulmonary resuscitation in the absence of any reasonable prospect of a favorable outcome; and that of futility (avoiding futile interventions), by continuing transfusions, performing tracheal suctioning and (some might even argue) beginning anti-tuberculosis therapy after it was clear that Roosevelt’s condition was terminal.
Roosevelt’s physicians also unknowingly violated the principle of respect for persons, by ignoring her repeated pleas to discontinue treatment. However, physician-patient relationships were more paternalistic then, and in 1962 many, if not most, physicians likely would have done as Gurewitsch did, believing as he did that their “duty as doctors” compelled them to preserve life at all cost.
Current bioethical concepts and attitudes would dictate a different, presumably more humane, end-of-life care for Eleanor Roosevelt from that received under the direction of Dr. David Gurewitsch. While arguments can be made about whether any ethical principles are timeless, Gurewitsch’s own retrospective angst over his treatment of Roosevelt, coupled with ancient precedents proscribing futile and/or maleficent interventions, and an already growing awareness of the importance of respect for patients’ wishes in the early part of the 20th century, suggest that even by 1962 standards, Roosevelt’s end-of-life care was misguided. Nevertheless, in criticizing Gurewitsch for his failure “to save [Roosevelt] from the protracted, helpless, dragging out of suffering,” one has to wonder if and when a present-day personal physician of a patient as prominent as Roosevelt would have the fortitude to inform her that nothing more can be done to halt the progression of the disorder that is slowly carrying her to her grave. One wonders further if and when that same personal physician would have the fortitude to inform a deeply concerned public that no further treatment will be given, because in his professional opinion, his famous patient’s condition is terminal and further interventions will only prolong her suffering.
Evidence that recent changes in the bioethics of dying have had an impact on the end-of-life care of famous patients is mixed. Former President Richard Nixon and another famous former First Lady, Jacqueline Kennedy Onassis, both had living wills and died peacefully after forgoing potentially life-prolonging interventions. The deaths of Nelson Mandela and Ariel Sharon were different. Though 95 years of age and clearly over-mastered by a severe lung infection as early as June 2013, Mandela was maintained on life support in a vegetative state for another six months before finally dying in December of that year. Sharon’s dying was even more protracted, thanks to the aggressive end-of-life care provided by Israeli physicians. After a massive hemorrhagic stroke destroyed his cognitive abilities in 2006, a series of surgeries and on-going medical care kept Sharon alive until renal failure finally ended his suffering in January 2014. Thus, although bioethical concepts and attitudes regarding end-of-life care have undergone radical changes since 1962, these contrasting cases suggest that those caring for world leaders at the end of their lives today are sometimes as incapable as Roosevelt’s physicians were a half century ago in saving their patients from the protracted suffering and indignities of a lingering death.
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.
One creative approach we developed is narrative genomics, which uses drama with provocative characters and dialogue as an interdisciplinary pedagogical approach to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on the inherent challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next-generation technologies and illuminates bioethical issues, providing a stage on which to reflect on the controversies together and to temper the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelop genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as to provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach, where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah’s Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self-determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and notice how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. Many times I have had the feeling that a revolution was ongoing in epidemiology, while reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.
One example: at present, in clinical and epidemiological research, “paradoxes” are being deconstructed every week. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps one way to explain the main reason for arguing that observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
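To make the collider mechanism concrete, here is a minimal simulation sketch of the idea, not drawn from any of the papers mentioned above. It assumes, purely for illustration, that obesity has no causal effect on death, that an unmeasured factor (called “frailty” here) raises both the risk of T2D and the risk of death, and that obesity raises only the risk of T2D; the variable names and effect sizes are invented for the example.

```python
# Minimal collider-bias sketch (illustrative assumptions only): obesity has NO
# causal effect on death, yet appears protective once we restrict to T2D patients.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

obesity = rng.binomial(1, 0.3, n)   # exposure of interest
frailty = rng.binomial(1, 0.3, n)   # unmeasured cause of both T2D and death

# Developing T2D (the collider) depends on BOTH obesity and frailty.
t2d = rng.binomial(1, 0.05 + 0.25 * obesity + 0.25 * frailty)

# Death depends on frailty only -- by construction, obesity plays no causal role.
death = rng.binomial(1, 0.05 + 0.30 * frailty)

def risk_ratio(exposure, outcome):
    """Risk of the outcome in the exposed divided by the risk in the unexposed."""
    return outcome[exposure == 1].mean() / outcome[exposure == 0].mean()

print("RR of death, obese vs non-obese, whole cohort: %.2f" % risk_ratio(obesity, death))
print("RR of death, obese vs non-obese, T2D patients: %.2f"
      % risk_ratio(obesity[t2d == 1], death[t2d == 1]))
```

In the whole cohort the risk ratio comes out close to 1.0, but restricted to T2D patients it drops below 1: an “obesity paradox” produced with no causal effect of obesity on death anywhere in the data-generating process, only by conditioning on the collider.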
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much stricter than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper than, and beyond, the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
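As a small illustration of how a DAG encodes such assumptions, the sketch below writes the obesity-paradox structure from the example above as a directed graph and queries d-separation. It is only a sketch: the three-node structure is illustrative rather than a definitive causal model, and it assumes a version of the networkx library that exposes a d_separated helper (added in 2.8; newer releases rename it is_d_separator).

```python
# Encoding an assumed causal structure as a DAG and querying d-separation
# (requires networkx >= 2.8; the structure itself is an illustrative assumption).
import networkx as nx

# Assumed structure: obesity -> T2D <- frailty -> death  (T2D is a collider).
dag = nx.DiGraph([("obesity", "T2D"), ("frailty", "T2D"), ("frailty", "death")])

# Without conditioning, the collider T2D blocks the only path between obesity and death.
print(nx.d_separated(dag, {"obesity"}, {"death"}, set()))    # True: no association implied

# Conditioning on T2D (restricting the analysis to diabetics) opens that path.
print(nx.d_separated(dag, {"obesity"}, {"death"}, {"T2D"}))  # False: a spurious association can appear
```

Reading implied associations off the diagram in this way, rather than only off the data, is the kind of reasoning these new methods formalize.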
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance of the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and T. VanderWeele’s soon-to-be-published Explanation in Causal Inference: Methods for Mediation and Interaction (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!” It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods, because the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature, what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh-cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress, improving mood, and increasing attentional capacities to facilitating and creating social bonds. A variety of work supports nature being healing and health promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing; it’s experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and help me feel that same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up indoors, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health promoting and healing properties of nature, we can impress upon individuals the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
We’re getting ready for Halloween this month by reading the classic horror stories that set the stage for the creepy movies and books we love today. Check in every Friday this October as we tell Fitz-James O’Brien’s tale of an unusual entity in What Was It?, a story from the spine-tingling collection of works in Horror Stories: Classic Tales from Hoffmann to Hodgson, edited by Darryl Jones. When we last left off, the narrator was headed to bed after a night of opium and philosophical conversation with Dr. Hammond, a friend and fellow boarder at the supposedly haunted house where they are staying.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful!
I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,— touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
Check back next Friday, 24 October to find out what happens next. Missed a part of the story? Catch up with part 1 and part 2.
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in-depth blog posts from the meeting, so make sure to check back often.
We look forward to seeing you all next year at the Annual Meeting in Florida. And special thanks to Margo Shea for sending in her reflections on the meeting and to Jaycie Vos (@jaycie_v) for putting together the Storify.
Headline image credit: Madison, Wisconsin cityscape at night, looking across Lake Monona from Olin Park. Photo by Richard Hurd. CC BY 2.0 via rahimageworks Flickr.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of internal, Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from the three groups we call Active Resisters, Organizational Constipators, and Timeservers. While we know these types of individuals exist in hospitals since we have seen them in action, we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct “sedation vacations” (pauses in sedation used to assess whether a patient on a mechanical breathing machine can be weaned from the ventilator sooner), for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires a different style of engagement for each group.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph showing a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Efforts to fire them can sometimes be difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found that the key to a positive result is a culture of excellence, in which the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism), intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn in to the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club, or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution, or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive, or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon -- the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live).
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.
One creative approach we developed is narrative genomics: the use of drama, with provocative characters and dialogue, as an interdisciplinary pedagogical method for bringing to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on the challenges currently posed by the comprehensive interrogation and analysis of DNA through next-generation sequencing of the human genome. It illuminates bioethical issues, providing a stage on which to reflect on the controversies together and to temper the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelop genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach, where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of the most hot button topics today.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the 19th century thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them:
English path: German Pfad
English pepper: German Pfeffer
English pipe: German Pfeife
English pan: German Pfanne
English post: German Pfosten
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
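To make the ‘peeling back’ step concrete, here is a minimal toy sketch (my own illustration, not from the original post) that simply undoes the word-initial p > pf change in the German words listed above, recovering forms one step closer to their English cognates and, by hypothesis, to the common ancestor. Real reconstruction involves many interacting changes applied across whole vocabularies and languages, so treat this only as an illustration of the idea.

```python
# Toy sketch (not from the original post): "peeling back" one regular sound
# change. Assumption: German word-initial "pf" developed from an earlier "p",
# as in the path/Pfad and pepper/Pfeffer correspondences listed above.

GERMAN_WORDS = ["Pfad", "Pfeffer", "Pfeife", "Pfanne", "Pfosten"]

def undo_initial_pf(word: str) -> str:
    """Reverse the p > pf change at the beginning of a word."""
    if word.lower().startswith("pf"):
        return word[0] + word[2:]  # keep the initial "P", drop the "f"
    return word

for w in GERMAN_WORDS:
    print(f"{w:8} -> {undo_initial_pf(w)}")
# Pfad -> Pad, Pfeffer -> Peffer, ...: forms one step closer to the (Old)
# English cognates and, by hypothesis, to the Proto-Germanic ancestors.
```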
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed protolanguages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that there is nothing quite like the regularity of sound change in syntax: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian or Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started a new type of research publication at Political Analysis: Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief—only 2-4 pages and no longer than 1500-3000 words.
Why is Political Analysis taking this new direction, looking for shorter submissions?
Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer format of journal articles that are currently the standard in the social sciences, but fit well with the shorter format that is often used in the sciences to convey important new findings. In this regard, the role models for Political Analysis Letters are the similar formats used in top general-interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.
Checking the website for the Audio Engineering Society (AES) convention in Los Angeles, I took note of the slides promoting the event. Each heading was framed as follows: If it’s about ____________, it’s at AES. The slide show contained nine headings covering areas that will be part of the upcoming convention (in no particular order, because you start at whatever point the slide show happens to be on when you log in to the site).
Archiving & Restoration
Networked Audio
Broadcast & Streaming
Product Design
Recording
Project Studios
Sound for Picture
Live Sound
Game Sound
The list was interesting to me on many levels, but one significant one that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three of them represent diminished opportunities in the field of music recording, with one a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services continue to capture more of the consumer market share, they continue to diminish artists’ ability to earn a decent living (or pay an accomplished audio engineer). A friend of mine with 3 CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17. CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing,” which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall that is restricting audio jobs by shrinking the earning capacity of recording artists.
Project studios now dominate most recording projects outside of reasonably well-funded major-label records, and even most of that major-label work is done in project studios (though those might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up in the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. Of course, almost no project studios provide regular, dependable work, let alone any kind of benefits.
Networked audio systems provide new, faster, and more elaborate connectivity of audio using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks, there is, once again, a shrinking of opportunities for those aspiring to make commercial music recordings. In many instances, these networking systems allow fewer people to do more—a boon only to a small number of audio engineers working with music recordings, who can now do remote recordings without having to be present and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is Game Sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and higher in quality, and they feature more prominent and talented composers and musicians than virtually any other area of music production. The only reservation here is that the music is intended as secondary to the game play (of course), and there is a preponderance of violent video games and therefore musical styles that tend to fit well into a violent atmosphere. However, this is changing, with a much broader array of game types achieving new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc. and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher – and since its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was now proving to be more than just an interesting side line. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other acquisitions – for example Nature purchasing Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC they have also massively increased their fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Typically, top journals have tended to have their niche and to be selective. A journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e., is the basic science sound?) rather than a judgment of novelty or importance. This enabled PLOS ONE to rapidly turn into the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA which enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal. The price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals and, more than any one title, it has energised the development of OA, dispelling previously-held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through 2014’s Higher Education Funding Council (HEFCE) for England policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers – as it directly ties the assessment of a university’s funding to their success in ensuring their authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was believed that studies of common disorders of the nervous system began with Greco-Roman Medicine, for example, epilepsy, “The sacred disease” (Hippocrates) or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law and medicine conveyed in the cuneiform script, impressed into clay tablets with reeds, the earliest form of writing which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire cuneiform writing was replaced by Aramaic and simpler alphabetic scripts and was only revived (translated) by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts located in museums in London, Paris, Berlin and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression and anxiety. For example they described most of the common seizure types we know today e.g. tonic clonic, absence, focal motor, etc, as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion) and even a comprehensive account of schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. The latter subjective phenomena only became a relatively modern field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self awareness, which is central to the concept of “mental illness”, which only became the province of a professional medical discipline, i.e. psychiatry, in the last 200 years.
The theme of this year’s meeting is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combatting corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year’s keynote comes from Lori Damrosch, Hamilton Fish Professor of International Law and Diplomacy at Columbia Law School and President of the American Society of International Law: “Democratization of Foreign Policy and International Law, 1914-2014,” Friday, 1:30PM (Room 2-02A)
Several talks on recent events in Crimea. (Check out our OPIL Debate Map: Ukraine Use of Force, to learn more on the subject in advance.)
“European Union – Challenges or Chaos,” Friday, 9:00AM (Room 2-02A)
“Update on the International Criminal Court’s Crime of Aggression: Considering Crimea,” Friday, 10:45AM (Room 2-02B)
“Self-Determination, Secession, and Non Intervention in the Age of Crimea and Kosovo,” Friday, 4:45PM (Room 2-02B)
The “International Adjudication in the 21st Century” panel, including OUP author Cesare Romano, will discuss the key findings of the recently published The Oxford Handbook of International Adjudication. Friday, 9:00AM (Room 2-01B). (Read up on the topic before the event, with free content from the book.)
Top practitioners in the field discuss “International Investment Arbitration and the Rule of Law”, Friday 4:45PM (Room 2-02A). (Sign up for our Free Investment Claims Webinar on October 20th to brush up on VCLT in BIT arbitrations in time for this panel.)
Looking for career advice? Attend the roundtable discussion “Careers in International Human Rights, International Development, and International Rule of Law,” Saturday, 3:30PM (Room 2-02B)
Fordham Law School is located in the wonderful Lincoln Square neighborhood of New York and just around the corner from some great activities after the conference:
ILW Opening Reception. The wine and cheese reception at the Association of the Bar of the City of New York is open to all ILW attendees. 2nd Floor, Reception Area, ABCNY, Thursday at 8:00PM.
Of course, we hope to see you at the Oxford University Press booth. We’ll be offering the chance to browse and buy our new and bestselling titles on display at a 20% conference discount, discover what’s new in Oxford Law Online, and pick up sample copies of our latest law journals.
To follow the latest updates about the ILW Conference as it happens, follow us on Twitter at @OUPIntLaw and the hashtag #ILW2014.
See you there!
Headline image credit: 2011, 62nd St by Cornerstones of New York, CC BY-NC 2.0 via Flickr.
As an Africanist historian committed to reaching broader publics, I was thrilled when the research team for the BBC’s genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved the subject of some of my research, mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about shifting practices and perceptions of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but he discovered along the way that his great grandfather was a British mining accountant who worked in the Gold Coast for close to a decade. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father — Reggie’s great-great grandfather — was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious that frequently appeared in the African-owned Gold Coast press during these years. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. The latter helps to account for why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my American Historical Review article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, albeit imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty surrounding these relationships is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. His insight surely reverberates throughout the episode.
Voting for the 2014 Atlas Place of the Year is now underway. However, you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the top locations in the world that make us go “wow”. For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December. What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
Heading image: Old Globe by Petar Milošević. CC-BY-SA-3.0 via Wikimedia Commons.
Rated by the British Medical Journal as one of the top 15 breakthroughs in medicine over the last 150 years, evidence-based medicine (EBM) is an idea that has become highly influential in both clinical practice and health policy-making. EBM promotes a seemingly irrefutable principle: that decision-making in medical practice should be based, as much as possible, on the most up-to-date research findings. Nowhere has this idea been more welcome than in psychiatry, a field that continues to be dogged by a legacy of controversial clinical interventions. Many mental health experts believe that following the rules of EBM is the best way of safeguarding patients from unproven fads or dangerous interventions. If something is effective or ineffective, EBM will tell us.
But it turns out that ensuring medical practice is based on solid evidence is not as straightforward as it sounds. After all, evidence does not emerge from thin air. There are finite resources for research, which means that there is always someone deciding what topics should be researched, whose studies merit funding, and which results will be published. These kinds of decisions are not neutral. They reflect the beliefs and values of policymakers, funders, researchers, and journal editors about what is important. And determining what is important depends on one’s goals: improving clinical practice to be sure, but also reaping profits, promoting one’s preferred hypotheses, and advancing one’s career. In other words, what counts as evidence is partly determined by values and interests.
Let’s take a concrete example from psychiatry. The two most common types of psychiatric interventions are medications and psychotherapy. As in all areas of medicine, manufacturers of psychiatric drugs play a very significant role in the funding of clinical research, more significant in dollar amount than government funding bodies. Pharmaceutical companies develop drugs in order to sell them and make profits and they want to do so in such a manner that maximizes revenue. Research into drug treatments has a natural sponsor — the companies who stand to profit from their sales. Meanwhile, psychotherapy has no such natural sponsor. There are researchers who are interested in psychotherapy and do obtain funding in order to study it. However, the body of research data supporting the use of pharmaceuticals is simply much larger and continues to grow faster than the body of data concerning psychotherapy. If one were to prioritize treatments that were evidence-based, one would have no choice but to privilege medications. In this way the values of the marketplace become incorporated into research, into evidence, and eventually into clinical practice.
The idea that values affect what counts as evidence is a particularly challenging problem for psychiatry because it has always suffered from the criticism that it is not sufficiently scientific. A broken leg is a fact, but whether someone is normal or abnormal is seen as a value judgement. There is a hope amongst proponents of evidence-based psychiatry that EBM can take this subjective component out of psychiatry, but it cannot. Showing that a drug, like an antidepressant, can make a person feel less sad does not take away the judgement that there is something wrong with being sad in the first place. The thorniest ethical problems in psychiatry surround clinical cases in which psychiatrists and/or families want to impose treatment on mentally ill persons in hopes of achieving a certain mental state that the patient himself does not want. At the heart of this dispute is whose version of a good life ought to prevail. Evidence doesn’t resolve this debate. Even worse, it might end up hiding it. After all, evidence that a treatment works for certain symptoms — like hallucinations — focuses our attention on getting rid of those symptoms rather than helping people in other ways, such as finding ways to learn to live with them.
The original authors of EBM worried that clinicians’ values and their exercise of judgment in clinical decision-making actually led to bad decisions and harmed patients. They wanted to get rid of judgment and values as much as possible and let scientific data guide practice instead. But this is not possible. No research is done without values, no data becomes evidence without judgments. The challenge for psychiatry is to be as open as possible about how values are intertwined with evidence. Frank discussion of the many ethical, cultural, and economic factors that inform psychiatry enriches rather than diminishes the field.
Heading image: Lexapro pills by Tom Varco. CC-BY-SA-3.0 via Wikimedia Commons.
Until the current epidemic, Ebola was largely regarded as not a Western problem. Although fearsome, Ebola seemed contained to remote corners of Africa, far from major international airports. We are now learning the hard way that Ebola is not—and indeed was never—just someone else’s problem. Yes, this outbreak is different: it originated in West Africa, at the border of three countries, where the transportation infrastructure was better developed, and was well under way before it was recognized. But we should have understood that we are “all in this together” for Ebola, as for any, infectious disease.
Understanding that we were profoundly wrong about Ebola can help us to see ethical considerations that should shape how we go forward. Here, I have space just to outline two: reciprocity and fairness.
In the aftermath of the global SARS epidemic that spread to Canada, the Joint Centre for Bioethics at the University of Toronto produced a touchstone document for pandemic planning, Stand on Guard for Thee, which highlights reciprocity as a value. When health care workers take risks to protect us all, we owe them special concern if they are harmed. Dr. Bruce Ribner, speaking on ABC, described Emory University Hospital as willing to take two US health care workers who became infected abroad because they believed these workers deserved the best available treatment for the risks they took for humanitarian ends. Calls to ban the return of US workers—or treatment in the United States of other infected front-line workers—forget that contagious diseases do not occur in a vacuum. Even Ann Coulter recognized, in her own unwitting way, that we owe support to first responders for the burdens they undertake for us all when she excoriated Dr. Kent Brantly for humanitarian work abroad rather than in the United States.
We too often fail to recognize that all the health care and public health workers at risk in the Ebola epidemic—and many have died—are owed duties of special concern. Yet unlike health care workers at Emory, health care workers on the front lines in Africa must make do with limited equipment under circumstances in which it is very difficult for them to be safe, according to a recent Wall Street Journal article. As we go forward we must remember the importance of providing adequately for these workers and for workers in the next predictable epidemics — not just for Americans who are able to return to the US for care. Supporting these workers means providing immediate care for those who fall ill, as well as ongoing care for them and their families if they die or are no longer able to work. But this is not all; health care workers on the front lines can be supported by efforts to minimize disease spread—for example conducting burials to minimize risks of infection from the dead—as well as unceasing attention to the development of public health infrastructures so that risks can be swiftly identified and contained and care can be delivered as safely as possible.
Fairness requires treating others as we would like to be treated ourselves. A way of thinking about what is fair is to ask what we would want done if we did not know our position under the circumstances at hand. In a classic of political philosophy, A Theory of Justice, John Rawls suggested the thought experiment of asking what principles of justice we would be willing to accept for a society in which we were to live, if we didn’t know anything about ourselves except that we would be somewhere in that society. Infectious disease confronts us all with an actual possibility of the Rawlsian thought experiment. We are all enmeshed in a web of infectious organisms, potential vectors to one another and hence potential victims, too. We never know at any given point in time whether we will be victim, vector, or both. It’s as though we were all on a giant airplane, not knowing who might cough, or spit, or bleed, what to whom, and when. So we need to ask what would be fair under these brute facts of human interconnectedness.
At a minimum, we need to ask what would be fair about the allocation of Ebola treatments, both before and if they become validated and more widely available. Ethical issues such as informed consent and exploitation of vulnerable populations in testing of experimental medicines certainly matter but should not obscure that fairness does, too, whether we view the medications as experimental or last-ditch treatment. Should limited supplies be administered to the worst off? Are these the sickest, most impoverished, or those subjected to the greatest risks, especially risks of injustice? Or, should limited supplies be directed where they might do the most good—where health care workers are deeply fearful and abandoning patients, or where we need to encourage people who have been exposed to be monitored and isolated if needed?
These questions of fairness occur in the broader context of medicine development and distribution. ZMapp (the experimental monoclonal antibody administered on a compassionate use basis to the two Americans) was jointly developed by the US government, the Public Health Agency of Canada, and a few very small companies. Ebola has not drawn a great deal of drug development attention; indeed, infectious diseases more generally have not drawn their fair share of attention from Big Pharma, at least as measured by the global burden of disease.
WHO has declared the Ebola epidemic an international emergency and is convening ethics experts to consider such questions as whether and how the experimental treatment administered to the two Americans should be made available to others. I expect that the values of reciprocity and fairness will surface in these discussions. Let us hope they do, and that their import is remembered beyond the immediate emergency.
Headline Image credit: Ebola virus virion. Created by CDC microbiologist Cynthia Goldsmith, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Centers for Disease Control and Prevention’s Public Health Image Library, #10816. Public domain via Wikimedia Commons.
Science and morality are often seen as poles apart. Doesn’t science deal with facts, and morality with, well, opinions? Isn’t science about empirical evidence, and morality about philosophy? In my view this is wrong. Science and morality are neighbours. Both are rational enterprises. Both require a combination of conceptual analysis, and empirical evidence. Many, perhaps most moral disagreements hinge on disagreements over evidence and facts, rather than disagreements over moral principle.
Consider the recent child euthanasia law in Belgium that allows a child to be killed – as a mercy killing – if: (a) the child has a serious and incurable condition with death expected to occur within a brief period; (b) the child is experiencing constant and unbearable suffering; (c) the child requests the euthanasia and has the capacity of discernment – the capacity to understand what he or she is requesting; and, (d) the parents agree to the child’s request for euthanasia. The law excludes children with psychiatric disorders. No one other than the child can make the request.
Is this law immoral? Thought experiments can be useful in testing moral principles. These are like the carefully controlled experiments that have been so useful in science. A lorry driver is trapped in the cab. The lorry is on fire. The driver is on the verge of being burned to death. His life cannot be saved. You are standing by. You have a gun and are an excellent shot and know where to shoot to kill instantaneously. The bullet will be able to penetrate the cab window. The driver begs you to shoot him to avoid a horribly painful death.
Would it be right to carry out the mercy killing? Setting aside legal considerations, I believe that it would be. It seems wrong to allow the driver to suffer horribly for the sake of preserving a moral ideal against killing.
Thought experiments are often criticised for being unrealistic. But this can be a strength. The point of the experiment is to test a principle, and the ways in which it is unrealistic can help identify the factual aspects that are morally relevant. If you and I agree that it would be right to kill the lorry driver then any disagreement over the Belgian law cannot be because of a fundamental disagreement over mercy killing. It is likely to be a disagreement over empirical facts or about how facts integrate with moral principles.
There is a lot of discussion of the Belgian law on the internet, most of it against. What are the arguments?
Some allow rhetoric to ride roughshod over reason. Take this, for example: “I’m sure the Belgian parliament would agree that minors should not have access to alcohol, should not have access to pornography, should not have access to tobacco, but yet minors for some reason they feel should have access to three grams of phenobarbitone in their veins – it just doesn’t make sense.”
But alcohol, pornography and tobacco are all considered to be against the best interests of children. There is, however, a very significant reason for the ‘three grams of phenobarbitone’: it prevents unnecessary suffering for a dying child. There may be good arguments against euthanasia but using unexamined and poor analogies is just sloppy thinking.
I have more sympathy for personal experience. A mother of two terminally ill daughters wrote in the Catholic Herald: “Through all of their suffering and pain the girls continued to love life and to make the most of it…. I would have done anything out of love for them, but I would never have considered euthanasia.”
But this moving anecdote is no argument against the Belgian law. Indeed, under that law the mother’s refusal of euthanasia would be decisive. It is one thing for a parent to say that I do not believe that euthanasia is in my child’s best interests; it is quite another to say that any parent who thinks euthanasia is in their child’s best interests must be wrong.
To understand a moral position it is useful to state the moral principles and the empirical assumptions on which it is based. So I will state mine.
Moral Principles
A mercy killing can be in a person’s best interests.
A person’s competent wishes should have very great weight in what is done to her.
Parents’ views as to what is right for their children should normally be given significant moral weight.
Mercy killing, in the situation where a person is suffering and faces a short life anyway, and where the person is requesting it, can be the right thing to do.
Empirical assumptions
There are some situations in which children with a terminal illness suffer so much that it is in their interests to be dead.
There are some situations in which the child’s suffering cannot be sufficiently alleviated short of keeping the child permanently unconscious.
A law can be formulated with sufficient safeguards to prevent euthanasia from being carried out in situations when it is not justified.
This last empirical claim is the most difficult to assess. Opponents of child euthanasia may believe such safeguards are not possible: that it is better not to risk sliding down the slippery slope. But the ‘slippery slope argument’ is morally problematic: it is an argument against doing the right thing on some occasions (carrying out a mercy killing when that is right) because of the danger of doing the wrong thing on other occasions (carrying out a killing when that is wrong). I prefer to focus on safeguards against slipping. But empirical evidence could lead me to change my views on child euthanasia. My guess is that for many people who are against the new Belgian law, it is the fear of the slippery slope that is ultimately crucial. Much moral disagreement, when carefully considered, comes down to disagreement over facts. Scientific evidence is a key component of moral argument.
The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.
Image credit: Legality of Euthanasia throughout the world, by Jrockley. Public domain via Wikimedia Commons.
First published in Philosophy Now Issue 91, July/Aug 2012.
For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.
But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?
With great power comes great responsibility. However, evolutionary pressures have not developed for us a psychology that enables us to cope with the moral problems our new power creates. Our political and economic systems only exacerbate this. Industrialisation and mechanisation have enabled us to exploit natural resources so efficiently that we have over-stressed two-thirds of the most important eco-systems.
A basic fact about the human condition is that it is easier for us to harm each other than to benefit each other. It is easier for us to kill than it is for us to save a life; easier to injure than to cure. Scientific developments have enhanced our capacity to benefit, but they have enhanced our ability to harm still further. As a result, our power to harm is overwhelming. We are capable of forever putting an end to all higher life on this planet. Our success in learning to manipulate the world around us has left us facing two major threats: climate change – along with the attendant problems caused by increasingly scarce natural resources – and war, using immensely powerful weapons. What is to be done to counter these threats?
Our Natural Moral Psychology
Our sense of morality developed around the imbalance between our capacities to harm and to benefit on the small scale, in groups the size of a small village or a nomadic tribe – no bigger than a hundred and fifty or so people. To take the most basic example, we naturally feel bad when we cause harm to others within our social groups. And commonsense morality links responsibility directly to causation: the more we feel we caused an outcome, the more we feel responsible for it. So causing a harm feels worse than neglecting to create a benefit. The set of rights that we have developed from this basic rule includes rights not to be harmed, but not rights to receive benefits. And we typically extend these rights only to our small group of family and close acquaintances. When we lived in small groups, these rights were sufficient to prevent us harming one another. But in the age of the global society and of weapons with global reach, they cannot protect us well enough.
There are three other aspects of our evolved psychology which have similarly emerged from the imbalance between the ease of harming and the difficulty of benefiting, and which likewise have been protective in the past, but leave us open now to unprecedented risk:
Our vulnerability to harm has left us loss-averse, preferring to protect against losses rather than to seek benefits of a similar level.
We naturally focus on the immediate future, and on our immediate circle of friends. We discount the distant future in making judgements, and can only empathise with a few individuals, based on their proximity or similarity to us rather than, say, on their situations. So our ability to cooperate, applying our notions of fairness and justice, is limited to a small circle of family and friends. Strangers, or out-group members, in contrast, are generally mistrusted, their tragedies downplayed, and their offences magnified.
We feel responsible if we have individually caused a bad outcome, but less responsible if we are part of a large group causing the same outcome and our own actions can’t be singled out.
Case Study: Climate Change and the Tragedy of the Commons
There is a well-known cooperation or coordination problem called ‘the tragedy of the commons’. In its original terms, it asks whether a group of village herdsmen sharing common pasture can trust each other to the extent that it will be rational for each of them to reduce the grazing of their own cattle when necessary to prevent over-grazing. One herdsman alone cannot achieve the necessary saving if the others continue to over-exploit the resource. If they simply use up the resource he has saved, he has lost his own chance to graze but has gained no long term security, so it is not rational for him to self-sacrifice. It is rational for an individual to reduce his own herd’s grazing only if he can trust a sufficient number of other herdsmen to do the same. Consequently, if the herdsmen do not trust each other, most of them will fail to reduce their grazing, with the result that they will all starve.
The tragedy of the commons can serve as a simplified small-scale model of our current environmental problems, which are caused by billions of polluters, each of whom contributes some individually-undetectable amount of carbon dioxide to the atmosphere. Unfortunately, in such a model, the larger the number of participants the more inevitable the tragedy, since the larger the group, the less concern and trust the participants have for one another. Also, it is harder to detect free-riders in a larger group, and humans are prone to free ride, benefiting from the sacrifice of others while refusing to sacrifice themselves. Moreover, individual damage is likely to become imperceptible, preventing effective shaming mechanisms and reducing individual guilt.
Anthropogenic climate change and environmental destruction have additional complicating factors. Although there is a large body of scientific work showing that the human emission of greenhouse gases contributes to global climate change, it is still possible to entertain doubts about the exact scale of the effects we are causing – for example, whether our actions will make the global temperature increase by 2°C or whether it will go higher, even to 4°C – and how harmful such a climate change will be.
In addition, our bias towards the near future leaves us less able to adequately appreciate the graver effects of our actions, as they will occur in the more remote future. The damage we’re responsible for today will probably not begin to bite until the end of the present century. We will not benefit from even drastic action now, and nor will our children. Similarly, although the affluent countries are responsible for the greatest emissions, it is in general destitute countries in the South that will suffer most from their harmful effects (although Australia and the south-west of the United States will also have their fair share of droughts). Our limited and parochial altruism is not strong enough to provide a reason for us to give up our consumerist life-styles for the sake of our distant descendants, or our distant contemporaries in far-away places.
Given the psychological obstacles preventing us from voluntarily dealing with climate change, effective changes would need to be enforced by legislation. However, politicians in democracies are unlikely to propose such legislation. Effective measures will need to be tough, and so are unlikely to win a political leader a second term in office. Can voters be persuaded to sacrifice their own comfort and convenience to protect the interests of people who are not even born yet, or to protect species of animals they have never even heard of? Will democracy ever be able to free itself from powerful industrial interests? Democracy is likely to fail. Developed countries have the technology and wealth to deal with climate change, but we do not have the political will.
If we keep believing that responsibility is directly linked to causation, that we are more responsible for the results of our actions than the results of our omissions, and that if we share responsibility for an outcome with others our individual responsibility is lowered or removed, then we will not be able to solve modern problems like climate change, where each person’s actions contribute imperceptibly but inevitably. If we reject these beliefs, we will see that we in the rich, developed countries are more responsible for the misery occurring in destitute, developing countries than we are spontaneously inclined to think. But will our attitudes change?
Moral Bioenhancement
Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species. We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities. We have provided ourselves with the tools to end worthwhile life on Earth forever. Nuclear war, with the weapons already in existence today could achieve this alone. If we must possess such a formidable power, it should be entrusted only to those who are both morally enlightened and adequately informed.
Objection 1: Too Little, Too Late?
We already have the weapons, and we are already on the path to disastrous climate change, so perhaps there is not enough time for this enhancement to take place. Moral educators have existed within societies across the world for thousands of years – Buddha, Confucius and Socrates, to name only three – yet we still lack the basic ethical skills we need to ensure our own survival is not jeopardised. As for moral bioenhancement, it remains a field in its infancy.
We do not dispute this. The relevant research is in its inception, and there is no guarantee that it will deliver in time, or at all. Our claim is merely that the requisite moral enhancement is theoretically possible – in other words, that we are not biologically or genetically doomed to cause our own destruction – and that we should do what we can to achieve it.
Objection 2: The Bootstrapping Problem
We face an uncomfortable dilemma as we seek out and implement such enhancements: they will have to be developed and selected by the very people who are in need of them, and as with all science, moral bioenhancement technologies will be open to abuse, misuse or even a simple lack of funding or resources.
The risks of misapplying any powerful technology are serious. Good moral reasoning was often overruled in small communities with simple technology, but now failure of morality to guide us could have cataclysmic consequences. A turning point was reached at the middle of the last century with the invention of the atomic bomb. For the first time, continued technological progress was no longer clearly to the overall advantage of humanity. That is not to say we should therefore halt all scientific endeavour. It is possible for humankind to improve morally to the extent that we can use our new and overwhelming powers of action for the better. The very progress of science and technology increases this possibility by promising to supply new instruments of moral enhancement, which could be applied alongside traditional moral education.
Objection 3: Liberal Democracy – a Panacea?
In recent years we have put a lot of faith in the power of democracy. Some have even argued that democracy will bring an ‘end’ to history, in the sense that it will end social and political development by reaching its summit. Surely democratic decision-making, drawing on the best available scientific evidence, will enable government action to avoid the looming threats to our future, without any need for moral enhancement?
In fact, as things stand today, it seems more likely that democracy will bring history to an end in a different sense: through a failure to mitigate human-induced climate change and environmental degradation. This prospect is bad enough, but increasing scarcity of natural resources brings an increased risk of wars, which, with our weapons of mass destruction, makes complete destruction only too plausible.
Sometimes an appeal is made to the so-called ‘jury theorem’ to support the prospect of democracy reaching the right decisions: even if voters are on average only slightly more likely to get a choice right than wrong – suppose each is right 51% of the time – then, given a sufficiently large number of voters, the majority vote is almost certain to pick the right option.
However, if the evolutionary biases we have already mentioned – our parochial altruism and bias towards the near future – influence our attitudes to climatic and environmental policies, then there is good reason to believe that voters are more likely to get it wrong than right. The jury theorem then means it’s almost certain that a majority will opt for the wrong policies! Nor should we take it for granted that the right climatic and environmental policy will always appear in manifestoes. Powerful business interests and mass media control might block effective environmental policy in a market economy.
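To make the arithmetic behind this concrete, here is a minimal sketch (mine, not the authors’) that computes the probability that a majority of independent voters chooses correctly, for individual accuracies of 51% and 49%; the function name and electorate sizes are illustrative assumptions.

```python
# Illustrative sketch of the jury-theorem arithmetic discussed above.
# Assumes independent voters, each correct with probability p; uses SciPy's
# binomial survival function to get P(majority correct).
from scipy.stats import binom

def majority_correct(p, n_voters):
    """P(more than half of n_voters independent voters choose correctly)."""
    k_needed = n_voters // 2 + 1
    return binom.sf(k_needed - 1, n_voters, p)   # P(X >= k_needed)

for p in (0.51, 0.49):
    for n in (101, 1_001, 100_001):
        print(f"individual accuracy {p:.2f}, {n:7d} voters: "
              f"P(majority correct) = {majority_correct(p, n):.3f}")
```

With 100,001 voters the majority is nearly certain to be right when each voter is right 51% of the time, and nearly certain to be wrong at 49%, which is exactly the reversal described above.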
Conclusion
Modern technology provides us with many means to cause our downfall, and our natural moral psychology does not provide us with the means to prevent it. The moral enhancement of humankind is necessary for there to be a way out of this predicament. If we are to avoid catastrophe by misguided employment of our power, we need to be morally motivated to a higher degree (as well as adequately informed about relevant facts). A stronger focus on moral education could go some way to achieving this, but as already remarked, this method has had only modest success during the last couple of millennia. Our growing knowledge of biology, especially genetics and neurobiology, could deliver additional moral enhancement, such as drugs or genetic modifications, or devices to augment moral education.
The development and application of such techniques is risky – it is after all humans in their current morally-inept state who must apply them – but we think that our present situation is so desperate that this course of action must be investigated.
We have radically transformed our social and natural environments by technology, while our moral dispositions have remained virtually unchanged. We must now consider applying technology to our own nature, supporting our efforts to cope with the external environment that we have created.
Biomedical means of moral enhancement may turn out to be no more effective than traditional means of moral education or social reform, but they should not be rejected out of hand. Advances are already being made in this area. However, it is too early to predict how, or even if, any moral bioenhancement scheme will be achieved. Our ambition is not to launch a definitive and detailed solution to climate change or other mega-problems. Perhaps there is no realistic solution. Our ambition at this point is simply to put moral enhancement in general, and moral bioenhancement in particular, on the table. Last century we spent vast amounts of resources increasing our ability to cause great harm. It would be sad if, in this century, we reject opportunities to increase our capacity to create benefits, or at least to prevent such harm.
Julian Savulescu is a Professor of Philosophy at Oxford University and Ingmar Persson is a Professor of Philosophy at the University of Gothenburg. This article is drawn from their book Unfit for the Future: The Urgent Need for Moral Enhancement (Oxford University Press, 2012).
Robert M. Veatch is Professor of Medical Ethics at The Kennedy Institute of Ethics at Georgetown University. He received the career distinguished achievement award from Georgetown University in 2005 and has received honorary doctorates from Creighton and Union College. In his new book, Patient, Heal Thyself: How the “New Medicine” Puts the Patient in Charge, he sheds light on a fundamental change sweeping through the American health care system, a change that puts the patient in charge of treatment to an unprecedented extent. In the original article below, Veatch looks at the recent debate over mammograms.
Controversy has erupted over recommendations of a government-sponsored task force that are widely interpreted as opposing mammography for women ages 40-50 without special risk factors. This reverses an earlier recommendation favoring such screening. In response, a number of critics, including Bernadine Healy, the former head of the National Institutes of Health, and spokespersons for the American Cancer Society and the American College of Radiology, have challenged the recommendation, claiming that cutting out the screening will cost people’s lives. They insist that 40-50 year-olds should still be screened routinely.
Strange as it may seem, both of these positions are wrong. Both the defenders of the task force recommendations and the critics make the mistake of assuming that the data from medical science can tell a person what the correct decision is regarding a medical choice such as breast cancer screening. I am a defender of what I call the “new medicine,” the medicine in which it is up to the patient to make the value choices related to her medical treatment. In principle, decisions such as those addressed by the mammography task force and its critics cannot be derived from the facts alone. Each person must evaluate the possible outcomes based on his or her own beliefs and values. This is true not only for areas of obvious value judgment such as abortion and withdrawing life-support during terminal illness, but literally for every medical choice, no matter how mundane.
In the case of mammography screening for breast cancer, remarkable agreement exists on the medical facts. Mammography catches cancers that cannot be found by other techniques such as breast self-exam. People’s lives are saved by mammography. The problem is that many more lives can be saved by screening older women, in part because the incidence of cancer is greater. The task force expresses the benefit in terms of the number of people who would need to be screened to extend one life. For women 40 to 49, 1904 would have to be screened; for women 50-59, only 1339. Thus the absolute risk reduction from screening is greater for the older women. In an article published in last week’s Annals of Internal Medicine alongside the task force report, the same idea is expressed in terms of percentage reduction in breast cancer deaths from screening compared to no screening. For women
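To see how these number-needed-to-screen figures translate into absolute risk reduction, here is a minimal arithmetic sketch using the task-force numbers quoted above; the variable names are mine and the calculation is illustrative only.

```python
# Number needed to screen (NNS) to avert one breast-cancer death,
# taken from the task-force figures quoted above.
nns_age_40_49 = 1904
nns_age_50_59 = 1339

# The absolute risk reduction (ARR) is simply the reciprocal of the NNS.
arr_40_49 = 1 / nns_age_40_49
arr_50_59 = 1 / nns_age_50_59

print(f"ARR, ages 40-49: {arr_40_49:.5f} (~{arr_40_49 * 100_000:.0f} deaths averted per 100,000 screened)")
print(f"ARR, ages 50-59: {arr_50_59:.5f} (~{arr_50_59 * 100_000:.0f} deaths averted per 100,000 screened)")
```

Both benefits are real but small in absolute terms, which is why Veatch argues that whether screening is worth it turns on each woman’s own values rather than on the facts alone.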
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah’s Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self-determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I had the feeling that a revolution was ongoing in epidemiology many times. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations and companies, millions of people.
One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps one way to explain the main reason to argue that observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
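As a rough illustration of the collider point, the toy simulation below (my own sketch, not drawn from Hernán and Robins) generates data in which obesity raises the risk of T2D but has no effect on mortality, while an unmeasured factor raises both; restricting the analysis to people with T2D then makes obese diabetics appear to survive better. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process (illustrative only):
# obesity and an unmeasured factor (say, a severe metabolic illness)
# both raise the risk of T2D; only the unmeasured factor raises mortality.
obesity = rng.binomial(1, 0.3, n)
unmeasured = rng.binomial(1, 0.2, n)

p_t2d = 0.05 + 0.15 * obesity + 0.30 * unmeasured
t2d = rng.binomial(1, p_t2d)

p_death = 0.02 + 0.25 * unmeasured        # obesity has no direct effect here
death = rng.binomial(1, p_death)

def death_rate(mask):
    return death[mask].mean()

# In the full population, obesity is (by construction) unrelated to mortality.
print(f"all:      obese {death_rate(obesity == 1):.3f}  "
      f"non-obese {death_rate(obesity == 0):.3f}")

# Conditioning on the collider (keeping only people with T2D) makes the obese
# appear to fare better: among diabetics, being non-obese is a marker for the
# unmeasured factor, which is what actually drives mortality.
d = t2d == 1
print(f"T2D only: obese {death_rate(d & (obesity == 1)):.3f}  "
      f"non-obese {death_rate(d & (obesity == 0)):.3f}")
```

The spurious "protective" association appears only in the restricted, T2D-only comparison, which is the structure that the deconstructed obesity paradox is argued to have.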
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much more strict than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper than, and beyond, the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
We are living in exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” created by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance of the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed through recent and newer terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
You may say: “And what about textbooks? Are they changing dramatically? Has any of them changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published Explanation in Causal Inference: Methods for Mediation and Interaction by T. VanderWeele (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!” It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: their clinical relevance is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
Feature image credit: Test tubes by PublicDomainPictures. Public Domain via Pixabay.
The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature: what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh-cut grass, or force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress and improving mood to increasing attentional capacity and creating social bonds. A variety of research supports nature as healing and health promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides many of the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing; it’s experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and for encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and helps me feel the same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up indoors, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyles, we cannot always afford the time and money to step away. Integrating nature into our indoor environments is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can instill in individuals an appreciation of the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
The post Going inside to get a taste of nature appeared first on OUPblog.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful!
I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,— touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
Headline image credit: Green Scream by Matt Coughlin, CC 2.0 via Flickr.
The post A Halloween horror story: What was it? Part 3 appeared first on OUPblog.
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in depth blog posts from the meeting, so make sure to check back often.
We look forward to seeing you all next year at the Annual Meeting in Florida. And special thanks to Margo Shea for sending in her reflections on the meeting and to Jaycie Vos (@jaycie_v) for putting together the Storify.
Headline image credit: Madison, Wisconsin cityscape at night, looking across Lake Monona from Olin Park. Photo by Richard Hurd. CC BY 2.0 via rahimageworks Flickr.
The post Recap of the 2014 OHA Annual Meeting appeared first on OUPblog.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of internal, Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from the three groups we call Active Resisters, Organizational Constipators, and Timeservers. While we know these types of individuals exist in hospitals since we have seen them in action, we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct “sedation vacations” for patients on mechanical breathing machines (pauses in sedation used to assess whether the patient can be weaned from the ventilator sooner), for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires a different style of engagement for each group.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph that shows a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Efforts to fire them are often difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found that the key to a positive result is a culture of excellence, in which the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
The post What will it take to reduce infections in the hospital? appeared first on OUPblog.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism), intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn in to the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club, or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution, or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive, or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.
The post The chimera of anti-politics appeared first on OUPblog.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
The post Biologists that changed the world appeared first on OUPblog.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon -- the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live).
The post Battels and subfusc: the language of Oxford appeared first on OUPblog.
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.
One creative approach we developed is narrative genomics, which uses drama with provocative characters and dialogue as an interdisciplinary pedagogical method to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on the inherent challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next-generation technologies, and illuminates bioethical issues, providing a stage on which to reflect on the controversies together and to temper the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelope genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
Building upon the legacy of using case vignettes as a clinical teaching modality, and inspired by “readers’ theater,” “narrative medicine,” and “narrative ethics” as approaches that helped us expand the analyses to the implications of genomic technologies, our experience suggests similar value for bioethics education within the translational research and public policy domains. While drama has often been utilized in academic and medical settings to facilitate empathy and spotlight ethical and legal controversies such as end-of-life issues and health law, to date there appear to be few dramatizations focusing on next-generation sequencing (NGS) in genomic research and medicine.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as to provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach when bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
The post Illuminating the drama of DNA: creating a stage for inquiry appeared first on OUPblog.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of the most hot button topics today.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
Featured image credit: Grads with diplomas by Saint Louis University Plus Memorial Library. CC BY-NC-SA 2.0 via Flickr.
The post Is American higher education in crisis? appeared first on OUPblog.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr
The post Efficient causation: Our debt to Aristotle and Hume appeared first on OUPblog.
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the nineteenth century thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them: penny and Pfennig, pepper and Pfeffer, pound and Pfund, path and Pfad, plough and Pflug.
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
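As a toy illustration of this peeling-back, here is a short Python sketch. The cognate pairs are real, but the single rule shown, undoing the German shift of word-initial p to pf, is deliberately simplified, and the resulting strings are not the actual reconstructed proto-forms.

```python
# Toy sketch: "peel back" one sound change, the German shift of
# word-initial p to pf, to recover forms closer to the common ancestor.
pairs = {            # modern German : English cognate
    "Pfennig": "penny",
    "Pfeffer": "pepper",
    "Pfund":   "pound",
    "Pflug":   "plough",
}

def undo_initial_pf(word: str) -> str:
    """Reverse the change p > pf at the beginning of a word."""
    lower = word.lower()
    return "p" + lower[2:] if lower.startswith("pf") else lower

for german, english in pairs.items():
    print(f"{german:8} -> {undo_initial_pf(german):8} (cf. English {english})")
```

Undo enough such changes across all the daughter languages, and the point where their forms converge is the reconstructed proto-form.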
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed protolanguages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that there is nothing quite like the regularity of sound change in syntax: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian or Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
The post Linguistic necromancy: a guide for the uninitiated appeared first on OUPblog.
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief—only 2-4 pages and no longer than 1500-3000 words.
Why is Political Analysis taking this new direction, looking for shorter submissions?
Political Analysis is taking this new direction to publish important results that do not traditionally fit the longer format of journal articles that is currently the standard in the social sciences, but that fit well with the shorter format often used in the sciences to convey important new findings. In this regard the role models for Political Analysis Letters are the similar formats used in top general-interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers will also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provide us with an opportunity to create an outlet for important work in Political Methodology. They also give us the chance to develop a new format that we hope will enhance the quality and speed of academic debate in the social sciences.
Headline image credit: Letters, CC0 via Pixabay.
The post Political Analysis Letters: a new way to publish innovative research appeared first on OUPblog.
Checking the website for the Audio Engineering Society (AES) convention in Los Angeles, I took note of the slides promoting the event. Each heading was framed as follows: If it’s about ____________, it’s at AES. The slide show contained nine headings representing areas featured at the upcoming convention (in no particular order, since the show begins at whatever point you happen to land on when you visit the site).
The list was interesting to me on many levels, but one significant one that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three represent diminished opportunities in music recording, while one is a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services capture more of the consumer market, they continue to diminish artists’ ability to earn a decent living (or to pay an accomplished audio engineer). A friend of mine with three CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17 (roughly three-hundredths of a cent per stream). CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing,” which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall restricting audio jobs by shrinking the earning capacity of recording artists.
Project studios now dominate most recording projects outside of reasonably well-funded major label records, and even much of that major label work is done in project studios (though those might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up through the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. And of course, almost no project studios offer regular, dependable work or any kind of benefits.
Networked audio systems provide new, faster, and more elaborate audio connectivity using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks, there is, once again, a shrinking of opportunities for those aspiring to make commercial music recordings. In many instances these networking systems allow fewer people to do more, a boon only to the small number of audio engineers working with music recordings who can now make remote recordings without having to be present, and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is Game Sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and of higher quality, and they feature more prominent and talented composers and musicians than virtually any other area of music production. The only reservation is that the music is, of course, secondary to the game play, and the preponderance of violent video games favors musical styles that fit a violent atmosphere. However, this is changing as a much broader array of game types achieves new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
The post 2014 AES Convention: shrinking opportunities in music audio appeared first on OUPblog.
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and the Wellcome Trust, amongst others, have found, author interests don’t always align exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, and so on, not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act passed by the US legislature, it became mandatory for all NIH-funded authors to make their works publicly available no later than 12 months after publication. This was transformative in two ways: it meant thousands of articles from NIH-funded research became available through PubMed Central (PMC), and, perhaps just as importantly, it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher, and since its inception in 2000 it was closely watched in the industry to see whether it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and to all OA-watchers, that the upstart business model was proving to be more than just an interesting sideline. It also reflected the big players in the industry starting to take OA very seriously, and it has been followed by other moves – for example Nature Publishing Group’s investment in Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC it has also massively increased its fully OA publishing. It seems bizarre to think that back in 2008 some even feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Top journals have typically had their niche and been selective: a journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review asks only whether the basic science is sound, rather than judging novelty or perceived importance. This enabled PLOS ONE to rapidly become the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be attributed solely to its OA nature, but it was being OA that enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal: the price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals, and more than any other title it has energised the development of OA, dispelling previously held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and to the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through the 2014 Higher Education Funding Council for England (HEFCE) policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers, as it directly ties the assessment of a university’s funding to its success in ensuring that its authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
Headline image credit: Storm of Stars in the Trifid Nebula. NASA/JPL-Caltech/UCLA
The post Five key moments in the Open Access movement in the last ten years appeared first on OUPblog.
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine – for example epilepsy, “the sacred disease” (Hippocrates), or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law, and medicine, conveyed in the cuneiform script impressed into clay tablets with reeds – the earliest form of writing, which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire, cuneiform writing was replaced by Aramaic and simpler alphabetic scripts, and it was only deciphered by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts held in museums in London, Paris, Berlin, and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression, and anxiety. For example, they described most of the common seizure types we know today (e.g. tonic-clonic, absence, focal motor), as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion), and even a comprehensive account of the schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of the brain, spinal cord, or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders (e.g. stroke or facial palsy) had a physical basis requiring the attention of the physician, or asû, who used a plant- and mineral-based pharmacology. Most disorders, such as epilepsy, psychoses, and depression, were regarded as supernatural – due to evil demons and spirits, or the anger of personal gods – and thus required the intervention of the priest, or ašipu. Other disorders, such as OCD, phobias, and psychopathic behaviour, were viewed as a mystery yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing feature of these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. Such subjective phenomena only became a field of description and enquiry relatively recently, in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness” – a concept that only became the province of a professional medical discipline, psychiatry, in the last 200 years.
The post Neurology and psychiatry in Babylon appeared first on OUPblog.
The 2014 International Law Weekend Annual Meeting is taking place this month at Fordham Law School, in New York City (24-25 October 2014).
The theme of this year’s meeting is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combatting corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
Among this year’s conference highlights, we are excited to see a number of OUP authors sitting on panels, including: Cesare Romano, editor of The Oxford Handbook of International Adjudication (with Karen J. Alter and Yuval Shany); Ryan Goodman, author of the ASIL award-winning book Socializing States: Promoting Human Rights through International Law (with Derek Jinks); August Reinisch, editor of The Privileges and Immunities of International Organizations in Domestic Courts; Jose E. Alvarez, author of The Evolving International Investment Regime (with Karl P. Sauvant); Ruti G. Teitel, author of Globalizing Transitional Justice: Contemporary Essays; Daniel H. Joyner, author of Interpreting the Nuclear Non-Proliferation Treaty; and Philip Alston, author of International Human Rights (with Ryan Goodman), to name a few.
For the full International Law Weekend 2014 schedule of events, visit the ILSA and ABILA websites.
Fordham Law School is located in the wonderful Lincoln Square neighborhood of New York, just around the corner from plenty of great activities for after the conference.
Of course, we hope to see you at the Oxford University Press booth. We’ll be offering the chance to browse and buy our new and bestselling titles on display at a 20% conference discount, discover what’s new in Oxford Law Online, and pick up sample copies of our latest law journals.
To follow the latest updates about the ILW Conference as it happens, follow us on Twitter at @OUPIntLaw and the hashtag #ILW2014.
See you there!
Headline image credit: 2011, 62nd St by Cornerstones of New York, CC BY-NC 2.0 via Flickr.
The post Preparing for the International Law Weekend 2014 appeared first on OUPblog.
As an Africanist historian committed to reaching broader publics, I was thrilled when the research team for the BBC’s genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved the subject of some of my research, mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about shifting practices and perceptions of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but he discovered along the way that his great grandfather was a British mining accountant who worked in the Gold Coast for close to a decade. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father — Reggie’s great-great grandfather — was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious that frequently appeared in the African-owned Gold Coast press during these years. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. The latter helps to account for why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my American Historical Review article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, albeit imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection, and of categorical uncertainty surrounding these relationships, is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is: how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love”? By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. His insight surely reverberates throughout the episode.
All images courtesy of Carina Ray.
The post Race, sex, and colonialism appeared first on OUPblog.
Voting for the 2014 Atlas Place of the Year is now underway. However, you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the locations around the world that make us go “wow”. For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December. What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
Heading image: Old Globe by Petar Milošević. CC-BY-SA-3.0 via Wikimedia Commons.
The post Place of the Year 2014: behind the longlist appeared first on OUPblog.