A small group of scientists and scholars sat around a coffee table recently, balancing lunches on laps while discussing the prospects of greatly extending human life using new genetics tools and nanotechnologies. The group included a Johns Hopkins University cancer biologist, a Yale University philosopher, the executive director of an Oakland, Calif.-based advocacy group focusing on genetics and society, a Washington lobbyist, and various others.
The talk, about the societal implications of life extension, was occasionally difficult—and not just because of the divergent views. The discussion was taking place under the 105-foot dome of what once was the main reading room of Columbia University’s Low Library (now an administration building). The cavernous space, designed precisely to quiet stray noise, made conversations across the coffee table difficult.
Moreover, nearly a dozen similar groups were scattered about the floor and adding to the general susurration as they discussed topics such as the alienation of scientists from policy or the need for science literacy in policymaking. The occasion was an unusual conference, Living with the Genie: Governing scientific and technological transformation in the 21st century (www.livingwiththegenie.org). Cosponsored by Columbia’s Washington, DC-based Center for Science, Policy & Outcomes (CSPO) and the Funders’ Working Group on Emerging Technologies, the March 5-7 meeting was not intended to lead to answers to difficult policy questions.
The goal, instead, was something almost as difficult to achieve, according to the organizers: a place for scientists, technologists, sociologists, activists, and artists—among others—to talk about where science and technology are heading and how to incorporate societal values into science and technology policy.
As new, more powerful technologies of genetic manipulation, reproductive enhancement, artificial intelligence, and miniaturization develop at accelerating rates, who is monitoring them? Who is making decisions about which technologies to develop, what directions to take them in, and how to protect people and the environment from use and abuse of these technologies? And how can the fruits of science and technology be distributed more equitably, rather than widening—or appearing to widen—the gulf between the haves and the have-nots?
These are not the sorts of questions scientists at the bench or in the field often ask themselves. In fact, some scientists and scholars of the scientific process say that engaging in such speculations might be counterproductive for scientists, either because it detracts from the time and effort devoted to one’s research or because it might signal to supervisors and colleagues a lack of single-minded devotion. “We were trying to create a safe space for a discourse about science and human aspirations and do it in a way not seen as a threat by scientists to their autonomy,” says Dan Sarewitz, director of CSPO.
“How do I know we need a safe place” to discuss these issues, asked Michael Crow, founder of CSPO and executive vice provost at Columbia, in opening the conference. He teaches a course on science and technology policy and politics, which attracts both policy students and science and engineering students. Every year, he told the audience, “some of the science and engineering students tell me, ‘Don’t let my professor know that I’m in this class.’ They are afraid they’ll be perceived by the professor as not being viable scientists or engineers in training.”
One unusual aspect of the conference was the absence of canned talks. Instead, there were six panel discussions, each with five or six people on the stage representing diverse professions, viewpoints, and nationalities. In between the panel discussions, audience members gathered for small, informal “Participant Designed Dialogues” such as the discussion about life extension. A few broad themes emerged from the panels:
The benefits of science and technology are not equitably distributed.
Many scientists and technologists want to discuss the ethical and moral implications of their research but may fear backlash if they question the direction of research.
The potential for emerging technologies to affect human nature and possibly even threaten human existence reinforces the cliché that science and technology are too important to leave to the scientists and technologists.
Technologies developed for one purpose often find other uses, a phenomenon often called “function creep.”
Present institutions and processes for governing science and technology may not be up to the challenges posed by technologies that, because they encompass self-replication, prompted computer scientist Bill Joy to write “the future doesn’t need us” in a provocative 2000 Wired article (www.wired.com/wired/archive/8.04/joy_pr.html).
Citizens of the United States and other Western countries enjoy a high standard of living partly as a result of science and technology. Meanwhile, citizens in the developing world may, as Indian anthropologist Shiv Visvanathan told the audience, “look at science as organized evil … as organized violence.” He cited the displacement of millions of Indians as a result of dams. Moreover, while affluent Westerners may choose from a growing array of shrinking portable music boxes, some 2 billion people around the world lack clean drinking water.
How does this happen? Choices about which science and which technologies to pursue are not always made with goals and values in mind. With two-thirds of research and development in the United States now done in the private sector, profit often drives the direction of research. Philip Kitcher, a Columbia philosopher of science, says the way scientific research gets done “is completely anarchic. Scientists shout with different voices. Some don’t get heard; some get heard a lot.” The result of this adversarial process, he says, is that some people can describe the results of research as evil.
Several scientists described how their choice of research field at least partly reflects their ethical values. Eva Harris, of the University of California, Berkeley, ties basic research on dengue fever to work on public health in Central America. “I felt I should work to make the world a better place,” she said. Of course, as Carol Greider, a cancer biologist at Johns Hopkins, put it during the same panel, such an outlook was a “given” at the meeting, “but it’s not a given outside this room.” Among scientists broadly, said Greider, curiosity-driven research is still the paradigm. “The value of thinking of the consequences of research really isn’t embedded in the culture of scientists,” she said.
Kitcher, in addressing the question of what science should be done, wondered how often pure curiosity-driven research leads to practical breakthroughs. “We don’t know how successful a strategy it is to allow people to follow their interests,” he said.
The key question for many at the meeting was how best to align research and development with the public good. Many participants, especially from developing countries, felt that financial considerations play too large a role in setting the research agenda now. At the same time, more government regulation of science and technology was not widely seen as helpful. Yet, say several who attended the meeting, existing institutions and processes in the United States and elsewhere for governing science and technology are not up to the task of simultaneously limiting the risks of emerging technologies and encouraging their use for societal goals.
Radford Byerly, a visiting scholar at the Center for Science and Technology Policy Research at the University of Colorado, who helped organize the meeting, calls for “scientists to develop more of a social consciousness and regulate themselves. Now, scientists govern themselves through peer review. They should broaden the scope of the review and think about outcomes.”
Outcomes are on the mind of some who would like to see certain kinds of technology more tightly controlled, and even banned. Bill Joy, chief scientist at Sun Microsystems, examined in his Wired article the potential perils of unfettered development of robotics, genetic engineering, and nanotechnology. He wrote that some of the threats posed by these technologies might be grounds for “relinquishing” their continued development. “[C]ertain knowledge is too dangerous and is best forgone,” he wrote.
That view did not get much traction at the conference. Independent entrepreneur and technologist Ray Kurzweil (www.kurzweilai.net) told the crowd, “The dangerous technologies are the same as the beneficial ones.” Elaborating in a later telephone interview on why it would be wrong to ban certain technologies, he says, “The economic imperatives are too strong. Any company that stops technological development would go out of business. There’s also a moral imperative.” There is suffering in the world from cancer, other diseases, and poverty, he says, and future technology will provide the means to alleviate some of it. To minimize the potential for catastrophe, Kurzweil says, defensive technologies must be developed, just as defenses against self-replicating computer viruses have prevented widespread computer-network meltdowns.
Jeff Holmes, a biomedical engineer at Columbia who attended the conference, says in many cases it may be more effective for society to “focus our resources on the things we want and pay for technologies we think are important, rather than trying to outlaw a technology we don’t like.” Sarewitz and the other organizers would like nothing better than if Holmes and others in attendance continued the discussions that began under the lofty dome at Columbia in their own laboratories, offices, and classrooms.
Billy Goodman ([email protected]) is a freelance science writer in Montclair, NJ.