They named the lab Livly. By the summer of 2010 Livly had received a round of seed funding from ImmunePath, a Silicon Valley start-up aimed at curing cancer with stem cell immunotherapy. Gentry, meanwhile, inspired by the DIYbio movement, investigated the idea of creating a community lab in the San Francisco Bay Area. The idea was to open the lab to interested citizen-scientists who, for a monthly membership fee of about $200, could perform their own molecular biology experiments, everything from DNA sequencing to the rite-of-passage project of inserting green fluorescent protein genes into bacteria. If successful, it would be a case of wresting science from its conventional setting in industry and academia and returning it to something like the days of the gentleman-scientist-scholar who was not formally connected with any established organization: people such as Newton, Darwin, Alfred Russel Wallace, and Benjamin Franklin.
Gentry and two partners came up with the name BioCurious (biocurious.org) and started a fund drive on the website Kickstarter.com (“A New Way to Fund and Follow Creativity”), where inventors, entrepreneurs, and dreamers of every stripe could post their wild schemes and pet projects and ask for money to fund them. BioCurious announced an initial goal of $30,000. The partners were soon oversubscribed, almost overwhelmed, with 239 backers pledging $35,319. In the fall of 2010 Gentry and her partners were looking to lease 3,000 square feet of industrial space in Mountain View, but in the end settled for a 2,400-square-foot space in Sunnyvale, calling it “Your Bay Area hackerspace for biotech.” In December 2010, meanwhile, another DIY biohacker lab, Genspace, opened in Brooklyn, New York. The founders referred to it as “the world’s first permanent, biosafety level 1 community laboratory” (genspace.org). Many others soon followed, in the United States, Canada, Europe, and Asia.
With free synthetic biology kits, DIYbio, Livly lab, BioCurious, Genspace, and others, the synthetic biology genie was well and truly out of the bottle.
That did not mean we were headed for some sort of synthetic biology holocaust, Armageddon, or meltdown, however. For one thing, despite efforts by iGEMites to make biological engineering “easy,” it is still reasonably difficult to design and implement biological circuitry that actually works (much less works reliably and exactly as planned). Biological systems are complex, “noisy,” and susceptible to mutations, evolution, and emergent behaviors. For these reasons their operations are full of surprises. A random change to any given genome is more likely to weaken the organism than to strengthen it, and the same is often true of changes that have been carefully designed and engineered in advance.
For another, in parallel with the development of the biohacker, DIYbio, and garage biology movements (and in fact slightly predating them), many of those doing hands-on synthetic biology work had written white papers, position papers, and opinion pieces, and had participated in conferences, study groups, and other organized attempts aimed at dealing with the risks and dangers posed by engineering life. In 2004, for example, I wrote “A Synthetic Biohazard Nonproliferation Proposal.” Here I advanced two main ideas for enhancing the safety and security of synthetic biology research. The first was that the sale and maintenance of oligonucleotide (DNA) synthesis machines and supplies would be restricted to government-licensed nonprofit, government, and for-profit institutions; in addition, the use of reagents and oligos would be tracked, much after the manner of nuclear materials. Second, orders placed with commercial oligo suppliers would be screened for similarity to known infectious agents. Unpopular back then, that second practice has since been adopted by many suppliers.
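How such sequence screening might work can be sketched in a few lines of Python. This is a toy illustration only: the sequences, k-mer length, and match threshold are invented, and real screening services compare orders against curated select-agent databases using alignment tools rather than exact k-mer matching.

```python
# Toy sketch of oligo-order screening: flag an order if a large fraction of its
# k-mers also occur in a known pathogen reference sequence.
# All sequences, the k-mer length, and the threshold below are hypothetical.

def kmers(seq, k=12):
    """Return the set of all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(oligo, reference_db, k=12, threshold=0.5):
    """Return (agent name, overlap fraction) for each reference sequence that
    shares at least `threshold` of the ordered oligo's k-mers."""
    query = kmers(oligo, k)
    if not query:                       # order too short to screen at this k
        return []
    hits = []
    for name, ref in reference_db.items():
        overlap = len(query & kmers(ref, k)) / len(query)
        if overlap >= threshold:
            hits.append((name, overlap))
    return hits

if __name__ == "__main__":
    # Entirely made-up stand-in for a curated pathogen sequence database.
    reference_db = {"hypothetical_agent": "ATGGCTAGCTTAGGCTTACCGGTTAACCGGATCC"}
    order = "GCTAGCTTAGGCTTACCGGTTAACC"       # heavily overlaps the reference
    print(screen_order(order, reference_db))  # [('hypothetical_agent', 1.0)]
```

Exact k-mer matching is used here only for brevity; a practical screen would need fuzzier matching to catch near-identical or deliberately fragmented sequences.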
Admittedly, both measures had limitations. Restricting the sale of DNA synthesis machines to licensed or “legitimate” users would apply only to future sales and would do little to restrict the use of the machines that were already out there. It is also an open question how stringently such licensing could be enforced. Further, the proposal to screen orders placed with DNA synthesis companies would not affect firms that, for proprietary, competitive, or other reasons, did their DNA synthesis in-house with their own equipment, unless those instruments fell under the same regulations.
Still, further measures to keep people safe and to keep engineered organisms under control have been proposed by myself and others. One is that synthetic biology research be done in physical isolation, employing the same safeguards that are routinely employed in biosafety labs: safety cabinets, protective clothing and gloves, plus face protection. (But as we have seen, accidents happen even in biosafety labs.) Another is to build one or more self-destruct mechanisms into engineered organisms so that they would die if they escaped from a lab. They could be made dependent on nutrients found only in a laboratory setting, or they could be programmed with suicide genes that would kill the organism after a certain predetermined number of replications, or in response to high population density, or even in response to an externally generated chemical signal. Farther into the future, you could also base the organism on a genetic code different from the one used by natural organisms. Such a code change, I have argued, or the introduction of “mirror life” (see Chapter 1), would mean that the engineered organisms would not be recognizable to natural organisms, and therefore would be unable to exchange genetic material with them.
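The self-destruct idea can be made concrete with a toy population model. This is a simulation sketch, not a genetic design: the division limit, density threshold, and generation counts below are arbitrary illustrative numbers, standing in for the replication-counting kill switch and density-dependent suicide trigger described above.

```python
# Toy model of an escaped engineered population carrying two containment triggers:
# (1) a replication counter that kills a lineage after MAX_DIVISIONS divisions, and
# (2) a density-dependent suicide signal that fires above DENSITY_LIMIT cells.
# All numbers are arbitrary illustrative values.

MAX_DIVISIONS = 12      # kill switch fires after this many replications
DENSITY_LIMIT = 500     # suicide genes trigger above this population size

def step(population):
    """Advance one generation; each entry is a cell's division count so far."""
    next_gen = []
    for divisions in population:
        if divisions >= MAX_DIVISIONS:
            continue                                      # counter expired: cell dies
        next_gen.extend([divisions + 1, divisions + 1])   # both daughters inherit the count
    if len(next_gen) > DENSITY_LIMIT:
        return []                                         # density trigger: population self-destructs
    return next_gen

population = [0]                                          # a single escaped cell
for generation in range(1, 13):
    population = step(population)
    print(generation, len(population))
# The population doubles each generation (2, 4, 8, ...) until generation 9,
# when it would exceed 500 cells and the density trigger drives it to zero;
# with a lower MAX_DIVISIONS the replication counter would fire first.
```

Real kill switches are, of course, leakier than this: mutations can disable a suicide gene, which is one reason such designs are usually layered with redundant safeguards like dependence on lab-only nutrients.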
Further safety measures have been proposed by others. In their 2006 piece, “The Promise and Perils of Synthetic Biology,” Jonathan Tucker and Raymond Zilinskas suggested that before a genetically engineered organism was released to the wild, it first be tested in a microcosm, a model ecosystem ranging in size from a few milliliters to fifteen cubic meters (about the size of a one-car garage), and then in a “mesocosm,” a model system larger than fifteen cubic meters. “Ideally,” they wrote, “a model ecosystem should be sufficiently realistic and reproducible to serve as a bridge between the laboratory and a field test in the open environment.”
Synthetic biology needs a suite of safety measures comparable to those that we now have for cars. Modern cars represent powerful technology, and so we require licensing, surveillance, safety design, and safety testing of both the cars and their drivers. Despite the complex technology of today’s automobiles and traffic management, the average citizen is expected to be able to pass a written and practical licensing exam. In addition, we enact numerous surveillance procedures, including radar monitoring of speed, license plate photos, timing vehicles between tolls, annual inspections, visual checks for erratic behavior, weighing trucks, and checking registration papers. The cars are designed for safety, including anti-hydroplaning tires; antilock brake systems; seat belts; shoulder harnesses; front, side, and supplemental air bags; and so on. In addition, crash safety is tested with actual cars and synthetic humans: originally cadavers, and later increasingly realistic crash dummies. But even with all these systems, checks, devices, and procedures in place, there are still about 40,000 automotive deaths annually in the United States alone.
We could similarly require licensing for all aspects and users, even DIYbio; computer-aided surveillance of all commerce; cells designed to self-destruct outside of the lab; and rigorous testing of what would happen if a cell escaped from the lab, by bringing the outside ecosystem into the lab under secure physical containment.
Regulations, however, can be circumvented by anyone who is sufficiently determined to evade them. In other words, security is far more difficult to achieve than safety. This point was made repeatedly by the authors of the 2007 report, Synthetic Genomics: Options for Governance. The document was the fruit of an exhaustive two-year study funded by the Alfred P. Sloan Foundation. The study involved eighteen core members (including Drew Endy, Tom Knight, Craig Venter, and myself) and three institutes: the J. Craig Venter Institute, MIT’s Department of Biological Engineering, and the Center for Strategic and International Studies.
Our final report advanced many policy options along the lines of those already mentioned. We made no bones about the fact that their “security benefits would be modest because no such regime could have high confidence in preventing illegitimate synthesis.” DNA synthesizers, after all, were relatively small (desktop-size) machines, easy to acquire and hide from view. Even if the registration of synthesizers were legally required, the policy would be difficult to enforce because it would be virtually impossible to ensure that all existing machines had been identified and incorporated into the registry. Furthermore, DNA synthesis machines can be built from scratch, can be stolen, and can be misused at “legitimate” institutions by someone posing as benign and genuine while nevertheless engaging in illicit activity (the Bruce Ivins paradigm).
The group tackling options for governance considered the difficulty of synthesizing several pathogenic viruses, including the 1918 influenza virus, poliovirus, Marburg and Ebola viruses, and foot-and-mouth disease virus. (Foot-and-mouth disease affects only certain hoofed animals such as sheep and cattle, but it is highly contagious and could trigger the wholesale loss of herds that in turn would entail carcass removal and decontamination costs. An outbreak would destroy consumer confidence, cripple the economy, and provoke trade embargoes.) Of these viruses, we classified two, poliovirus and foot-and-mouth disease virus, as easy to synthesize by “someone with knowledge of and experience in virology and molecular biology and an equipped lab but not necessarily with advanced experience (‘difficulty’ includes obtaining the nucleic acid and making the nucleic acid infectious).”
In the end, we found no magic bullets for absolutely preventing worst-case scenarios, no fail-safe fail-safes, but in my opinion the measures we proposed are worth implementing anyway, since their costs are low, the risks are high, and their effectiveness would be measurable and subject to improvement.
To be on the safe side, then, why not prohibit the entire enterprise, or at least the riskiest parts of it? Given the amount of information, machinery, and engineered organisms that already exist in the world, total prohibition would be unrealistic even if it were desirable, which it is not: synthetic genomics offers too many benefits in comparison to its risks. And there are powerful arguments against prohibiting even a subset of experiments or research directions that might be considered relatively dangerous. The first is that prohibitions mostly don’t work; instead, they merely drive the prohibited activity underground and create black markets, or clandestine labs and lab work that are more difficult to monitor and control than open markets and open laboratories. The second is that they also produce a raft of adverse unintended consequences, many of them foreseeable.
The classic case, of course, is Prohibition, which took effect in 1920 under the Eighteenth Amendment, outlawing “the manufacture, sale, or transportation of intoxicating liquors” within the United States. The law stopped none of that activity and instead created a huge network of illegal alcohol production, distribution, and transportation, including massive smuggling across the Canadian border. At one point there were 20,000 speakeasies in Detroit alone (one for every thirty adults). Millions of formerly law-abiding citizens suddenly became habitual lawbreakers. Many drinkers were poisoned by poorly made bootlegged liquor. The net effect, in sum, was to increase crime, violence, and death.
The other major prohibition story of our time is the war on drugs, which has created a cottage industry of illegal drug production, transportation, distribution, and sale in the United States and abroad. It has also fostered ingenious methods of drug smuggling, including the use of false-paneled pickup trucks, vans, and tractor trailers, and the building of air-conditioned tunnels under the US-Mexico border. But the ultimate high-tech drug-running innovation was the home-built submarine (called a narco sub in the trade), used to move large amounts of cocaine underwater. In July 2010 Ecuadorean police discovered a so-called supersub in the jungle: a seventy-four-foot (23-meter), camouflage-painted submarine, almost twice as long as a city bus, equipped with diesel engines, battery-powered electric motors, and twin propellers, and topped by a five-foot conning tower. With a crew of four, the sub had a range of 6,800 nautical miles, and in its cargo hold it could carry nine tons of cocaine, worth a total of about $250 million. Jay Bergman, the top US Drug Enforcement Administration official in South America, called the sub “a quantum leap in technology” and warned that “it poses some formidable challenges.”
Still, it is possible to outlaw entire technologies. In 2006 Kevin Kelly, the former editor of Wired magazine, did a study of the effectiveness of technology prohibitions across the last thousand years, beginning in the year 1000. During this period governments had banned numerous technologies and inventions, including crossbows, guns, mines, nuclear bombs, electricity, automobiles, large sailing ships, bathtubs, blood transfusions, vaccines, television, computers, and the Internet. Kelly found that few technology prohibitions had any staying power and that in general, the more recent the prohibition, the shorter its duration.
Figure (Epilogue): Kevin Kelly’s chart of the duration of a technology prohibition plotted against the year in which it was imposed.
“Prohibitions are in effect postponements,” Kelly concluded. “You might be able to delay the arrival of a specific version of technology, but not stop it. If we take a global view of technology, prohibition seems very ephemeral. While it may be banned in one place, it will thrive in another. In a global marketplace, nothing is eliminated. Where a technology is banned locally, it later revives to global levels.”
Even if a technology is banned globally, as for example under the terms of a legally binding international treaty, the ban will not necessarily stop its development. In 1972 seventy-nine nations signed the Convention on the Prohibition of the Development, Production, and Stockpiling of Bacteriological and Toxin Weapons, more popularly known as the Biological Weapons Convention (BWC). But in 1996, more than twenty years after the convention came into force, US intelligence sources claimed that twice as many countries possessed or were actively developing biological weapons as when the treaty was signed. Most of the violators, including the Soviet Union, India, Bulgaria, China, Iran, Cuba, Vietnam, and Laos, were themselves signatories to the convention.
The moral of the story is that prohibitions are generally ineffective and counterproductive, and have negative unintended consequences. There is no reason to think that a prohibition would halt the development of synthetic genomics, although it might slow down the pace of progress—and the potential benefits that unimpeded progress would have brought.
In general, concerns about a new technology arise mainly during the transition to it. The year 2010 marked the 200th anniversary of the Luddite response to the industrial revolution. It was also the thirty-fifth anniversary of the recombinant DNA moratorium. Ten years earlier a few deaths in the first gene therapy trials drastically reduced funding for this promising new field. But the industrial revolution that the Luddites tried to prevent in 1811 has brought us enormous benefits.
Travel speeds could be considered one of our first transitional human traits—going from natural long-distance rates of 10 km/hr to 1,000 km/hr in passenger jets, to 26,720 km/hr in orbit. Other technologies have radically increased our ability to sense and interact with the universe around us.
                        Past                       Current
Input
  Visible light         4 to 7 × 10⁻⁷ meters       10⁻¹² to 10⁶ meters
  Hearing               10 to 20,000 Hz            10⁻⁹ to 10¹² Hz
  Chemosenses           5 tastes, 1,000 smells     Millions of compounds
  Touch                 3,000 nm                   0.1 nm
  Heat sensing          200 to 400 K               3 to 10⁵ K
Midput
  Memory span           20 years                   5,000 years
  Memory content        10⁹ bits                   10¹⁷ bits
  Cell therapy          0                          Many tissues
  Heat tolerance        270 to 370 K               3 to 10³ K
Output
  Locomotion            50 km/h                    26,720 km/h
  Ocean depth           75 meters                  10,912 meters
  Altitude              8 × 10³ meters             3 × 10⁹ meters
  Voice                 300 to 3,500 Hz            10 to 20,000 Hz
Our technologies have already given us more than a few transhuman qualities, and the trend toward transhumanism is only likely to accelerate in the future. Kurzweil’s Law of Accelerating Returns notes that the progress of certain technologies is not linear but exponential (see Figure 7.1), and Kurzweil himself holds that future technological change will be so rapid and profound that it will constitute “a rupture in the fabric of human history.”
Of course a healthy dose of scientific skepticism is always prudent in the face of such extrapolations. For example, an event or series of events may occur that suddenly derails the rate of accelerating returns. The antipredictionist Nassim Nicholas Taleb, in his book The Black Swan, presents a vivid example of how a prolonged and steady trend may come to an abrupt halt unexpectedly: “Consider a turkey that is fed every single day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by members of the human race . . . On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.”
On a more scientific level, physicist Stephen Wolfram has claimed that certain systems are so complex that it’s inherently impossible to predict their future behavior reliably by any means. He regards such systems as “computationally irreducible.” It can be argued that the body of technologies we have today forms a complex system that is computationally irreducible, especially given the fact that these technologies are products of millions of human minds, free agents, and innumerable decisions. If that complex, dynamic network is in fact computationally irreducible, then in Wolfram’s words, “there can be no way to predict how the system will behave except by going through almost as many steps of computation as the evolution of the system itself.”
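Wolfram’s stock illustrations of computational irreducibility are simple one-dimensional cellular automata such as Rule 30: the update rule is a trivial eight-entry lookup table, yet there is no known shortcut for predicting the pattern far downstream other than running it step by step. A minimal sketch follows (the grid width and number of steps are chosen arbitrarily for display):

```python
# Rule 30, a one-dimensional cellular automaton often used by Wolfram to
# illustrate computational irreducibility: the next state of each cell depends
# only on its three-cell neighborhood, yet no known method predicts row N
# without computing the rows before it.

RULE = 30  # the 8-bit lookup table for all neighborhoods, encoded as an integer

def next_row(row):
    """Apply Rule 30 to one row of 0/1 cells (cells beyond the edges count as 0)."""
    padded = [0] + row + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Start from a single live cell and simply run the automaton forward;
# per Wolfram, this step-by-step evolution is essentially the only route
# to the later rows.
width, steps = 31, 15
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = next_row(row)
```

The analogy goes only so far: whether the global web of technologies is computationally irreducible in Wolfram’s strict sense remains an open claim, but the example shows why, for some systems, simulation rather than prediction may be the best one can do.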