
Adrian Woolfson and the Battle Over the Future of Genome Writing

Adrian Woolfson has helped make genome writing a conversation that serious people are having. He has also helped make it one of the most charged arguments in modern biomedicine.

That is not a contradiction. It is the usual shape of a frontier, and it is arguably the most reliable signal that the frontier is real. When a new category of science starts to matter, the first phase is legitimation. Part I of this profile described that phase for Woolfson, his training at Oxford and Cambridge under a Nobel laureate, his two decades inside the world’s largest pharmaceutical companies, his years leading research and development at Sangamo Therapeutics during the clinical readout of the first in vivo human genome-editing trial, and his three books and 160 papers translating synthetic biology into language the rest of us can follow. The second phase is the one where the questions get harder. What are the limits of the science? What are the risks? Who decides when an intervention becomes too ambitious to be attempted? Those questions do not come from outside the field. They come from inside it. And they are the questions now being directed, with increasing frequency and increasing seriousness, at the project Woolfson has helped define.

Those questions deserve careful handling. Woolfson has not been evasive about them. He has engaged many of them in his book, in his public appearances, and in his scientific writing. The issue is not whether he has addressed the concerns. The issue is whether the concerns, in their most serious form, can be addressed at the pace his company and the broader field are moving. That is what this part of the story is about.

The optimism problem

The most pointed critique comes from inside the tent. In February 2026, the cardiologist and longtime medical commentator Eric Topol hosted Woolfson for a long-form conversation on his Ground Truths podcast. Topol had written a warm jacket blurb for On the Future of Species, calling it “a very thoughtful and thorough assessment of the profound implications of editing and rewriting our code.” But during the conversation, and in the written summary he posted afterward, Topol drew a clear line.

“I disagreed with Adrian about the bright prospects for curing diseases,” Topol wrote, “but there are many possibilities for positive impact, such as intervening vs the climate crisis and sustainability.” The disagreement was specific. Woolfson has argued in public interviews and in the book that genome writing, paired with Artificial Biological Intelligence, could meaningfully expand the category of treatable disease and, in the long run, produce what he calls a “disease-free organism.” Topol, who has spent his career inside cardiology and clinical medicine, is more skeptical. His view, implicit but consistent in his body of work, is that the complexity of human disease is not primarily a genome-writing problem, and that even generous interpretations of what AI-designed therapeutics will deliver should not license the rhetoric of disease elimination.

This is the kind of disagreement that does real work. Topol is not arguing that Woolfson is wrong about the science. He is arguing that Woolfson is wrong about the translation of the science into clinical outcomes. The distinction matters because it draws a line between what the laboratory can do and what a patient in front of a physician actually experiences. Most of the hardest problems in modern medicine live on the wrong side of that line.

What the models can and cannot do

A second, more technical critique comes from the synthetic biology community itself. In February 2026, the University of Minnesota biologist Kate Adamala published a review of Woolfson’s book in Nature. Adamala is not an outsider. She is a cofounder of the international Build-a-Cell Initiative and a cofounder of the synthetic cell company Synlife. Her research is adjacent to Woolfson’s company’s in ways that matter. Her review praised the book’s scope and clarity. It also drew a careful boundary around what the science can currently do.

Adamala’s central observation was that the ability to predict and manipulate biological patterns at the level of individual genes, proteins, and even small genomes has advanced substantially in recent years. The ability to predict what a complete, novel organism will actually do, what she has described in other settings as the leap from components to systems, remains open. This is a technical observation rather than a rhetorical one. Current genome foundation models can design functional proteins with increasing reliability. They can generate plausible regulatory sequences. They can predict some functional consequences of mutations. They have not yet demonstrated that a fully designed synthetic genome, inserted into a cell, will produce the organism the designer intended, outside of extremely constrained cases like Syn61, where the starting material was an existing bacterial genome recoded to use fewer codons.

Woolfson acknowledges this in his own writing. In the book, he describes the current state of the field as “the scribbling phase,” meaning that researchers can now write the genomes of viruses, bacteria, and yeast, but are still far from writing the genomes of extinct or never-before-realized species, let alone humans. That is a measured framing. It is also the framing that most of his cofounders and scientific collaborators use when they are speaking carefully rather than evangelically. The tension in the public reception of Woolfson’s work comes from the gap between these careful framings and the larger claims, about a disease-free organism, about authoring humans, about the molecular Gutenberg press, that are doing the heavier narrative work. Both framings are present in his writing. Readers tend to remember the bigger one.

The biosafety problem nobody wants to name

The hardest critique of genome writing is not about whether it will work. It is about what happens when it does.

That critique has a name. Its best-known public voice is Kevin Esvelt, an associate professor at the MIT Media Lab and director of the Sculpting Evolution group. Esvelt helped pioneer CRISPR gene drive technology and then, having seen what that technology could do, made biosecurity the primary focus of his career. His core argument is that the cost of synthesizing DNA has fallen faster than the infrastructure that screens what is being synthesized, and that the gap, if left unclosed, is the single most likely vector for a deliberate or accidental pandemic this century.

The evidence is documented. In 2016, synthetic biologists reconstructed horsepox virus, a close relative of smallpox, from commercially ordered DNA for about one hundred thousand dollars. In research presented in 2025, Esvelt’s laboratory found that 36 of 38 DNA synthesis companies in the United States would ship a pseudonymous buyer, without authorization, sequence fragments sufficient to recreate the 1918 influenza virus, a pathogen that killed an estimated fifty million people. The International Gene Synthesis Consortium, which established voluntary screening of orders in 2009, includes many of the largest DNA synthesis providers, but it is voluntary, it is not universal, and it can be bypassed by ordering fragments from multiple suppliers and assembling them offline.

Generative AI has tightened the timeline considerably. In 2025, as Kate Adamala noted in her Nature review, an AI program was used, for the first time, to create an entirely synthetic virus. The virus in question was not itself dangerous, and the capability it demonstrated is not in dispute. The policy implications are serious enough that they have become a recurring theme at the Hertz Foundation’s biosecurity forums and at federal advisory hearings in Washington.

Genyro is not in the pandemic pathogen business. Its declared mission is to apply synthetic genomics to human health, biomanufacturing, and allied commercial applications. That is an important distinction. But the tools the company is helping to mature, AI-guided design, high-throughput synthesis, programmable genome assembly, are the same tools a less scrupulous actor would need. The question the field now faces, and that Woolfson himself has written about, is whether the oversight architecture can be built fast enough to keep pace with the capability.

The Adamala moratorium and what it signals

One answer came in December 2024, when Kate Adamala joined 37 other scientists, including the Nobel laureates Greg Winter and Jack W. Szostak, in a perspective piece published in Science calling for a moratorium on the creation of fully synthetic mirror-image microorganisms. Mirror life refers to organisms built from the chemical mirror images of standard biological molecules, an area of research that has been under development for more than a decade.

The moratorium was notable for several reasons. The signatories included scientists who had themselves helped build the field. Adamala has publicly stated that she was originally excited about the line of research and has since changed her position. The rationale, laid out in the Science paper, was that the risks of mirror-image organisms, including potential immune invisibility and antibiotic resistance, could not yet be characterized well enough to proceed responsibly. The call was not to abandon the science. It was to pause until the risk-benefit calculation could be made with more confidence.

Mirror life is not what Genyro is building. But the logic of the moratorium, that some capabilities require a pause before they are pursued, because the downside risks cannot be adequately modeled, is directly relevant to a broader conversation about synthetic genomics. The Science paper’s argument rests on a principle that many working scientists have been reluctant to articulate in public. It is the principle that scientific capability and scientific responsibility are not the same thing, and that when the gap between them becomes large enough, the responsible move is sometimes to stop. Woolfson’s field is not there yet. The Adamala paper argues that part of it might be sooner than its practitioners realize.

What authoring life actually means

The philosophical critique is older. In 2010, J. Craig Venter and his collaborators at the J. Craig Venter Institute reported the creation of a bacterial cell whose genome had been entirely synthesized from chemical components and inserted into a cell whose original DNA had been removed. The organism, informally called Synthia, was the first entity in the history of biology to have a designed genome rather than an inherited one. The news was covered around the world. The ethical conversation that followed was substantial, occupied a White House bioethics commission for much of 2010, and ultimately concluded, with some qualifications, that the work should proceed under conditions of careful oversight.

Sixteen years later, the oversight conditions remain, by most serious assessments, underdeveloped. There is no dedicated regulatory pathway in the United States for the approval of a therapeutic synthetic organism. The FDA has frameworks for small-molecule drugs, biologics, gene therapies, and cell therapies. A synthetic organism that functions as a living therapeutic, for example an engineered microbe that produces a missing enzyme in a patient’s gut, sits at the edge of these categories and is evaluated case by case. International frameworks are thinner. The Biological Weapons Convention prohibits the development of biological weapons but lacks a verification regime and has no mechanism for enforcement. The Cartagena Protocol on Biosafety, which addresses the movement of living modified organisms across borders, was written before large-scale synthetic biology was feasible and has not been comprehensively updated.

The intellectual property question is equally unresolved. If a private company designs a novel organism using AI-augmented sequence design and proprietary synthesis infrastructure, who owns the organism? Under what terms can it be released, studied, or modified by third parties? The Supreme Court’s 2013 decision in Association for Molecular Pathology v. Myriad Genetics held that naturally occurring DNA sequences are not patentable, but synthetic, designer sequences generally are. The legal architecture for organisms whose genomes are, in Woolfson’s framing, authored rather than discovered is still being written. This is not a rhetorical question. It will determine who gets access to the first generation of therapeutic synthetic organisms and at what price.

The regulatory void

Esvelt’s proposed answer is a system called SecureDNA, which would apply cryptographic techniques to screen every commercial DNA synthesis order against a constantly updated database of sequences of concern. The system is technically feasible. It has been piloted. The obstacle, as Esvelt has said repeatedly in public appearances, is that implementation requires policy coordination that has not yet materialized. Synthesis companies would need to require it. Legislatures would need to mandate it. International agreements would need to align. None of those conditions currently holds.

In the absence of those conditions, the field is effectively operating on a gentlemen’s agreement. Most responsible synthesis providers screen their orders. Most responsible researchers handle sequences of concern with appropriate care. Most responsible companies, including Woolfson’s, operate within the existing voluntary frameworks. The problem is the word “most.” The gap between most and all, in a world where DNA synthesis costs continue to fall and AI-guided design continues to become more accessible, is the gap that keeps Esvelt awake at night. It is also the gap that Adamala’s moratorium was designed to close for one specific line of research, in the hope that the precedent would spread.

Genyro is not the problem here. The problem is structural. The technologies the company is helping to mature are the same technologies that, distributed more broadly, will enable many more actors with varying levels of scruple. The responsible move, as several commentators have argued, is for the companies at the frontier of the capability to invest actively in the infrastructure that constrains its misuse. Whether they will, and on what timeline, is one of the more important unanswered questions in the current synthetic biology conversation.

What Woolfson himself says about the hard questions

None of this is a surprise to Woolfson. His book devotes substantial space to the ethical and regulatory questions, and closes, according to the former Wellcome Trust CEO John-Arne Røttingen’s endorsement, with “a scaffold for a manifesto” for how humanity should navigate the emerging capability. In public conversations, including the Ground Truths interview with Topol, he has argued that the risks of synthetic genomics are serious and that the frameworks to govern its use need to be built in parallel with the technology rather than after the fact.

The distinction between advocating for frameworks and having frameworks is nontrivial. Many scientists who have pushed the boundaries of their fields have argued, with similar seriousness, for oversight structures that were never actually built. The pattern is old enough to have a name in the science policy literature. It is called the ethical imagination gap, and it describes the distance between the responsible positions that working scientists articulate in public and the concrete rules that governing bodies eventually put in place, which are almost always weaker, later, and less comprehensive than the articulated positions would suggest.

Woolfson is not responsible for closing that gap. No single scientist is. But his case is distinctive in that the capability he is helping to build is one whose misuse modes are particularly consequential, and whose beneficial uses are genuinely transformative, which means the gap matters more than it usually does. The argument his book makes, carefully and at length, is that the frameworks should be built. The argument the rest of the field is beginning to make, with increasing urgency, is that the frameworks should be built now, before the first generation of therapeutic synthetic organisms enters the clinic and the commercial incentives harden around a fait accompli.

The battle that matters

One of the clearest signs of a field’s maturity is the quality of its internal disagreements. Aging science, as Part I of the Sinclair profile noted, no longer struggles for legitimacy. It struggles over interpretation, pace, and translation. Synthetic genomics is reaching the same point. The argument is not whether genome writing will become a significant branch of medicine. The argument is how fast it should move, which applications should be prioritized, what the oversight architecture should look like, and where the line sits between ambition and recklessness.

These are the arguments Woolfson’s work is forcing the field to have. His books, his papers, his public interviews, and the company he is building are all, in different ways, insisting that the field decide what it wants to be. That insistence is valuable in itself, even when, and perhaps especially when, it draws serious criticism from serious people.

Eric Topol’s disagreement is a useful disagreement. Kate Adamala’s caution is a useful caution. Kevin Esvelt’s alarm is a useful alarm. The future of genomic medicine will be better if all three voices are heard clearly, and if Woolfson’s advocates are asked the kinds of questions that these three critics raise. The worst outcome is not that the critics are wrong. It is that the conversation does not happen at all, and the field finds itself, five or ten years from now, trying to build the oversight architecture for technologies that have already shipped.

That is why Part II of the Woolfson story matters. He has helped make genome writing a serious scientific conversation. The next thing the conversation requires is for the people most capable of writing genomes, including Woolfson himself, to help ensure that the rules arrive before the organisms do. It is a hard problem. It is a solvable one. It deserves the kind of attention Woolfson has spent his career asking people to give it.
