
The Resolution Gap

Most healthcare AI claims rest on a data layer that is, by the standards of the biology it is meant to describe, decades out of date. The fix is the central project of the next decade in medicine.


In early 2009, a Stanford geneticist named Michael Snyder began an experiment on himself that no one had attempted before: tracking his own biology in extraordinary detail. His full genome was sequenced. His blood was drawn at regular intervals to measure his transcriptome, the catalog of which genes were being expressed at any given moment, and his proteome, the catalog of which proteins those genes were producing. His metabolome, the small molecules circulating through his bloodstream as he ate, exercised, and slept, was characterized through repeated mass spectrometry. His microbiome, the bacterial communities living in his gut, was profiled at intervals. Over more than two years, Snyder and his collaborators assembled what eventually amounted to several billion individual measurements of a single human being’s biology, sampled densely enough across time that the data captured not just what his body looked like at any moment, but how it was changing.

The result, published in Cell in March of 2012, was unprecedented in scientific history. Buried in the data was the precise molecular onset of a disease. Snyder, who had no family history of diabetes and was at a healthy weight, developed Type 2 diabetes during the study. The data caught the disease’s beginning. After a respiratory infection on day 289 of the study, his blood glucose, which had been stable for the prior nine months, began to climb. His insulin signaling pathways shifted. His inflammatory markers spiked. The condition was detectable in his molecular profile months before any standard clinical test would have caught it, and well before he experienced any symptoms a doctor would have asked him about. In conventional medicine, Snyder’s diabetes would have been diagnosed during a routine annual exam, perhaps a year or more after it began. In his own data, the onset was visible in real time.

Snyder’s iPOP study, short for Integrated Personal Omics Profile, has since been expanded into a longitudinal cohort of more than a hundred individuals followed across multiple years. The expanded study has detected cancers and pre-cancers, diabetes, and dozens of other conditions before clinical diagnosis would have occurred. It has also produced what is now one of the most influential observations in modern precision medicine: a person’s molecular profile, measured at sufficient resolution and over time, remains more similar to that same person’s profile during illness than to the profile of a different, healthy person. The implication is that the standard practice of comparing a single laboratory measurement to a population reference range is, in many cases, the wrong comparison. The right comparison is to the patient’s own dynamic baseline. Most of clinical medicine cannot see that it is making the wrong comparison, because it has never operated at the resolution required to construct the right one.
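
To make the two comparisons concrete, here is a minimal sketch in Python with invented numbers: the same fasting glucose reading scored first against the population reference range and then against a personal baseline built from that individual's own prior values. The data and thresholds are illustrative only, not clinical guidance.

```python
from statistics import mean, stdev

# Hypothetical new fasting glucose reading (mg/dL) for one person.
new_reading = 96

# Comparison 1: the population reference range used in routine care,
# where "normal" fasting glucose is conventionally below 100 mg/dL.
population_upper_limit = 100
flagged_by_range = new_reading >= population_upper_limit

# Comparison 2: the same reading against this person's own longitudinal
# baseline, e.g. fasting values drawn repeatedly over prior years (invented).
personal_history = [84, 86, 83, 85, 87, 84, 82, 86, 85, 83]
baseline_mean = mean(personal_history)
baseline_sd = stdev(personal_history)
z_score = (new_reading - baseline_mean) / baseline_sd

print(f"Against the population range: {'flagged' if flagged_by_range else 'normal'}")
print(f"Against the personal baseline: {z_score:.1f} standard deviations above usual")
```

The reading is unremarkable against the population range and a large departure from the individual's own baseline, which is exactly the asymmetry the iPOP observation predicts.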

This piece is about that absence. The conventional infrastructure of clinical measurement, the episodic doctor visit and the occasional laboratory panel, was designed in a different era, for different purposes, and produces a picture of human physiology that is, by the standards of what is now technologically possible, low resolution. Most of the healthcare AI being built in 2026, however sophisticated its algorithms, runs on a data substrate that this old infrastructure produces. The algorithms are getting better. The substrate, in most settings, is not yet keeping up. This is the resolution gap, and it is the rate limiting step for the next decade of healthcare AI. The reader who has absorbed the prior pillar on the bitter lesson knows that general models will continue to displace specialized ones at the algorithmic layer. The reader who now also absorbs the resolution gap will see that the durable bets in healthcare AI lie not at the algorithmic layer but at the measurement layer below it, and that the companies which solve the measurement problem are the ones that will define the next decade of patient care.

The astronomer’s problem

To understand the resolution gap, it helps to consider an analogy that has nothing to do with medicine. An astronomer who could only observe the night sky three times a year, for one hour at a time, could record what was in the sky at those moments. They could not, on the basis of that data, derive the orbital mechanics of planets, predict eclipses, or map the structure of the galaxy. The fundamental movements that constitute astronomy occur on timescales the sampling does not capture. The same astronomer, given continuous telescopic observation, can derive the entire structure of celestial motion from data they could not access before.

Clinical medicine, for most of its history, has operated under the astronomer’s first condition. A patient sees a physician occasionally, usually when something is already wrong. A blood test is drawn at that moment. A blood pressure is taken. A weight is recorded. The result is a sparse set of single time point measurements, scattered across years, captured against backgrounds of poorly characterized variation in stress, sleep, hydration, diet, time of day, and dozens of other factors that the system has no way to control for. The patient’s biology, between visits, is a black box. What the data captures is closer to a series of postcards than to a continuous record of physiological function.
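
A small simulation makes the sampling problem concrete. The sketch below, with entirely synthetic numbers, generates two years of a noisy daily physiological signal that begins drifting upward in the second year, then contrasts what three office visits per year capture with what a simple rolling average over daily measurements captures.

```python
import random

random.seed(0)

# Synthetic daily signal over two years: stable near 90 for year one, then
# drifting upward by 0.05 units per day in year two, plus day-to-day noise.
days = range(730)
signal = [
    90 + (0.05 * (d - 365) if d > 365 else 0) + random.gauss(0, 4)
    for d in days
]

# Episodic sampling: three office visits per year, on arbitrary days.
visit_days = sorted(random.sample(range(365), 3)) + sorted(random.sample(range(365, 730), 3))
episodic = [(d, round(signal[d], 1)) for d in visit_days]

# Continuous sampling: every day, summarized with a 30-day rolling mean.
rolling = [sum(signal[max(0, d - 29): d + 1]) / min(d + 1, 30) for d in days]

print("Six episodic snapshots (day, value):", episodic)
print("30-day rolling mean, day 300 vs day 700:",
      round(rolling[300], 1), "vs", round(rolling[700], 1))
```

With only six points scattered against day-to-day noise, the drift is easy to miss or to dismiss; the rolling mean over daily data shows it plainly. The same geometry applies to any slow physiological change sampled at clinic-visit frequency.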

This worked, for a long time, because the diseases medicine could treat were largely diseases of acute, dramatic departure from normal. A patient with appendicitis arrived at the emergency room with abdominal pain. A patient with pneumonia arrived with fever and cough. A patient with diabetes arrived with thirst and weight loss after the disease had progressed far enough to produce symptoms. The infrequent snapshot was sufficient because the conditions being detected were, by the time they were clinically relevant, large enough to be visible in any single snapshot. The medicine of acute, late stage disease can operate on low resolution data because the signal of interest is, by the time it matters clinically, much larger than the noise the low resolution data contains.

The medicine of the present moment, and of the next decade, is increasingly the medicine of early stage disease, of chronic conditions in their pre symptomatic phases, of subtle physiological changes that predict clinical events months or years before those events occur, and of interventions designed to prevent diseases from ever progressing to the point where the snapshot would catch them. This medicine requires the astronomer’s second condition. It requires continuous observation. Without continuous observation, the signals are too small relative to the noise the sparse sampling captures. With continuous observation, the same signals become visible, and the diseases become tractable to interventions that the previous era could not support.

This is the resolution gap. The biology has moved. The measurement infrastructure has not yet caught up. Most healthcare AI is being asked to deliver insights that the underlying data simply does not contain. The algorithms are not the problem. The substrate is.

Five layers where the resolution gap is closing

The encouraging news is that the substrate is improving, in several layers simultaneously, faster than most observers of the field appreciate. The reader who understands which layers are advancing, and at what pace, will be better positioned to evaluate which healthcare AI claims are operating on real ground and which are operating on data that cannot yet support them.

The metabolic layer

The clearest demonstration of what the resolution gap looks like, and what closing it produces, is the story of continuous glucose monitoring in people without diabetes. For decades, the standard clinical test for glucose regulation was the fasting plasma glucose, drawn once at a clinic visit, compared to a reference range that defines normal as below 100 milligrams per deciliter. The hemoglobin A1c, which estimates average glucose over the prior three months, supplemented this single time point measurement. Both were episodic. Both averaged across periods that hid as much information as they revealed.

The introduction of continuous glucose monitors, originally for patients with diabetes and increasingly for the general population, revealed what the episodic measurements had been missing. People with apparently normal fasting glucose and apparently normal A1c values had, in many cases, substantial postprandial glucose excursions, periods of overnight hyperglycemia or hypoglycemia, and patterns of variability that correlated with cardiovascular risk in ways the episodic numbers could not capture. The same person could appear metabolically normal on a standard panel and exhibit, in continuous data, patterns of dysglycemia that would have suggested early intervention. Studies of adults considered normoglycemic by traditional measures have repeatedly found that their glycemic variability is heterogeneous in ways those measures did not anticipate.
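
As an illustration of what continuous data supports that a single draw cannot, the sketch below computes three common CGM-derived summaries, time in range, coefficient of variation, and peak excursion, from a synthetic one-day glucose trace, and contrasts them with the lone fasting value a conventional panel would have reported. The trace, the meal shapes, and the thresholds are invented for illustration, not clinical targets.

```python
from statistics import mean, stdev

# Synthetic CGM trace: glucose (mg/dL) every 5 minutes for one day (288 readings),
# faked as a flat baseline of 90 plus three triangular postprandial excursions.
readings = []
for i in range(288):
    value = 90.0
    for meal_start in (96, 156, 228):          # 8am, 1pm, 7pm in 5-minute slots
        if meal_start <= i < meal_start + 24:  # two-hour excursion after each meal
            value += 70 * (1 - abs(i - meal_start - 12) / 12)
    readings.append(value)

# Summaries that only exist once the data is continuous.
time_in_range = sum(70 <= g <= 140 for g in readings) / len(readings)
variability = stdev(readings) / mean(readings)   # coefficient of variation
peak = max(readings)

print(f"Time in 70-140 mg/dL range: {time_in_range:.0%}")
print(f"Glycemic variability (CV):  {variability:.0%}")
print(f"Peak excursion:             {peak:.0f} mg/dL")

# What a single fasting draw at 7am would have reported: ~90 mg/dL, "normal".
print(f"Fasting snapshot at 7am:    {readings[84]:.0f} mg/dL")
```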

The implication is that the diagnostic categories of normal, prediabetic, and diabetic, which have been the operating framework of clinical glucose medicine for sixty years, were constructed against the resolution of episodic measurement. They are not wrong, but they are coarse. They miss patterns of glucose dysregulation that continuous monitoring can detect, and they miss them in ways that probably matter for the long term cardiovascular and metabolic health of the people being categorized. The next decade of metabolic medicine will be built on continuous data, with diagnostic and therapeutic categories that reflect what continuous data shows. The current categories will be revised, not because the science of diabetes has changed, but because the resolution at which the science can be observed has improved.

The cardiovascular layer

The same dynamic is now well underway in cardiology. The standard clinical measurement of cardiac function for most of the last century was the electrocardiogram performed during an office visit, supplemented by occasional Holter monitors and event recorders for patients with suspected arrhythmias. The data was episodic. The episodes captured what was happening during the brief window of measurement and missed what was happening the rest of the time.

The arrival of wearable cardiac monitoring at consumer scale has changed this picture substantially. The Apple Watch, the Oura ring, the Whoop strap, and similar devices have made continuous heart rate and rhythm data available at population scale for the first time in medical history. The atrial fibrillation detection that this publication discussed in its earlier pillar on clinical validation, despite its real limitations in positive predictive value, is identifying episodes of arrhythmia that would have been entirely invisible to the previous infrastructure of cardiac care. The continuous data also captures heart rate variability patterns that predict autonomic dysfunction, blood pressure surrogates derived from photoplethysmography, and recovery markers that reflect cardiovascular fitness in ways that episodic measurement could not approach.
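
As one concrete example of a metric the continuous data makes possible, the sketch below computes two textbook heart rate variability summaries, SDNN and RMSSD, from a synthetic series of beat-to-beat intervals of the kind a wrist device derives from photoplethysmography. The intervals are invented, and the arithmetic is the standard definition rather than any particular vendor's pipeline.

```python
import math
import random

random.seed(1)

# Synthetic beat-to-beat (RR) intervals in milliseconds: roughly 60 beats per
# minute with modest variability, as a wearable might record during rest.
rr = [1000 + random.gauss(0, 40) for _ in range(300)]

# SDNN: standard deviation of all RR intervals, reflecting overall variability.
mean_rr = sum(rr) / len(rr)
sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))

# RMSSD: root mean square of successive differences, the beat-to-beat component.
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

print(f"SDNN:  {sdnn:.0f} ms")
print(f"RMSSD: {rmssd:.0f} ms")
```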

The cardiology subspecialty has been ambivalent about this shift, in part because the consumer data is noisier than clinical data and in part because the implications for the structure of cardiac care are large. The shift, in any case, is now well underway. The cardiology of the next decade will be built on continuous data, and the diagnostic and therapeutic categories of cardiac medicine will be revised in the same way the metabolic categories are being revised. The resolution is catching up to the biology.

The neurological layer

The third layer where the resolution gap is closing, and the one most poorly understood outside specialist circles, is neurology. The brain’s function has historically been assessed through clinical examination, occasional cognitive testing, and structural imaging that captures anatomy but not function. The dynamic data that would describe neurological function over time has not, until recently, existed.

The voice is now emerging as one of the most accessible windows into neurological function that medicine has ever had. Almost ninety percent of patients with Parkinson’s disease have measurable speech alterations, including changes in pitch variability, frequency modulation, amplitude, and vocal stability. These changes are produced by the same neurological circuits that produce the disease’s motor symptoms, and in many cases they appear before the motor symptoms become clinically obvious. Parkinson’s disease, by the time it is conventionally diagnosed, has typically already destroyed sixty to eighty percent of the dopamine producing neurons in the relevant brain regions. Voice biomarkers can, in some studies, detect the disease years before that threshold is crossed, when the population of remaining neurons is much larger and when intervention might be more meaningful.

The same approach is now being applied to Alzheimer’s disease, where subtle changes in vocabulary, speech rate, and linguistic complexity appear years before clinical diagnosis. To depression, where voice prosody, pitch patterns, and conversational dynamics shift in measurable ways. To early stage cognitive impairment more broadly. The voice, captured passively by a phone or a smart speaker, contains far more clinically meaningful information than the conventional clinical examination has ever extracted from it. The resolution gap, in this layer, is closing in real time, and the implications for early detection of neurodegenerative disease are substantial.
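
For readers curious what a voice biomarker is at the computational level, the sketch below computes simplified versions of three of the features named above: pitch variability, a jitter-like cycle-to-cycle pitch measure, and a shimmer-like amplitude measure, from a hand-written pitch and amplitude track. A real pipeline would extract those tracks from recorded audio with a dedicated pitch tracker; the values here are invented for illustration.

```python
from statistics import mean, stdev

# Synthetic per-frame pitch (Hz) and amplitude for a sustained vowel.
# Real pipelines derive these from audio; these numbers are invented.
pitch_hz = [118, 121, 119, 122, 117, 120, 123, 118, 121, 119, 124, 117]
amplitude = [0.80, 0.78, 0.82, 0.79, 0.81, 0.77, 0.83, 0.80, 0.78, 0.82, 0.79, 0.81]

# Pitch variability: coefficient of variation of the fundamental frequency.
pitch_cv = stdev(pitch_hz) / mean(pitch_hz)

# Jitter-like measure: mean absolute frame-to-frame pitch change, relative to mean pitch.
jitter = mean(abs(b - a) for a, b in zip(pitch_hz, pitch_hz[1:])) / mean(pitch_hz)

# Shimmer-like measure: mean absolute frame-to-frame amplitude change, relative to mean.
shimmer = mean(abs(b - a) for a, b in zip(amplitude, amplitude[1:])) / mean(amplitude)

print(f"Pitch variability (CV): {pitch_cv:.3f}")
print(f"Jitter-like measure:    {jitter:.3f}")
print(f"Shimmer-like measure:   {shimmer:.3f}")
```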

The multi-omic layer

The fourth layer, which Snyder’s iPOP work has pioneered, is the multi-omic profile. The standard laboratory panel that most patients see today, with perhaps two dozen analytes covering basic metabolic and hematologic function, is to the patient’s actual biochemical state what a single radio station is to the electromagnetic spectrum. The full proteome of a person’s blood contains thousands of proteins, the metabolome contains thousands of small molecules, the transcriptome contains tens of thousands of expressed genes, and the microbiome contains hundreds to thousands of bacterial species. Each of these layers carries information about the person’s underlying health that the standard panel cannot access.

The cost of measuring these layers has been falling for two decades and continues to fall. Whole genome sequencing, which cost roughly three billion dollars for the first human genome completed in 2003, now costs in the low hundreds of dollars per sample. Proteomic profiling at scale, while still expensive, is moving in the same direction. The price of multi-omic profiling at the depth that would allow personalized baselines, of the kind Snyder pioneered, is now within the range that consumer health companies have begun to offer. Function Health and similar services are providing extensive blood panels at consumer prices. Decentralized biobanks are accumulating longitudinal samples. The data that, in 2010, was available only to a single Stanford geneticist tracking himself is, in 2026, beginning to become available at population scale.

The implications for healthcare AI are direct. A general model trained on multi-omic time series across a large population, with personal baselines available for each individual, is a fundamentally different tool from a model trained on the sparse, episodic, population averaged data that has been clinical medicine’s standard substrate. The kinds of inferences such a model can support, and the kinds of interventions it can guide, are in many cases categorically different from what the previous data layer could deliver.

The microbiome and immune layer

The fifth layer is the most recent to come into measurement range and probably the one with the most undiscovered implications. The human microbiome, the collection of bacterial, viral, and fungal communities living in and on the body, was understood as an important biological system long before it could be measured at any useful resolution. The combination of next generation sequencing and longitudinal sampling has begun to characterize how these communities change over time, how they respond to diet, illness, antibiotics, and other interventions, and how their changes correlate with the state of the host. Snyder’s group has published extensively on longitudinal microbiome dynamics in prediabetes, finding that the microbial communities of people developing diabetes shift in ways that can be detected before the disease itself becomes clinically apparent.
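
One way longitudinal studies quantify this kind of change is with a community-level dissimilarity measure such as Bray-Curtis, computed between a person's current sample and their own earlier baseline rather than against a population average. The sketch below applies that comparison to invented relative abundances; the taxa and numbers are illustrative only.

```python
def bray_curtis(sample_a, sample_b):
    """Bray-Curtis dissimilarity between two relative-abundance profiles (0 = identical, 1 = disjoint)."""
    taxa = set(sample_a) | set(sample_b)
    numerator = sum(abs(sample_a.get(t, 0.0) - sample_b.get(t, 0.0)) for t in taxa)
    denominator = sum(sample_a.get(t, 0.0) + sample_b.get(t, 0.0) for t in taxa)
    return numerator / denominator

# Invented gut community profiles for one person, as relative abundances.
baseline = {"Bacteroides": 0.35, "Faecalibacterium": 0.20, "Prevotella": 0.15,
            "Akkermansia": 0.05, "other": 0.25}
month_6 = {"Bacteroides": 0.33, "Faecalibacterium": 0.21, "Prevotella": 0.14,
           "Akkermansia": 0.06, "other": 0.26}
after_antibiotics = {"Bacteroides": 0.55, "Faecalibacterium": 0.05, "Prevotella": 0.02,
                     "Akkermansia": 0.01, "other": 0.37}

print(f"Drift from baseline at month 6:        {bray_curtis(baseline, month_6):.2f}")
print(f"Drift from baseline after antibiotics: {bray_curtis(baseline, after_antibiotics):.2f}")
```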

The immune system is undergoing a similar transition. Single cell sequencing of immune cells, longitudinal characterization of antibody repertoires, and dynamic measurement of cytokine signaling are producing pictures of immune function that the old infrastructure of clinical immunology could not approach. The clinical implications of this layer are still emerging. The fact that the layer exists at measurable resolution at all is new in the last decade.

Why the resolution gap matters for healthcare AI

The reader who has absorbed the prior pillars in this series is now in a position to see what the resolution gap means for the evaluation of healthcare AI claims.

The first implication is that the algorithmic layer, on its own, is no longer the binding constraint for most healthcare AI applications. The frontier general models discussed in the prior pillar on the bitter lesson can pass medical licensing exams, reason through graduate level science problems on standardized benchmarks, and process clinical text at a level comparable to expert humans. The model is not, in most cases, the rate limiting step. The data the model has access to is.

The second implication is that the durable competitive advantage in healthcare AI, over a five to ten year horizon, will sit at the measurement layer rather than at the algorithm layer. The companies that solve the continuous measurement problem in metabolism, cardiology, neurology, multi-omics, or the microbiome, and that establish the data infrastructure required to build personal baselines at scale, are building assets that the next generation of foundation models will need and that cannot be readily replicated by compute spend. A foundation model fine tuned on episodic clinical data is bounded, in what it can deliver, by what the episodic data contains. A foundation model fine tuned on continuous physiological data linked to longitudinal outcomes is operating on a categorically different evidence base. The latter is what will produce the clinically meaningful tools of the next decade. The former, however sophisticated, will continue to deliver insights bounded by the resolution of the data feeding it.

The third implication, the one most consequential for the careful reader of healthcare AI claims, is that the evaluation question now needs an additional layer. The earlier pillars in this series have taught the reader to ask whether a claim is supported by retrospective or prospective validation, whether it has been independently reproduced, whether the surrogate endpoints used are validated against clinical outcomes, and whether the relative framings hide small absolute effects. The resolution question is now added. What is the underlying data this claim depends on? Is that data layer resolved enough to support the inference being made? Or is the algorithm, however good, working with a low resolution image that fundamentally cannot deliver the precision the marketing implies?

A healthcare AI claim about preventing cardiovascular disease, built on data drawn from episodic blood pressure readings and occasional cholesterol panels, is operating on a low resolution image. The same claim, built on continuous heart rhythm data, continuous blood pressure monitoring, ambulatory ECG, lipid profiling at multiple time points, and dietary intake logged through wearable sensors, is operating on a fundamentally different evidence base. The two should not be evaluated identically. The first is, in most cases, asking algorithms to deliver insights the data cannot support. The second is asking algorithms to deliver insights the data has actually captured.

Why the gap persists

The resolution gap is closing, but the closing is slower than it could be, and the reasons matter for the careful reader.

The first is that the existing infrastructure of healthcare delivery, regulation, and reimbursement was designed for episodic care. The fifteen minute office visit, the lab panel ordered at that visit, the imaging study scheduled separately, and the specialty consultation that follows are an architecture that produces sparse, irregular data by design. Continuous monitoring requires a different architecture, with different reimbursement structures, different regulatory categories, different liability frameworks, and different patient education. The transition is not just technical. It is structural, and structural transitions in healthcare are slow.

The second is that the data, when it is collected at continuous resolution, raises privacy, liability, and consent questions that the episodic system did not face. A patient whose data is collected only at office visits and lab draws produces a small, well defined dataset that the existing legal frameworks know how to handle. A patient whose data is collected continuously by wearables, ambient sensors, and longitudinal multi-omic profiling produces a dataset that the existing legal frameworks do not yet know how to handle. The privacy questions, the consent questions, the questions about who owns the data and who is liable for what it shows, are real and not trivially resolved.

The third is that the cost of acquiring high resolution data at scale, while falling rapidly, is still meaningful. Continuous glucose monitors are not free. Multi-omic profiling, while cheaper than it was, is not in the price range of standard laboratory panels. Wearables, while inexpensive at the device level, require infrastructure to collect, store, and analyze the data they produce. The companies and health systems that have moved aggressively into continuous monitoring have done so on the basis of conviction about future value, not on the basis of established reimbursement economics.

The fourth is that the regulatory and clinical workflow infrastructure to incorporate continuous data into care has not yet been built at scale. A clinician who receives a continuous glucose monitor report from a patient does not, in most current practice, have the time, training, or workflow tools to integrate that data into the patient’s care in real time. The data exists. The infrastructure to act on it does not yet, in most settings.

The reader’s stance

For the reader who wants to evaluate healthcare AI claims with the resolution gap in mind, a short method applies.

When you encounter a healthcare AI claim, ask first what data layer it depends on. Episodic clinical data, continuous physiological data, longitudinal multi-omic data, or something else. The answer determines the inferential power available to the claim.

When the answer is episodic clinical data, treat the claim as bounded by the resolution of that data. An algorithm working on sparse, irregular, population averaged data can produce insights that are useful within the limits of what the data shows. It cannot, regardless of algorithm quality, produce insights that the data does not contain.

When the answer is continuous physiological data, ask what kind, at what frequency, and over what duration. Continuous data from a single thirty day window is different from continuous data over multiple years. Single channel data is different from multi channel data. The specifics matter for what the algorithm can deliver.

When the answer is multi-omic data, ask whether the data is cross sectional or longitudinal, whether it includes a personal baseline, and whether it captures the dynamic changes the patient’s biology actually exhibits. The Snyder framing applies. A single point in time multi-omic snapshot is more informative than a standard panel, but it is much less informative than a longitudinal profile that captures how the same person’s biology changes over time.

When the answer is something the marketing does not specify, treat the claim as preliminary. A healthcare AI product that does not specify the data layer it operates on is, in most cases, operating on data the developers cannot fully describe. The reader’s correct response is to ask, and to weight the claim according to the answer.

The decade ahead

The pieces of the resolution revolution are now visible. Continuous glucose monitoring is moving from a diabetes management tool to a general metabolic health tool, with growing direct to consumer adoption. Wearable cardiac monitoring, voice biomarker analysis, ambient sensing in the home, and multi-omic profiling at consumer prices are all on trajectories that look exponential. The data substrate that healthcare AI has been operating on is, slowly and unevenly, being replaced by a substrate that can actually support the precision the field’s marketing has been promising for years.

The companies that build this substrate are not always the companies running the most visible algorithms. Some of them are sensor manufacturers, some are data infrastructure providers, some are health systems building longitudinal cohorts, some are consumer health companies that have learned to make continuous monitoring useful to the people wearing the devices. The valuable assets in healthcare AI over the next decade will, in many cases, sit in these less visible places. The reader who has learned to look there will be better positioned to evaluate which products are operating on the new substrate and which are still operating on the old.

The closing of the resolution gap is, in this publication’s view, the central project of the next decade in medicine, and the technical foundation on which most of the durable advances of that decade will be built. Healthcare AI without the resolution upgrade is healthcare AI bounded by the limits of the previous era’s data. Healthcare AI with the resolution upgrade is something different. The reader who has internalized this distinction will see the field, over the next ten years, in a different light than the reader who has not.

The next pillar in this series will examine a darker pattern, one this publication has been preparing the reader for since the anchor essay. It is the pattern of financial incentives that shape which healthcare AI claims travel and which do not. The resolution gap is closing because the technology is improving. The next question is whose interests determine what gets measured, what gets published, and what reaches the patient. That question has its own answers, and they are not always congruent with what the patient would want.

Sources and further reading

Chen R, Mias GI, Li-Pook-Than J, et al. Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell. 2012;148(6):1293-1307.

Zhou W, Sailani MR, Contrepois K, et al. Longitudinal multi-omics of host-microbe dynamics in prediabetes. Nature. 2019;569(7758):663-671.

Schüssler-Fiorenza Rose SM, Contrepois K, Moneghetti KJ, et al. A longitudinal big data approach for precision health. Nature Medicine. 2019;25(5):792-804.

Mishra T, Wang M, Metwally AA, et al. Pre-symptomatic detection of COVID-19 from smartwatch data. Nature Biomedical Engineering. 2020;4(12):1208-1220.

Li X, Dunn J, Salins D, et al. Digital health: tracking physiomes and activity using wearable biosensors reveals useful health-related information. PLOS Biology. 2017;15(1):e2001402.

Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books, 2019.

Topol EJ. The Patient Will See You Now: The Future of Medicine Is in Your Hands. Basic Books, 2015.

Hall H, Perelman D, Breschi A, et al. Glucotypes reveal new patterns of glucose dysregulation. PLOS Biology. 2018;16(7):e2005143.

For voice biomarkers and digital phenotyping in neurodegenerative disease, see the systematic review literature in Brain Sciences, Sensors, and Journal of Medical Internet Research, 2023 through 2026, including Wright H, Aharonson V on Parkinson’s progression monitoring and the recent scoping reviews on AI-driven audio biomarkers in geriatric health.

Snyder MP and colleagues, ongoing iPOP cohort publications, Stanford Snyder Lab, accessed 2026.

For the broader treatment of measurement layer transitions in healthcare, see also the work of Eric Topol, Daniel Kraft, and the digital health teams at Verily, Apple Health, and the Stanford Healthcare Innovation Lab.


Verification Intelligence for Healthcare AI

This article is part of the Verification Intelligence for Healthcare AI series. For the practical capstone, which moves from the data layer back to the full verification checklist, continue to How to Read a Healthcare AI Press Release.

  1. The Literature Is a Debate, Not a Record
  2. What “Clinically Validated” Actually Means
  3. FDA Cleared, FDA Approved, FDA Authorized
  4. Prospective vs Retrospective
  5. The Reproducibility Crisis Healthcare AI Refuses to Talk About
  6. The Relative Risk Trick
  7. Surrogate Endpoints and the Long Wait for Truth
  8. After the Bitter Lesson
  9. The Resolution Gap
  10. How to Read a Healthcare AI Press Release
