“What I love about [Tempus Loop] is that it flips the traditional, linear drug discovery process on its head. Instead of starting with target ID and moving towards patients 20 years later, this begins with large, aggregated real-world patient datasets derived from how they behaved in the real world when exposed to therapeutic intervention.”
– Rafael Rosengarten, PhD, CEO & Co-founder, Genialis
Advancing Research with Tempus Loop
Justin Guinney: One of the latest strategies helping transform oncology research is Tempus Loop, a platform that embodies the concept of a lab in the loop. This proprietary platform integrates real-world data (RWD), patient-derived organoids (PDOs), and AI to support the identification and validation of actionable targets. By leveraging Tempus’ RWD, Tempus Loop aims to address the challenges of traditional drug discovery by identifying patient subpopulations with shared clinical and molecular characteristics. Through systems biology and CRISPR screens, the platform provides continuity between RWD and PDOs, allowing for rapid hypothesis testing in models that closely mirror patient biology. This iterative process is intended to support target discovery and validation and may offer a strategy for advancing personalized cancer therapeutics.
Justin Guinney, PhD: Considering Tempus Loop’s approach to developing hypotheses and testing targets in patient-derived organoids, do you see this as a promising strategy? Are there any potential blind spots or limitations for a platform like this?
Sandip P. Patel, MD: I think these multimodal strategies are going to be very important. Often, models are trained on highly selected data that doesn’t reflect real-world use, which we sometimes see in electronic medical records (EMR). This approach, leveraging data while also investigating the biology through organoids or broader datasets, is really smart. We definitely need biological validation for our computational outputs, and to aggregate noisy biological datasets and understand their context with other clinical datasets. This integration can create a virtuous loop, a flywheel effect — combining these datasets helps us better understand human biology to treat disease.
Rafael Rosengarten, PhD: What I love about this concept is that it flips the traditional, linear drug discovery process on its head. Instead of starting with target ID and moving toward patients 20 years later, [Tempus Loop] begins with large, aggregated real-world patient datasets derived from how they behaved in the real world when exposed to therapeutic intervention. We can learn so much by mining this, especially from patients not responding to standard care, to understand their shared therapeutic vulnerabilities. The exciting part about patient-derived organoids is that they let us test hypotheses directly in models much closer to actual patient biology, giving strong signals on a hypothesis’s validity. I think it’s really promising, and we’re excited to collaborate to further validate the insights generated from this approach.
Justin Guinney, PhD: Sandip, given your work in immuno-oncology, how do you approach capturing the right biology in these model systems, like Tempus Loop, and projecting it back and forth with patient information?
Sandip P. Patel, MD: One of the issues we have with immunotherapy biomarkers is the subtle, critical differences in immunobiology between mouse models and human models, even humanized mice. That’s why real-world datasets are so meaningful. The flywheel concept can help in two ways: it supports validation of computational observations biologically, and it allows cross-system validation – assessing how well mouse versus human data map onto mouse versus human biology. This type of 2×2 analysis is key because a lot can be lost in translation. The depth of multimodal data within systems like Tempus Loop can be significant, encompassing transcriptomics, radiomics, digital pathology, genomics, and clinical phenotyping. This bench-to-bedside-to-cloud analysis truly helps translate discoveries more efficiently, especially since we’re often developing drugs before the biomarker is fully elucidated, so these approaches may help us better aim for those targets.
“It’s hard to overstate the potential benefits [of foundation models]… An important aspect is training a foundation model on massive, multimodal datasets, like Tempus’, to understand real-world cancer biology as it occurs in patients. Instead of training individual models for specific responses, we’re training a model to understand all molecular and clinical biology interactions. This allows us to probe the foundation model using simpler analytes and learn the best way to treat patients in ways previously impossible.”
– Rafael Rosengarten, PhD, CEO & Co-founder, Genialis
The Role of AI-Driven Algorithms in Oncology R&D
Justin Guinney: It’s clear we’re entering a new era of medicine, which we like to call ‘precision medicine 2.0.’ This is largely driven by the integration of multimodal datasets, moving beyond the traditional reliance on DNA alone. Among the developments in this area are algorithms like Tempus’ Immune Profile Score (IPS) and Genialis’ krasID. IPS is designed to provide a more nuanced prediction of patient responses to immune checkpoint inhibitors, going beyond conventional markers like PD-L1 and TMB. Meanwhile, krasID offers an approach to stratifying KRAS patients by clinical response, using RNA data to predict and monitor drug efficacy. These advancements highlight the potential of AI to refine treatment strategies and help improve patient outcomes, and allow us to explore how these tools, and AI in general, are being leveraged in clinical practice and drug discovery today.
Justin Guinney, PhD: Sandip, as a practicing lung cancer oncologist, how do you value the deployment and utility of algorithms, especially considering that physicians may not want to withhold therapy if no other good options exist?
Sandip P. Patel, MD: Until we have validated interventions, these datasets are really helpful for breaking ties. For example, in non-small cell lung cancer with PD-L1 greater than 1%, both PD-1 alone and chemo plus PD-1 are reasonable options, and we’re generally working with limited information when we make those decisions. An algorithm could potentially help; if a PD-L1 high patient has low IPS, I might lean more towards a chemotherapy combination, as I’d have less confidence in their response to PD-1 alone. For KRAS inhibitors, we’re seeing combination strategies emerge, and even compound biomarkers like krasID plus IPS may be important, especially for STK11/KEAP1 co-mutated patients where immunotherapy is less effective. There are many possible use cases. Even without prospective validation, these algorithms may be useful for guiding decisions among approved options, as some data is always better than none.
Justin Guinney, PhD: Rafael, with Tempus recently announcing its collaboration with AstraZeneca and Pathos for the development of a multimodal foundation model, and Genialis having its foundation model in RNA and other modalities, how would you characterize the benefits these kinds of foundational models may bring to discovery?
Rafael Rosengarten, PhD: It’s hard to overstate the potential benefits. Despite my caution against overhyping AI, I truly believe in the science. Genialis has been developing these foundational AI tools for 15 years, long before we even had a vocabulary for the technology. With that said, these multimodal models may have a substantive impact. An important aspect is training a foundation model on massive, multimodal datasets, like Tempus’, to understand real-world cancer biology as it occurs in patients. Instead of training individual models for specific responses, we’re training a model to understand all molecular and clinical biology interactions. This allows us to probe the foundation model using simpler analytes and potentially learn new ways to treat patients in ways previously impossible. It’s incredibly fulfilling to see these long-held dreams become a reality and race towards actual practice.
Justin Guinney, PhD: Sandip, what’s your reaction to AI moving to the clinic, particularly regarding the interpretability of these complex, “black-box” models? How do you see AI in the clinic today, and what are your worries or excitement about its future opportunities?
Sandip P. Patel, MD: AI is here and increasingly used clinically for things like radiomics, pathology, and augmenting notes. While use cases are growing, we need to be cautious. We sometimes see issues with model overfitting, lack of generalizability, and insufficient prospective biological validation. A model might perform well on similar data, but clinical settings may require performance across varied, non-linear contexts. I’m hopeful AI can aid in linear, discrete tasks.
Justin Guinney, PhD: There’s potential for RNA to capture more complex biological states like gene overexpression, molecular subtypes, and immune ecotypes. Yet, despite two decades of intense study, few RNA-based tools capturing these elements are in the clinic. What are the key challenges you see in advancing RNA from the lab into the clinic?
Sandip P. Patel, MD: There are a couple of challenges. First, logistical: ensuring sufficient, high-quality material for analysis, which varies by disease context. Second, biological: Does RNA-seq truly inform treatment decision-making or research plans? Its utility may depend on the use case; it may be more helpful for pathway inhibitors or immunotherapy than for proteomic targets.
Finally, and often underappreciated, RNA-seq data is complex. How do you make it interpretable for a busy clinician? Clinicians need clear “yes” or “no” answers, not continuous variables. DNA sequencing is straightforward in this way – you either have a variant or you don’t. So, how do you make these complex RNA-seq datasets interpretable and actionable in a molecular report? That “last mile” for RNA-seq, ensuring a clean decision for clinicians, is a significant, underappreciated aspect, despite promising algorithms like IPS and krasID.
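Patel’s “last mile” point can be made concrete with a toy sketch: a continuous algorithm score is collapsed into the binary call a molecular report would surface for a clinician. The function name, threshold, and labels below are hypothetical illustrations, not taken from IPS, krasID, or any real assay.

```python
# Toy illustration of the "last mile" problem: a continuous model score
# is collapsed into the binary call a molecular report surfaces.
# The threshold and labels are hypothetical, not from any real assay.

def report_call(score: float, threshold: float = 0.5) -> str:
    """Map a continuous model score to a yes/no report line."""
    return "POSITIVE" if score >= threshold else "NEGATIVE"

print(report_call(0.72))  # prints "POSITIVE"
print(report_call(0.31))  # prints "NEGATIVE"
```

The hard scientific work, of course, is not the thresholding itself but validating where that cut point should sit for a given clinical decision.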
Rafael Rosengarten, PhD: Adding to Sandip’s points, bulk RNA-seq is now commoditized and deployable globally, and it’s possible to obtain good-quality RNA from archival samples, which has helped clear many logistical hurdles, even though RNA is still a trickier molecule than DNA. Years ago, if you searched for gene expression biomarkers in PubMed, you’d find countless publications, but on the FDA’s website for approved CDxs, there were hardly any. This gap is closing thanks to improved data harmonization, including efforts to address heterogeneity, bias, and batch effect correction, which are crucial for gene signatures to work clinically.
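To give a sense of what batch effect correction means in practice, here is a minimal sketch of the simplest possible version: per-batch mean-centering to remove a constant offset between sequencing batches. Production harmonization pipelines use far richer methods (e.g., empirical-Bayes approaches such as ComBat); this only illustrates the idea.

```python
# Minimal sketch of one harmonization step: per-batch mean-centering to
# remove a constant batch offset from expression values. Real pipelines
# use richer methods (e.g., empirical-Bayes approaches such as ComBat).

def center_by_batch(values, batches):
    """Subtract each batch's mean expression from its own samples."""
    sums, counts = {}, {}
    for v, b in zip(values, batches):
        sums[b] = sums.get(b, 0.0) + v
        counts[b] = counts.get(b, 0) + 1
    means = {b: sums[b] / counts[b] for b in sums}
    return [v - means[b] for v, b in zip(values, batches)]

# Batch A mean is 6.0, batch B mean is 11.0:
corrected = center_by_batch([5.0, 7.0, 10.0, 12.0], ["A", "A", "B", "B"])
# corrected == [-1.0, 1.0, -1.0, 1.0]
```

After centering, the two batches share a common baseline, so downstream gene signatures compare like with like.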
Another notable change is in the use of machine learning and AI tools. Typical simplistic gene signature scores may not be predictive in real-world settings. However, by using numerous gene signature scores as input to a machine learning algorithm, it is possible to build more sophisticated models that learn pathway interactions and that may prove predictive across independent clinical datasets. The challenge then becomes interpretability, as clinicians often need clear, binary answers. We may be able to provide that green light/red light output, but also offer the ability to peel back layers to show the underlying biological signatures, supporting explainable AI. This level of interpretability is where algorithmic science has advanced, potentially solving problems or doing a much better job than before.
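The idea Rosengarten describes can be sketched in a few lines: several gene-signature scores feed one learned model, and the per-signature contributions can be “peeled back” to explain the binary call. The signature names, weights, bias, and threshold below are invented for illustration only, not taken from any Genialis or Tempus model.

```python
import math

# Hedged sketch: gene-signature scores feed a simple logistic model, and
# per-signature contributions explain the green/red-light call.
# All names, weights, and thresholds here are invented for illustration.

WEIGHTS = {"ifn_gamma": 1.8, "tgf_beta": -1.2, "proliferation": 0.6}
BIAS = -0.4

def predict(signatures):
    """Return a green/red-light call plus per-signature contributions."""
    contributions = {name: WEIGHTS[name] * signatures[name] for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    call = "GREEN LIGHT" if probability >= 0.5 else "RED LIGHT"
    return call, contributions

call, why = predict({"ifn_gamma": 0.9, "tgf_beta": 0.2, "proliferation": 0.5})
# `call` is the clean clinician-facing answer; `why` exposes which
# signatures drove it, supporting the explainable-AI layer.
```

The two-part return value mirrors the design choice in the text: a binary answer up front, with the underlying biological signatures available for anyone who wants to peel back a layer.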