The Hidden Variability in Genotoxicity Data and the Future of NAMs
- Elena Fioravanzo


Ready to Master Genotoxicity Prediction?
Understanding the uncertainties of reference data is the first step toward building more robust and scientifically sound NAM strategies.
ToxNavigation is at the forefront of this shift. We are excited to announce the 2026 release of our new in-depth training course:
"NAMs - QSAR and read-across to predict genotoxicity"
This course is designed to equip expert toxicologists with the practical skills to apply in silico methods for genotoxicity assessment under regulatory guidelines (like ICH M7), leveraging the knowledge of data variability to build stronger, evidence-based predictions.
➡️ Learn more and register your interest for 2026:
Why Uncertainty in Traditional Genotoxicity Tests Matters for Computational Toxicology
Genotoxicity assessment is a cornerstone of chemical safety regulation, yet a new critical review sheds light on a rarely discussed but vital topic: the significant variability and uncertainty inherent in data generated even from gold-standard, Test Guideline (TG)-compliant genotoxicity assays.
The paper, "Variability and uncertainty of data from genotoxicity Test Guidelines: What we know and why it matters," provides a comprehensive analysis that is essential reading for every toxicologist preparing for the New Approach Methodology (NAM) era.
Read the preprint here:
Key Findings: High Variability in Genotoxicity Calls
The authors reviewed major genotoxicity databases (ECVAM, ISSMIC, OASIS) and, most tellingly, the highly curated EFSA genotoxicity database. Their findings are eye-opening:
Low Replicate Similarity: Depending on the assay, an estimated 22–77% of compounds exhibit a similarity of replicate results below 85%. This means that for a large proportion of chemicals, running the same test again, even under similar TG conditions, has a substantial chance of yielding a different result, a classic reproducibility challenge (illustrated in the sketch after this list).
Ames Test is Best, But Still Variable: While the Ames test showed the lowest variability (around 20% of compounds had low result similarity upon replication), other key tests, including the in vivo micronucleus test and in vitro chromosomal aberration test, showed low similarity for over 50% of tested compounds.
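To make the notion of replicate similarity concrete, here is an illustrative Python sketch of one plausible way to compute it: the fraction of pairwise-concordant calls among repeat tests for a compound. The review's exact metric may differ, and the replicate calls below are invented for illustration.

```python
# Illustrative only: one plausible way to express "similarity of replicate
# results" per compound, as the fraction of pairwise-concordant calls among
# repeat tests. The review's exact metric may differ; the data are invented.
from itertools import combinations

# Hypothetical replicate calls (True = positive) for three compounds
replicates = {
    "compound_A": [True, True, True, True],    # fully concordant replicates
    "compound_B": [True, False, True, True],   # one discordant run
    "compound_C": [True, False, False, True],  # mixed results
}

def replicate_similarity(calls):
    """Fraction of replicate pairs that give the same call."""
    pairs = list(combinations(calls, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for name, calls in replicates.items():
    sim = replicate_similarity(calls)
    flag = "LOW (<85%)" if sim < 0.85 else "ok"
    print(f"{name}: replicate similarity = {sim:.0%} ({flag})")
```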
The Fundamental Challenge for NAMs and IATAs
The core takeaway is profound: New Approach Methodologies (NAMs) and computational approaches (QSAR, read-across) cannot be expected to correlate better with animal or in vitro reference data than those reference data correlate with themselves upon replication.
This intrinsic variability in traditional data sets a baseline uncertainty for all new approach methodologies. The review argues that this variability must be explicitly considered when developing and validating NAM-based Integrated Approaches to Testing and Assessment (IATAs).
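A toy simulation makes this ceiling tangible. The sketch below uses a simplified noise model (not the paper's analysis): binary calls, a reference assay assumed to report the true call 85% of the time, and a hypothetical NAM that predicts the truth perfectly. Even that perfect NAM cannot appear more concordant with a single reference run than the assay itself is accurate.

```python
# Simplified noise model (illustrative only, not the paper's analysis):
# reference-assay variability caps how good any NAM can *appear* to be.
import random

random.seed(0)
N = 100_000   # simulated compounds
p = 0.85      # assumed chance that a single reference run reports the true call

truth = [random.random() < 0.5 for _ in range(N)]             # true genotoxicity calls
run1 = [t if random.random() < p else not t for t in truth]   # reference replicate 1
run2 = [t if random.random() < p else not t for t in truth]   # reference replicate 2
nam = truth                                                   # hypothetical perfect NAM

rep_concordance = sum(a == b for a, b in zip(run1, run2)) / N  # ~ p**2 + (1-p)**2
nam_concordance = sum(a == b for a, b in zip(nam, run1)) / N   # ~ p

print(f"replicate vs replicate concordance: {rep_concordance:.2f}")
print(f"perfect NAM vs single replicate:    {nam_concordance:.2f}")
```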
The Way Forward: Computational Toxicology as the Solution
The review concludes that, given these inherent data uncertainties, low variability, mechanistic relevance, and complementarity should be the most robust criteria for integrating an intelligent combination of NAMs into an IATA.
This is precisely where computational toxicology, specifically QSAR and read-across, offers indispensable value. By using well-defined, robust models, we can leverage mechanistic and structural information to make predictions that are less susceptible to the complex biological variability of the reference assays.
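To give a flavour of what a structure-based read-across workflow looks like in practice, the sketch below ranks hypothetical analogues of a query structure by Morgan-fingerprint Tanimoto similarity using RDKit. The SMILES strings and genotoxicity calls are placeholders, not real assay data, and a regulatory read-across would of course also require mechanistic justification and expert review.

```python
# Minimal read-across sketch (assumes RDKit is installed; structures and
# calls are placeholders, not real assay data): rank analogues of a query
# structure by Morgan-fingerprint Tanimoto similarity, then inspect the
# genotoxicity calls of the closest analogues.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical source compounds with placeholder genotoxicity calls
source = {
    "Nc1ccccc1": "positive",   # placeholder analogue and call
    "Oc1ccccc1": "negative",   # placeholder analogue and call
    "CCO": "negative",         # placeholder analogue and call
}
query = "Cc1ccccc1N"           # hypothetical query structure

def fingerprint(smiles):
    """Morgan (ECFP-like) bit-vector fingerprint, radius 2, 2048 bits."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query)
ranked = sorted(
    ((DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi)), smi, call)
     for smi, call in source.items()),
    reverse=True,
)
for sim, smi, call in ranked:
    print(f"{smi:12s} Tanimoto = {sim:.2f}  reference call = {call}")
```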


