Black box machine learning models can be dangerous for high-stakes decisions. They rely on untrustworthy databases, and their predictions are difficult to troubleshoot, explain and error-check in real time. Their use raises serious ethics and accountability issues.
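To make the contrast concrete, here is a minimal sketch (not from the article) of the kind of interpretable alternative Rudin advocates: a depth-limited decision tree whose every prediction can be read off as a short chain of explicit rules. The dataset, model and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch only (not from the article): fit a small, human-readable
# model in place of a black box. scikit-learn's breast-cancer dataset stands in
# for a high-stakes classification task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A depth-limited tree: every prediction traces to at most three explicit tests.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
# Print the learned rules so a domain expert can audit them directly.
print(export_text(model, feature_names=list(data.feature_names)))
```

Because the fitted rules are printed in full, a domain expert can troubleshoot and error-check each decision path directly, which is exactly what a black box prevents.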
Ethics declarations
Competing interests
The author declares no competing interests.
About this article
Cite this article
Rudin, C. Why black box machine learning should be avoided for high-stakes decisions, in brief. Nat Rev Methods Primers 2, 81 (2022). https://doi.org/10.1038/s43586-022-00172-0
This article is cited by
- Development and application of explainable artificial intelligence using machine learning classification for long-term facial nerve function after vestibular schwannoma surgery. Journal of Neuro-Oncology (2025)
- Machine Learning in Lithium-Ion Battery: Applications, Challenges, and Future Trends. SN Computer Science (2024)
- Multiple stakeholders drive diverse interpretability requirements for machine learning in healthcare. Nature Machine Intelligence (2023)
- The use-the-best heuristic facilitates deception detection. Nature Human Behaviour (2023)