Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
Acknowledgements
We wish to thank an anonymous reviewer for very helpful and timely suggestions for improvements to an earlier version of this article.
Author information
Contributions
B.D.E. and S.P.M. conceived of the paper. B.D.E., S.P.M., J.D., S.N. and N.M. produced first drafts. All authors provided paragraphs of text and edited drafts.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Machine Intelligence thanks Carolyn Ashurst for their contribution to the peer review of this work.
Supplementary information
Supplementary Information 1, 2
Rights and permissions
About this article
Cite this article
Porsdam Mann, S., Earp, B.D., Nyholm, S. et al. Generative AI entails a credit–blame asymmetry. Nat Mach Intell 5, 472–475 (2023). https://doi.org/10.1038/s42256-023-00653-1