The Epistemic Authorship Crisis in the Age of Generative AI: Overcoming the Responsibility Gap through Hyper Justification Obligations

Authors

  • Rizky Fahmi Saputra, Universitas Negeri Malang
  • Mohammad Isa Wibisono, Universitas Negeri Malang
  • Agung Winarno, Universitas Negeri Malang
  • Subagyo Subagyo, Universitas Negeri Malang

DOI:

https://doi.org/10.62951/ijecm.v3i1.1090

Keywords:

Algorithmic Gettier Cases, Epistemic Luck, Ethical Obligation, Large Language Models, Responsibility

Abstract

The use of Large Language Models (LLMs) in scientific research is becoming increasingly widespread, yet it presents epistemic risks that are not yet fully understood. This article examines how the probabilistic mechanisms of LLMs can produce outputs that appear correct and justified but in fact depend on epistemic luck, thereby resembling the pattern of Gettier cases. Using a conceptual-study approach, the research clarifies key concepts, analytically reconstructs the generative structure of LLMs, and offers a normative analysis of the implications for scientific accountability and authorship. The analysis shows that Algorithmic Gettier Cases (AGCs) arise when linguistic coherence misleads users into perceiving justification, even though the truth of the output is a statistical coincidence not supported by a valid causal relationship. This condition poses a serious challenge to the attribution of knowledge and of author responsibility in the production of academic texts. To address it, the article proposes the principle of the Hyper-Justification Obligation: an ethical obligation for researchers to actively verify, and provide causal reasoning for, every AI output before using it in scholarly work. The research contributes theoretically to understanding the epistemic risks of LLMs and offers an ethical foundation for academic practice in the era of generative AI.
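
To make the abstract's central claim concrete, the following minimal Python sketch (not drawn from the article itself; the toy model, tokens, probabilities, and ground-truth table are illustrative assumptions) shows how sampling from a next-token distribution can yield a fluent statement whose truth is a matter of statistical luck rather than of any evidential connection, the pattern the article labels an Algorithmic Gettier Case.

    import random

    # A minimal illustrative sketch: a toy "language model" is just a table mapping
    # a context to a probability distribution over candidate next tokens. All names,
    # tokens, and probabilities below are hypothetical assumptions for illustration.
    TOY_LM = {
        ("the", "boiling", "point", "of", "water", "is"): {
            "100": 0.55,  # statistically frequent and true (in deg C at 1 atm)
            "90": 0.25,   # equally fluent in context, but false
            "80": 0.20,   # equally fluent in context, but false
        },
    }

    # A separate ground-truth table that the toy model itself never consults.
    GROUND_TRUTH = {("the", "boiling", "point", "of", "water", "is"): "100"}


    def sample_next_token(context, temperature=1.0, rng=random):
        """Sample a next token; temperature flattens or sharpens the distribution."""
        dist = TOY_LM[context]
        tokens = list(dist.keys())
        weights = [p ** (1.0 / temperature) for p in dist.values()]
        return rng.choices(tokens, weights=weights, k=1)[0]


    def classify(context, token):
        """Fluency is constant across outputs; only coincidence with the facts varies."""
        return "true by statistical luck" if GROUND_TRUTH[context] == token else "fluent but false"


    if __name__ == "__main__":
        random.seed(0)
        ctx = ("the", "boiling", "point", "of", "water", "is")
        for _ in range(5):
            tok = sample_next_token(ctx, temperature=1.2)
            print(" ".join(ctx), tok, "->", classify(ctx, tok))

Whether a sampled completion is true depends only on which token the random draw happened to select; the fluency of the sentence, and hence the appearance of justification, is identical in every case. This is the sense in which the proposed Hyper-Justification Obligation requires verification that is external to the model's own output.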

Published

2026-01-05

How to Cite

Rizky Fahmi Saputra, Mohammad Isa Wibisono, Agung Winarno, & Subagyo Subagyo. (2026). The Epistemic Authorship Crisis in the Age of Generative AI: Overcoming the Responsibility Gap through Hyper Justification Obligations. International Journal of Economics, Commerce, and Management, 3(1), 18–23. https://doi.org/10.62951/ijecm.v3i1.1090