Tort Liability Analysis of Generative Artificial Intelligence in Judicial Application
Taking Generative Large Models as an Example
DOI: https://doi.org/10.62051/ijsspa.v7n2.12

Keywords: Artificial Intelligence, Tort Liability, Legal Framework, Tort Risk Avoidance, Algorithmic Regulation

Abstract
The judicial application of generative artificial intelligence has raised core legal questions: disputes over the eligibility of AI as a legal subject, the definition of its service attributes, and the allocation of tort liability. Taking the Tencent Dreamwriter case and other representative cases as its entry point, this paper systematically examines the difficulties of determining tort liability for generative AI under the current legal framework, focusing on disputes over service-provider liability, the dilemma of establishing negligence arising from algorithmic defects, and the ambiguity in defining infringing conduct. Through a comparative analysis of domestic and international legislative practice, the study highlights the "instrumental" nature of generative AI and the core of the dispute over its legal status: service providers, as controllers of the technology, must undertake the legal obligations of data-compliance review, labelling of generated content, and filtering of illegal information. The study further proposes a functionalist approach to determining tort liability, combining the theory of adequate causation with the principle of judicial classification to provide a dynamic framework for apportioning liability. At the level of risk regulation, China has formed a governance system that combines policy guidance with technical regulation, strengthening algorithmic transparency and content traceability through the Measures for the Labelling of Artificial Intelligence-Generated Synthetic Content. This paper aims to provide theoretical support and practical guidance for judicial discretion and institutional improvement in generative AI infringement disputes.
Copyright (c) 2025 International Journal of Social Sciences and Public Administration

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.