Moscow Journal of International Law

Principle of Responsibility of the Controlling Person as Approach to Eliminating the “Responsibility Gap” for Harm Caused by AI Systems and AI Applications

https://doi.org/10.24833/0869-0049-2024-4-132-145

Abstract

INTRODUCTION. The use of artificial intelligence technologies (hereinafter referred to as “AI”) is characterized by the mediation of human actions by autonomous processes. When technical expertise is unable to identify the person who caused the harm, this gives rise to a “responsibility gap”: an undesirable legal phenomenon in which responsibility for harm caused by the use of AI cannot be imposed on a specific person (or persons) under the rules of tort liability.

MATERIALS AND METHODS. The research employed general scientific and special methods, including the historical method, the methods of formal logic, analysis and synthesis, as well as the systemic and comparative legal methods.

RESEARCH RESULTS. To eliminate the “responsibility gap”, the article proposes a mechanism that makes it possible to fill in the missing elements of a tort committed with the use of AI when the error that led to the harm cannot be attributed de lege lata to any participant in the life cycle of an AI system or application. The starting point for the development of this mechanism was the theory of “guidance control” over the use of AI. A legal interpretation of the philosophical foundations of the theory of “guidance control” makes it possible to substantiate a general legal principle for allocating responsibility for harm caused by AI, according to which legal responsibility is borne by the person obliged to exercise human control over the use of the AI system or application, unless other perpetrators are identified. This principle is gradually gaining acceptance in international legal doctrine, as reflected in the requirement of human control over the use of AI set out in a number of international documents.

CONCLUSIONS. Provided that the general legal principle of responsibility of the controlling person for harm caused by AI is enshrined in a protocol to the Treaty on the EAEU, it can acquire the status of a regional international legal principle and thereby become the basis for regulating, within the EAEU, the allocation of responsibility for harm caused by AI. The proposed toolkit lends itself to legal consolidation through supranational legal regulation.

About the Author

E. N. Melnikova
Saint Petersburg University
Russian Federation

Elena N. Melnikova, Postgraduate Student, Saint Petersburg University

7–9, Universitetskaya Emb., Saint Petersburg, 199034



For citations:


Melnikova E.N. Principle of Responsibility of the Controlling Person as Approach to Eliminating the “Responsibility Gap” for Harm Caused by AI Systems and AI Applications. Moscow Journal of International Law. 2024;(4):132-145. (In Russ.) https://doi.org/10.24833/0869-0049-2024-4-132-145

This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 0869-0049 (Print)
ISSN 2619-0893 (Online)