Article Type: Research Article

Author

Assistant Professor of Public International Law, Law Department, Faculty of Humanities and Social Sciences, University of Kurdistan, Sanandaj, Iran.

Abstract

The development and use of artificial intelligence (AI) technology, despite increasing productivity and efficiency in various areas of life, has created numerous legal problems concerning individual and international criminal responsibility through the harm it causes and the crimes it commits. This article seeks to answer the question of who can be held responsible for crimes committed by AI-based systems. Is it possible to grant these systems the status of electronic person so that criminal responsibility can be imposed on them? The author argues that granting electronic personality and responsibility to AI, given its autonomy, analytical capacity, independent decision-making, and certain degrees of awareness serving as a mental element, is as convincing as it is for other non-human persons. Nevertheless, because the international criminal justice system rests on individual criminal responsibility, because fault is required to establish criminal liability, and because a suitable punishment for AI must be found, a unique formulation is needed: the personality of AI cannot be equated with that of a natural person or a legal entity. Since imposing criminal liability on AI itself is not yet practically or theoretically possible, owing to its lack of legal personality and of a guilty mind, the many natural and legal persons involved in the construction, design, programming, training, and deployment of these systems can be held responsible for their wrongful performance.

Keywords

Subjects

Article Title [English]

Criminal Accountability of Artificial Intelligence Systems for International Crimes and the Attributability of Combatant Status or E-Personhood: Necessities, Obstacles, and Solutions

Author [English]

  • Heidar Piri

Assistant Professor of Public International Law, Law Department, Faculty of Humanities and Social Sciences, University of Kurdistan, Sanandaj, Iran.

Abstract [English]

Introduction
The rapid development and proliferation of Artificial Intelligence (AI) systems, particularly in military and security domains, represent a paradigm shift with profound implications for international law. While offering potential benefits in efficiency and capability, the autonomous nature of advanced AI systems raises acute legal and ethical challenges, especially concerning accountability for serious violations of international law. The deployment of AI in armed conflict, such as through Lethal Autonomous Weapon Systems (LAWS), and its use in contexts that may facilitate international crimes, like genocide, war crimes, and crimes against humanity, necessitates a fundamental re-examination of traditional legal frameworks. Current international criminal law is fundamentally anthropocentric, built upon the principle of individual criminal responsibility which requires both a physical act (actus reus) and a guilty mind (mens rea). This framework struggles to accommodate entities that can act independently, learn from their environments, and cause significant harm without direct, predictable human intervention at the moment of the act. This article delves into the core legal dilemma: who can and should be held accountable when an AI system commits an act that constitutes an international crime? It explores the feasibility and necessity of attributing criminal liability directly to AI systems themselves, potentially by granting them a novel legal status such as electronic personality (e-person), as opposed to, or in conjunction with, holding the various human actors in their chain of development and deployment responsible.
 
Research Question(s)
This research is guided by the following primary questions:

In the context of international crimes, who can be held accountable for proscribed acts or crimes committed by AI systems? Are there grounds for labeling such systems as criminals or granting them the status of e-person?
What are the legal and philosophical grounds for, and obstacles against, granting autonomous AI systems a form of legal personality (e-personhood) to bear rights and obligations, including criminal liability?
Can the existing requirements for establishing criminal responsibility under international law, particularly the actus reus and mens rea, be satisfied by AI systems in their current or foreseeable state of development?
What alternative models of liability, such as holding programmers, manufacturers, military commanders, or states responsible, are available and effective under current international law, and what are their limitations?
What are the potential solutions and necessary legal reforms, including at the level of the ICC Statute, to address the accountability gap posed by AI systems capable of committing international crimes?

 
Methodology
This article is based on a descriptive-analytical research method, with the necessary data collected through library research. The research adopts a critical and forward-looking approach, analyzing the coherence and sufficiency of existing legal doctrines, identifying conceptual gaps, and proposing normative solutions based on logical reasoning, comparative analysis of analogous legal constructs (e.g., corporate criminal liability), and the functional demands of international justice.
 
Results
The investigation yields several key findings:

The Case for Electronic Personality and Direct AI Liability: Arguments for granting AI systems a form of legal personality are compelling, drawing parallels with the historical extension of legal personhood to corporations. Proponents argue that highly autonomous AI, capable of independent analysis, decision-making, and learning, possesses a functional equivalence to the rational agency required for responsibility. The concept of "electronic responsibility" is presented as a necessary tool to prevent human actors from evading liability by hiding behind the complexity and autonomy of machines.
Significant Legal Obstacles: The path to direct AI criminal liability is fraught with major hurdles under current law:

Anthropocentric Foundations: The Rome Statute and the general principles of international criminal law are firmly rooted in human agency. Terms like "person" are interpreted as natural persons.
The Mens Rea Requirement: The most formidable barrier is the mental element. While an AI system can arguably satisfy the actus reus (the physical act), attributing intent, knowledge, or recklessness (subjective mental states tied to consciousness, moral understanding, and foresight) to a machine remains deeply problematic both legally and philosophically.
Lack of Legal Personality: AI systems currently lack recognized legal personality in international law, a prerequisite for being a subject of rights and duties, including criminal liability.
Punishment Incommensurability: Traditional penal theories (retribution, deterrence, rehabilitation) lose meaning when applied to non-human entities that cannot feel guilt, suffer, or be morally reformed.


Analysis of Alternative Human Liability: In the absence of direct AI liability, the focus shifts to human actors. However, attributing responsibility to programmers, manufacturers, operators, or military commanders is often hampered by practical and legal difficulties: the problem of many hands, the challenge of proving individual mens rea for unforeseeable autonomous actions, and the potential lack of "effective control" required for command responsibility when dealing with learning systems.

The article concludes that a multi-pronged approach is needed:

Strict Liability Models: The ICC and international criminal law may need to embrace forms of strict or no-fault liability for situations involving autonomous systems, moving away from mens rea as an absolute central pillar in certain contexts.
Regulation and Prohibition: Strengthening IHL compliance through rigorous legal reviews of new weapons (Article 36, AP I), enhancing precautionary measures, and potentially negotiating treaties to limit or ban certain types of autonomous weapons.
Statutory Reform: For direct AI liability to become viable, the Rome Statute would require amendment. Articles 1 and 25(1) could be revised to explicitly extend the Court's personal jurisdiction to legal persons or electronic persons, and a new framework for electronic responsibility would need to be codified.
Ethical and Technical Safeguards: Implementing robust ethical guidelines for developers, incorporating IHL rules directly into AI training (law encoding), and creating reliable fail-safe mechanisms for deactivation.

 
Conclusion
The advent of AI systems with significant autonomy presents one of the most profound challenges to the international criminal justice system. While the theoretical appeal of holding AI directly accountable, grounded in its capacity for autonomy, independent analysis, decision-making, and a functional approximation of intentionality, is as compelling as arguments for extending legal personality to other non-human entities, it is currently precluded by foundational legal principles. The core impediments are the irreconcilable absence of a mens rea in AI and the lack of an established legal personality, making the imposition of direct criminal responsibility neither theoretically coherent nor practically feasible under the extant anthropocentric framework. Consequently, in the near term, the most viable accountability mechanisms must focus on reinforcing the responsibility of the myriad natural and juridical persons involved in the construction, programming, training, and deployment of AI systems for their malfunction or unlawful outcomes. However, this human-centric approach is itself severely constrained by traditional mens rea requirements, creating a significant liability gap and risking an anomaly where international criminal law becomes ineffective in addressing harms caused by this technology, thereby undermining human rights protection and international justice.
To bridge this gap, a dual-path strategy is essential. In the immediate term, it requires strengthening preventive International Humanitarian Law (IHL) regulations and rigorously applying existing models of human responsibility across the AI lifecycle. Simultaneously, for the future, the international legal community must engage in proactive and principled reform. As AI evolves towards greater sophistication, the pressure to reconceptualize legal personhood will intensify. The ICC will only be able to effectively prosecute crimes involving autonomous AI agents if it embraces legal innovations such as strict liability and other alternatives to fault-based liability, which have so far been marginalized. Ultimately, ensuring accountability and preventing impunity for the gravest crimes may require international criminal law to transcend its strict anthropocentrism and incorporate a functional, graduated model of electronic responsibility. This represents a seismic shift in legal philosophy, demanding careful preparation and reasoned debate to develop the unique formula necessary for a future where the law keeps pace with technological agency.

Keywords [English]

  • Electronic responsibility
  • AI
  • Autonomous weapons
  • International Criminal Court
  • Legal personality
  • International crimes
Persian References

    • Abouzari, Mehrnoush; Barzegar, Mohammadreza; Naderi, Zahra (1402 [2023]), ‘‘Feasibility of Criminal Liability of AI-Based Weapons of War and the Problem of Impunity in the International Criminal Court’’, Modern Technologies Law Journal, Vol.4, No.8. doi: 10.22133/mtlj.2023.389496.1184 [in Persian]
    • Basiri, Abbas (1398 [2019]), ‘‘Accountability in the Use of Autonomous Robotic Weapons’’, Ta'ali-e Hoquq, No.4. doi: 10.22034/thdad.2019.240068 [in Persian]
    • Piri, Heidar (1404 [2025]), ‘‘Military Use of AI-Based Autonomous Weapon Systems in Armed Conflicts: States' Protective Obligations within the Framework of International Humanitarian Law’’, The Journal of Human Rights, Vol.19(2). doi: 10.22096/hr.2024.2019027.1645 [in Persian]
    • Piri, Heidar (1404 [2025]), ‘‘International Responsibility of States in Relation to the Use of AI-Based Weapons in Armed Conflicts’’, Modern Technologies Law Journal, Vol.6(12). doi: 10.22133/mtlj.2025.449001.1353 [in Persian]
    • Ranjbarian, Amir Hossein; Bazzar, Vahid (1397 [2018]), ‘‘Compliance with International Humanitarian Law by Autonomous Military Robots and Responsibility for Their Actions’’, International Law Review, No.59. doi: 10.22066/cilamag.2018.31884 [in Persian]
    • Arab-Chadegani, Reza; Moradian, Bahram (1402 [2023]), ‘‘The Legitimacy of Employing Intelligent Military Systems in Armed Conflicts’’, Public Law Studies Quarterly, Vol.53, No.4. https://doi.org/10.22059/jplsq.2021.326075.2807 [in Persian]

     


     

    References

    Books & Articles

    • Abbott, Ryan (2020), The Reasonable Robot: Artificial Intelligence and the Law, Cambridge University Press.
    • Abbott, Ryan; Sarch, A (2019), ‘‘Punishing Artificial Intelligence’’, UC Davis Law Review, Vol.53.
    • Acquaviva, Guido (2022), ‘‘Autonomous Weapons Systems Controlled by Artificial Intelligence: A Conceptual Roadmap for International Criminal Responsibility’’, The Military Law and the Law of War Review, Vol.60(1).
    • Albadi, Nuha; Kurdi, Maram & Shivakant Mishra (2019), Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arab Social Media, Proceedings of the ACM on Human-Computer Interaction, Vol.3.
    • Beard, JM (2014), ‘‘Autonomous Weapons and Human Responsibilities’’, Georgetown Journal of International Law, Vol.45(3).
    • Bo, M; Bruun, L. & V. Boulanin (2022), ‘‘Retaining Human Responsibility in the Development and Use of Autonomous Weapon Systems: On Accountability for Violations of International Humanitarian Law Involving AWS’’, Stockholm International Peace Research Institute.
    • Bonfim, Tany Calixto (2022), Criminal Liability of Artificial Intelligent Machines: Eyeing into AI’s Mind, Lund University.
    • Casey-Maslen, Stuart (2014), Weapons Under International Human Rights Law, Cambridge University Press.
    • Cerka, Paulius; Grigiene, Jurgita and Gintare Sirbikyte (2017), ‘‘Is It Possible to Grant Legal Personality to Artificial Intelligence Software Systems?’’, Computer Law & Security Review, Vol.33(5).
    • Chengeta, Thompson (2016), ‘‘Accountability Gap: Autonomous Weapon Systems and Modes of Responsibility in International Law’’, Denver Journal of International Law and Policy, Vol.45.
    • Chesterman, Simon (2020), ‘‘Artificial Intelligence and the Limit of Legal Personality’’, International and Comparative Law Quarterly, Vol.69.
    • Lima, Dafni (2018), ‘‘Could AI Agents be held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law’’, South Carolina Law Review, Vol.69(3).
    • Dimitrova, R (2022), ‘‘Criminal Liability Associated with Artificial Intelligence Entities under the Bulgarian Criminal Law’’, in 2022 XXXI International Scientific Conference Electronics, IEEE.
    • Ellamey, Yasser; Elwakad, Amr (2023), ‘‘The Criminal Responsibility of Artificial Intelligence Systems: A Prospective Analytical Study’’, Corporate Law & Governance Review, Vol.5(1).
    • Endsley, M. R; Garland, D (2000), Situation Awareness Analysis and Measurement, CRC Press.
    • Ford, CM (2017), ‘‘Autonomous Weapons and International Law’’, South Carolina Law Review, Vol.69.
    • Gordon, John-Stewart (2021), ‘‘Smart Technologies and Fundamental Rights’’, Value Inquiry Book Series, Vol.350, Brill.
    • Hallevy, Gabriel (2013), When Robots Kill: Artificial Intelligence under Criminal Law, UPNE.
    • Hallevy, Gabriel (2015), Liability for Crimes Involving Artificial Intelligence Systems, Springer.
    • Hallevy, Gabriel (2016), ‘‘The Criminal Liability of Artificial Intelligence Entities’’, Akron Intellectual Property Journal, Vol.4(2).
    • Hawking, Stephen (2018), Brief Answers to the Big Questions, John Murray.
    • Bryson, Joanna J.; Diamantis, Mihailis E. & Thomas D. Grant (2017), ‘‘Of, For, and By the People: The Legal Lacuna of Synthetic Persons’’, Artificial Intelligence and Law, Vol.25.
    • Kelsen, Hans (2000), Pure Theory of Law, Lawbook Exchange.
    • King, Thomas C., et al (2020), ‘‘Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions’’, Science and Engineering Ethics, Vol.26(1).
    • Lagioia, Francesca; Sartor, Giovanni (2020), ‘‘AI Systems Under Criminal Law: A Legal Analysis and a Regulatory Perspective’’, Philosophy & Technology, Vol.33.
    • Lior, Anat (2020), ‘‘AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy’’, Mitchell Hamline Law Review, Vol.46.
    • Simmler, Monika; Markwalder, Nora (2019), ‘‘Guilty Robots? Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence’’, Criminal Law Forum, Vol.30.
    • Mahardhika, Vita; Astuti, Pudji; Mustafa, Aminuddin (2023), ‘‘Could Artificial Intelligence be the Subject of Criminal Law?’’ Yustisia Jurnal Hukum, Vol.12, No.1.
    • Malik, S (2018), ‘‘Autonomous Weapon Systems: The Possibility and Probability of Accountability’’, Wisconsin International Law Journal, Vol.35.
    • Mazzacuva, F (2021), ‘‘The Impact of AI on Corporate Criminal Liability’’, Revue Internationale de Droit Pe´nal, Vol.92.
    • McFarland, Tim; McCormack, Tim (2014), ‘‘Mind the Gap: Can Developers of Autonomous Weapon Systems be Liable for War Crimes?’’, International Law Studies, Vol.90.
    • Mozur, Paul (2018), ‘‘A Genocide Incited on Facebook, with Posts from Myanmar's Military’’, New York Times.
    • Mulligan, Christina (2018), ‘‘Revenge Against Robots’’, S.C. Law Review, Vol.69.
    • Naučius, Mindaugas (2018), ‘‘Should Fully Autonomous Artificial Intelligence Be Granted Legal Capacity?’’ Teises Apzvalga Law Review, Vol.17(1).
    • Naffine, Ngaire (2003), ‘‘Who are Law's Persons? From Cheshire Cats to Responsible Subjects’’, Modern Law Review, Vol.66.
    • Novelli, Claudio; Giorgio Bongiovanni & Giovanni Sartor (2022), ‘‘A Conceptual Framework for Legal Personality and its Application to AI’’, Jurisprudence, Vol.13.
    • Ohlin Jens David (2016), ‘‘The Combatant’s Stance: Autonomous Weapons on the Battlefield’’, International Law Studies, Vol.92.
    • Osmani, Nora (2020), ‘‘The Complexity of Criminal Liability of AI Systems’’, Masaryk University Journal of Law and Technology, Vol.14.
    • Ozdemir, Gloria Shkurti (2019), Artificial Intelligence Application in the Military: The Case of United States and China, Istanbul: Seta.
    • Pagallo, U (2011), ‘‘Killers, Fridges, and Slaves: A Legal Journey in Robotics’’, AI & Society, Vol.26.
    • Platvoet, Veerle (2020), ‘‘The Attribution of Limited Legal Personality to Nonhuman Species’’, Journal of Animal Ethics, Vol.10.
    • Prosperi, Luigi; Terrosi, Jacopo (2017), ‘‘Embracing the Human Factor”, Journal of International Criminal Justice, Vol.15(3).
    • Rousseau, Bryant (2016), In New Zealand, Lands and Rivers Can Be People (Legally Speaking), N.Y. TIMES.
    • Sari, Onur; Celik, Sener (2021), ‘‘Legal Evaluation of the Attacks Caused by Artificial Intelligence-based Lethal Weapon Systems within the Context of Rome Statute’’, Computer Law & Security Review, Vol.42.
    • Sehrawat, Vivek (2017), ‘‘Autonomous Weapon System: Law of Armed Conflict (LOAC) And Other Legal Challenges’’, Computer Law & Security Review, Vol.33(1).
    • Stanton, Gregory (2016), The Ten Stages of Genocide, Genocide Watch.
    • Swart, Bert (2009), ‘‘Modes of international Criminal Liability’’, in The Oxford Companion to International Criminal Justice, Antonio Cassese(ed.), Oxford University Press.
    • Swart, Mia (2023), ‘‘Constructing "Electronic Liability" for International Crimes: Transcending the Individual in International Criminal Law’’, German Law Journal, Vol.24.
    • Weigend, Thomas (2023), ‘‘Convicting Autonomous Weapons? Criminal Responsibility of and for AWS under International Law’’, Journal of International Criminal Justice, Vol.21(5).
    • Wendehorst, Christiane (2020), ‘‘Strict Liability for AI and Other Emerging Technologies’’, Journal of European Tort Law.
    • Ying, Hu (2019), ‘‘Robot Criminals’’, University of Michigan Journal of Law Reform, Vol.52.

     

    Cases & Documents

    • European Commission, Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021).
    • Civil Law Rules on Robotics, European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
    • France v. Goering, 22 I.M.T. 411, 466 (1946).
    • Human Rights Council, Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar, at 323, U.N. Doc. A/HRC/39/CRP.2 (Sept. 28, 2018)
    • International Criminal Court, Elements of Crimes art. 8 intro., U.N. Doc. PCNICC/2000/1/Add.2 (June 30, 2000).
    • ‘‘Orangutan Sandra Granted Personhood Settles into New Florida Home’’, The Guardian (Nov. 7, 2019).
    • U.N. Office, The C.C.W. Informal Meeting of Experts on L.A.W.S., Communication of the International Committee of the Red Cross to the Conference on Disarmament convened by the United Nations, 5 (Apr. 11-15, 2016).
    • Council of the European Union, Artificial Intelligence Act, Brussels, 2 February 2024.
    • Prosecutor v. Halilović, IT-01-48-T, T.Ch. I, Judgment, ¶ 61 (Nov. 16, 2005).
    • Prosecutor v Bemba, ICC-01/05-01/08, Pre-Trial Chamber II, Decision on the Confirmation of Charges, 15 June 2009.