Cybersecurity Law
Will Robots Always Do What They Are Told? Prosecuting Machines Under Canadian Law1

The Digital Age has brought an unprecedented surge of innovations in Information Technologies which have drastically reshaped the environment we live in and our rapport with machines. As robots through AI take on new roles and responsibilities, this raises a fundamental question: could smart robots eventually be held criminally liable for their actions?

In this paper, I hypothesize that Canadian courts are currently unable to prosecute such “machines” without first reviewing certain fundamental dispositions and concepts of the Law. To test this hypothesis, I have reviewed certain legal publications indexed in Quebec’s Legal Information Access Center (CAIJ) and similar Canadian databases, as well as online media and literature.

My recommendation is that, while it would be possible to prosecute robots under the Criminal Code under certain conditions, it would be more beneficial overall for our society to establish a distinct set of rules specifically and uniquely designed for robots.




  • Decision-making process in AI: emulating human reasoning?
  • What are smart robots?
  • The need for balance and accountability


  • Review of core principles and definitions
  • Canadian Charter of Rights and Freedoms
  • Robots as subjects of law


  • Legal personhood for machines: a bridge too far?
  • The elusive notion of “mens rea”
  • Due process and attribution
  • Machine sentencing: an illusion of justice?


  • International initiatives
  • The need for updating present legislation
  • The implementation of a distinct set of rules for robots




The Digital Age has brought an unprecedented surge of innovations in Information Technologies which have drastically reshaped the environment we live in and our rapport with machines. Increasingly, robots and AI are taking on new roles and responsibilities with remarkable autonomous decision-making capacities. This, combined with a much-anticipated breakthrough in quantum computing, makes it possible to envisage that, in our lifetime, robots will have a “mind” of their own. This raises a fundamental question: could robots eventually be held criminally liable for their actions?

In this paper, I hypothesize that, at present, Canadian courts cannot prosecute such “machines” without first reconsidering and adapting certain fundamental dispositions of the Criminal Code and the Canadian Charter of Rights and Freedoms. To test this hypothesis, I have reviewed legal publications on the matter indexed in Quebec’s Legal Information Access Center (CAIJ) and similar Canadian databases, as well as online media and literature.

The first part of this paper will focus on the emergence of AI technology and the relevance of envisaging criminal liability for smart robots2. The second part will review the main features of criminal law, while the third part will consider some of the challenges that extending criminal liability to robots would entail.

My recommendation is that, while it could be possible and pertinent to prosecute smart robots in Canada under certain conditions, it would be more beneficial overall for society to establish a distinct set of rules specifically and uniquely designed for robots. 

Over the centuries, through technical innovations, humans have strived to adapt to their environment in order to survive and to improve their life conditions. In that respect, the Digital Age has disrupted human society as never before. Thanks to Information Technologies, computers now process instantly vast amounts of data which enables them to perform simultaneously complex tasks and functions that were previously unattainable. This astounding capacity – combined with interconnectivity, interoperability and access to Internet – has enabled Artificial Intelligence to thrive and pave the way for robotics. 

To better understand the issues inherent to AI in terms of criminal liability, it is important to determine what constitutes “Artificial Intelligence”, how it works, and what distinguishes smart robots. 

a. Decision-making process in AI: emulating human reasoning? 

In the early 1960s and thereafter, while researchers were already hard at work developing the technology, the notion of Artificial Intelligence was a popular topic in sci-fi movies (Star Trek, Star Wars, etc.) but, for the most part, it was considered a fictitious concept. Since the early 2000s, however, the progress achieved by IT developments and the renewed interest generated for AI have been such that Artificial Intelligence is being discussed as if it were an accomplished fact. Has artificial intelligence been achieved? And what does it consist of?

First, there is a fundamental question: what is intelligence? The Oxford dictionary defines intelligence as “the ability to learn, understand and think in a logical way about things”3.

In other words, it is the capacity to process information in an evolutive manner. Comparatively, artificial intelligence is described as “the capacity of computers or other machines to exhibit or simulate intelligent behaviour”4.

Given that, historically, the notion of intelligence had been reserved for or assimilated to humans (and, to a lesser extent, to animals), are machines capable of intelligence? According to John McCarthy, that is the very purpose of AI: “the science of making intelligent machines, especially intelligent computer programs”5. Through the use of algorithms, software and data sets, computers can achieve remarkable results in analyzing large volumes of information. Given their memory and the lightning speed at which they can process data, they now significantly outperform humans in many areas. As a result, even though computers do not reproduce or explain how the human mind works, it can be argued that they are intelligent constructs6 that imitate human logical thinking.

And this is only the beginning. As Jill R. Presser et al. point out, “With continuing increases in computing power, storage capacity, algorithmic sophistication, and the quantity and accessibility of training data, the cognitive acuity of AI is only bound to grow”7. Not surprisingly, reality is beginning to catch up with science fiction as various kinds of autonomous robots appear in homes, businesses, and industries.

b. What are smart robots? 

Embodied, disembodied, and sometimes referred to as “artificially intelligent non-human entities”8, smart robots9 have been described in a number of ways that suggest some form of autonomy, that is, the ability to perform some of their tasks or functions without human assistance or supervision10. In one definition, robots are represented as “machines that can sense their environment, process the information they sense, and act directly upon their environment”11. Though the level of autonomy of robots varies greatly, it is worth noting that machine consciousness (also known as artificial general intelligence, or AGI) has not been achieved, though some believe this technology is imminent12.

c. The need for balance and accountability 

As seen above, AI is pervasive in our environment and is increasingly entrusted with critical tasks and functions that used to be accomplished by humans alone; this state of affairs is not exempt from situations that could inflict harm, directly or indirectly, including death. As Presser points out, the harm might be caused by accident, by design, by autonomous choice or action, through an external source or some other unforeseen cause13.

This in turn may raise the thorny and complex issue of criminal liability: who will be held accountable for unlawful or criminal acts committed by an AI entity acting on its own command?14 The developer, the programmer, the user, the robot itself, or a combination thereof? Notwithstanding the technical complexity that this entails for lawyers and the courts in terms of attribution, how will the hurdle of proprietary rights (IP rights and trade secrets) be circumvented to allow for greater understanding, transparency and due process? 

And if the harm has been caused by the AI alone (deliberately or not), and through no fault or negligence attributable to a human, can justice be rendered if no one is to blame?


The purpose of criminal law is “to help maintain public safety, security, peace and order in society”15. In other words, it is meant to deter legal subjects from conducts that are undesirable or harmful in society. In the Canadian justice system, as in all Common Law jurisdictions, criminal liability rests on a handful of fundamental rights and principles that are protected by legal instruments such as the Criminal Code16 and the Canadian Charter of Rights and Freedoms17 and upheld by the courts. In order to determine whether robots can be prosecuted under our court system, we shall review some of them.

a. Review of core principles and definitions

First and foremost amongst the principles of criminal justice is the presumption of innocence. It is the cornerstone of Canadian Criminal Law and can be found in Sect. 6(1) of the Criminal Code:

6. (1) Where an enactment creates an offence and authorizes a punishment to be imposed in respect of that offence,

(a) a person shall be deemed not to be guilty of the offence until he is convicted or discharged under section 730 of the offence;


As a result, in order to obtain a conviction, the burden of proof entirely rests on the prosecution as it must establish beyond a reasonable doubt that the accused has indeed committed a crime.

To achieve this, two elements must be proven, that is, the actus reus (the perpetration of an unlawful act by the accused) and the mens rea (that is, the state of mind or willful intent by the accused to commit same). 

b. Canadian Charter of Rights and Freedoms 

The Canadian Charter of Rights and Freedoms is an important part of our Constitution that brings together rights deemed essential in a free and democratic society.

In order to ensure the principle of equality before the law, the Charter contains rights such as the right to life and liberty18, the right to a lawyer19, and also the right to be presumed innocent until proven guilty20. These rights accrue to all Canadians (and, to some extent, to non-citizens), while a limited subset of them has been recognized for corporations.

c. Robots as subjects of law? 

It is worth noting that neither the Criminal Code nor the Canadian Charter provides any definition for the notion of person.

While the Criminal Code endeavors to regulate the behavior of individuals, the pertinence of its interpretation and application must take into account the historical, political, cultural and contextual factors of society as they evolve over time, a process that originates from and is expressly designed to meet the needs and expectations of humans.

That said, it is true that the Criminal Code does contain provisions pertaining to legal persons which, hitherto, have been restricted to corporations. This, in itself, suggests that it could eventually embrace AI entities, as we will see further on.

Comparatively, and although it refers on occasion to the notion of citizen, the Canadian Charter is a little ambiguous in that respect, considering that some of its rights apply exclusively to humans21 while others can be invoked by legal persons22.

In light of the above, considering that smart robots do not benefit from any specific or general form of legal status, they cannot be prosecuted in their own right. Rather, they must be assimilated to an object or thing (inanimate or not) under the responsibility of their owners23, whose liability will be engaged as the case may be.

As a result, as AI evolves into more autonomous entities with a mind of their own,24 the issue of criminal liability may eventually expose the Canadian legal system to situations where attribution or conviction becomes impossible25, a scenario that is unacceptable in a society where one’s actions are regulated by the principle of accountability. 


From the outset, the mere prospect of prosecuting a machine is (certainly from a human point of view) unnatural. It challenges several hitherto generally accepted values and concepts, such as the primacy of mankind in the universe and the superiority of humans over machines. More importantly, it elevates robots to a legal status similar to that of humans, a revolutionary and very unsettling prospect to say the least. In addition, while this possibility threatens deeply rooted cultural beliefs, it calls into question the very significance of free will, consciousness, freedom and, ultimately, the notion of life itself and what it means to be human.

a. Legal personhood for machines: a bridge too far? 

There is much debate in the literature about the desirability of criminal liability for AI entities. While some argue that this would help resolve some practical problems of accountability inherent to self-aware technology (in particular in circumstances where humans are not to blame), others, like Abbott, claim that the trade-offs would be too costly for society, whereas other options grounded in civil liability would arguably yield similar results26.

In any event, if our society decides to grant AI legal personhood, what will be the preconditions? While “all humans are created equal” in principle, the same cannot be said about AI entities which are the end result of a combination of different suppliers, developers and programmers. This raises the complex questions of technical criteria: what functions, processes, and algorithms would be required? Also, what degree of self-awareness and consciousness would this entail? Ultimately, should there be different classes of legal personhood for smart robots?

Moreover, if smart robots are to be criminally prosecuted, this suggests that they would be entitled to due process and, consequently, to some rights as well. How far should the legislator go in granting such rights? Should they be limited in scope in a manner similar to those conceded to corporations? 

Presser warns that granting legal personhood to smart robots could unduly anthropomorphize and enhance the place of AI (that is, more strikingly, of non-human, non-living machines) in society27 and, thereby, bring about dire consequences for humanity.28

b. The elusive notion of “mens rea” 

If the Canadian legislator confers a degree of legal personhood on robots in order to subject them to the Criminal Code, robots will be prosecuted similarly to individuals and corporations, under the same fundamental principles seen above. The concepts of actus reus and mens rea will therefore apply to them. How will this work?

The demonstration that an unlawful act was committed voluntarily (the actus reus) may present, from the outset, a pointed difficulty when applied to AI. By definition, an act is voluntary if it is “proceeding from the will or from one's own choice or consent”29. While this is often implied for humans, such is not the case for machines. As Presser points out, if an act is the result of programming or coding, there cannot be an expression of choice or will (even if the robot can learn autonomously). Consequently, the constitutional requirement for the actus reus cannot be satisfied.30 To resolve this problem, some authors suggest that the notion of voluntariness should not be applied to robots but reduced to “a material performance with a factual-external presentation”31.

The second element required to obtain the conviction of a smart robot, the guilty state of mind (mens rea), also presents considerable difficulties for the prosecution. To begin with, it is important to recall that Criminal Law has evolved over time to meet the needs of humans and that it is based on a distinct set of human values derived from physical and psychological experience32. As a result, the idea of transposing this regime to a non-human entity, no matter how advanced, seems highly perilous on many accounts. 

Assuming that self-awareness can be achieved technologically, what level of consciousness would be required from a machine to meet the mens rea requirement? Similarly, how will the blameworthiness of the machine be demonstrated, considering the inherent complexity of algorithm33 programming? In that regard, Ying Hu suggests the adoption of a “less human-centric” approach to morality similar to what is already in place for corporate entities34. While this approach may be convenient, the proposition is problematic because corporations35 are incorporated entities whose criminal liability is limited to certain crimes and sentences, whereas robots, as autonomous self-aware entities, should bear a criminal liability identical to that of humans (notably to account for serious offenses such as murder, harassment, etc.).

c. Due process for AI

If robots are to be convicted under the Criminal Code, one can expect they would be entitled to due process and, necessarily, some rights. Considering that their criminal liability would not be as limited as is the case for corporations (subject to the level of autonomy involved), they could presumably be entitled to the rights36 and freedoms provided by the Canadian Charter for natural persons, a precedent with unfathomable repercussions. 

According to author Adou, a robot would have access to the same traditional means of defense that humans rely upon (necessity, self-defense, etc.), in addition to sui generis37 defenses in the event, for example, that the robot is hacked or becomes the victim of a singularity38.

d. Machine sentencing: an illusion of justice? 

Canadian Criminal Law hinges on consequentialist and retributivist approaches. The former, founded on utilitarianism (the greater good), seeks to prevent and deter crime and to rehabilitate offenders, whereas the latter focuses on punishment according to the severity and the blameworthiness of the crime at hand39.

Applied to AI entities, the concept of sentencing raises several questions. First, even if the principles applicable to humans in the determination of a sentence40 were followed, certain sentences might appear to be meaningless, inapplicable or inconsequential if applied to non-human entities. In response to this, Adou contends that, similarly to corporations, the issue of sentencing can be resolved with certain adjustments that take into account both the above-mentioned principles and the specificities inherent to AI.41 As a result, various punishments such as deactivation, destruction, imprisonment, decommissioning, community services or even fines42 could be envisaged. 

Somehow, this perspective fails to answer the legitimate expectation humans have of the application of Criminal Law, that is, a sense of justice. While the achievement of consequentialist objectives is arguable in some respects, the imposition of such punishments43 on non-human, non-living entities appears inappropriate and ineffective in light of the fact that only humans will suffer the real-life experience of these penalties.


For the time being, considering AI possesses neither the autonomy, the consciousness, nor the legal personhood required to be held criminally liable, the responsibility for its unlawful actions will continue to accrue to humans44 (or corporations). However, as technology continues to evolve while incidents involving autonomous robots multiply, lawmakers will need to address this issue. 

a. International initiatives 

Considering the fundamental questions and the risks raised by autonomous and self-aware AI globally45, there is a pressing need for the international community to control the evolution of this industry. In 2017, the European Parliament adopted a resolution entitled “Civil Law Rules on Robotics” which included a Charter for Robotics46. More recently, UNESCO adopted the Draft Text of the Recommendation on the Ethics of Artificial Intelligence47 aiming to provide “a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI”. In both instruments, however, the concept of criminal liability for AI remains an open question.

b. The need for updating current legislation 

Meanwhile, Canada also endeavors to catch up on AI with Bill C-27 (Digital Charter Implementation Act)48, which contains in Part 3 the Artificial Intelligence and Data Act (AIDA), designed “to regulate international and interprovincial trade and commerce in artificial intelligence systems”49. As with the Draft Text adopted by UNESCO above50, it contains no provisions pertaining to the criminal liability of AI.

In light of the findings made earlier in this paper, could AI be subject to criminal liability in Canada? As the case may be, how could this be accomplished? 

  • 1)  In its current state, Canadian criminal law does not allow the prosecution of AI entities because they are things and, as such, cannot be held accountable for the harm they might cause. Consequently, to remedy this, legal personhood will need to be conferred upon robots that meet certain cognitive requirements.
  • 2)  In accordance with the Draft Text adopted by UNESCO regarding AI51, it should be made clear in the law that failure to integrate and execute (as reasonably appropriate) human values and related principles in the developing, programming52 and/or use of a machine expected or, foreseeably, likely to develop autonomy, self-awareness and/or consciousness, will constitute an indictable offence.
  • 3)  While the distinction between legal persons (corporations) and natural persons is quite intuitive, the same cannot be said of humans and robots, especially considering that the latter are often designed to resemble and closely emulate the former for greater interaction. The law should therefore specify and provide that robots, while they can be entitled to some rights, cannot, under any circumstances, be considered equal to humans under the law.
  • 4)  Further, for greater clarity, the Canadian Charter should expressly exclude, one way or another, the “right to life” for robots so that the capacity to terminate them will always be available upon sentencing.
  • 5)  Finally, if we agree on the principle that Justice is, first and foremost, a human concept meant and designed to serve human interests, the appearance of justice in sentencing must reflect this, at least from a human point of view53. As a result, the sentences applicable to robots should be designed accordingly54.

c. The implementation of a distinct set of rules for robots 

Given the legal complexity and the multi-layered controversy that the prospect of legal personhood for AI within the Criminal Code would inevitably generate, it might be safer and wiser to avoid this Pandora’s box altogether in favor of the creation of a distinct criminal liability regime specifically designed for smart robots, as advocated by Hu55. This approach would be socially more acceptable and would provide similar benefits, while enabling humans to set distinct moral standards for all robots.56


The Digital Age has set the stage for, arguably, a time of reckoning in human History where intelligent machines will become increasingly autonomous and an integral part of our environment to a point where, ultimately, “machine self-consciousness” will be achieved57. The repercussions of this technological achievement are hard to fathom. While the benefits and convenience that AI can provide are undeniable, the trade-offs for humans have yet to be ascertained and raise serious concerns.58

To date, while technological innovations in AI generally seem to get the thumbs up to proceed at the speed of light, fundamental legal issues compel us to take a hard look at the implications inherent to machine self-awareness and at the anticipated effects and consequences it will have on how we define ourselves as humans.

Considering the potential risks that AI presents for individuals and for humanity, the rule of law (in Canada and globally) needs to be revised preemptively59 to ensure accountability in the Robotic Age and to deter uses that would be foreseeably nefarious to human society. 


  1. This paper was written for the LL.M program in Privacy and Cybersecurity at Osgoode Hall (York University) with the generous insight provided by Dr. Elizabeth Kirley.
  2. There are several expressions and definitions in literature to designate the AI technology that ought to be subjected to criminal liability. In this paper, the expression “smart robots” will refer to the description suggested by author Ying Hu (see note 9 on page 7 hereafter).
  3. Oxford online dictionary.
  4. Ibid.
  5. John McCarthy, “What is Artificial Intelligence” (Nov. 12, 2007), pp. 2-3.
  6. That said, an important distinction must be made between intelligence and consciousness. Among other things, intelligence can occur without consciousness whereas consciousness cannot be conceived without some form of intelligence.
  7. Jesse Beatson, Jill R. Presser, and Gerald Chan, Litigating Artificial Intelligence, Toronto, Emond Montgomery Publications Ltd., 2021, p. 5.
  8. See Jill R. Presser (Do Androids Dream of the Electric Chair? Questions About Criminal Liability for AI Agents), Chapter 10, p. 362, in Beatson et al., supra note 7.
  9. See Ying Hu, “Robot Criminals” (2019) 52 U Mich JL, p. 490; she so labels AI entities that answer the following prerequisites: 1) they are equipped with a moral algorithm that allows them to make moral decisions, 2) they can communicate those decisions to human, and 3) they can operate without the supervision of a human;
  10. Ibid, p. 364, note 10. For Presser (citing Kate Darling), “autonomy” is the ability “to perform tasks without continuous human input or control”. More specifically, it allows robots “to make (limited) decisions about what behaviour to execute based on perceptions and internal states, rather than following a pre-determined action sequence based on pre-programmed commands”.
  11. Ying Hu cites author Ryan Calo in her paper “Robot Criminals” (2019) 52 U Mich JL, pp. 494-495, note 25.
  12. See Presser, supra, note 6, p. 363.
  13. Ibid, pp. 363-364. 
  14. Kevin Moustapha Adou, Robotum criminalis: analyse prospective de l’application des concepts de droit pénal aux robots intelligents, Montreal, Les Éditions JFD inc., 2020, pp. 124-125 (no. 197). 
  17. Canadian Charter of Rights and Freedoms; 
  18. Ibid, Sect. 7.
  19. Ibid, Sect. 9.
  20. Ibid, Sect. 11d).
  21. Examples of rights reserved to individuals under the Canadian Charter: freedom of conscience and religion, freedom of thought, freedom of expression, etc., Ibid.
  22. Examples of rights applicable to legal persons under the Canadian Charter: protection against unreasonable search and seizure, presumption of innocence, right to a fair trial, etc.; Ibid
  23. This liability can be shared or transferred to the developer, the programmer and/or the user.
  24. The situation will be further complicated if such entities become “self-owned”, a concept that is difficult to imagine for the time being but that must be considered, given the implications.
  25. Ryan Abbott explains the difficulty this entails: “Sometimes, however, it may be difficult to reduce AI crime to an individual due to AI autonomy, complexity, or lack of explainability. A large number of individuals may contribute to development of an AI over a long period of time.”; See Ryan Abbott, “The Reasonable Robot: Artificial Intelligence and the Law”, Doctoral Thesis, University of Surrey, Nov. 17, 2020, p. 57; 
  26. Abbott, ibid, p 63.
  27. Presser, supra, note 7, p. 370.
  28. This raises an interesting question about life. What is life? The Oxford dictionary defines it as “the ability to breathe, grow, produce young, etc. that people, animals and plants have before they die and that objects do not have”. Could AI technology eventually bring us to a new concept, that is, a recognizable form of “artificial life”?
    Comparatively, the Cambridge definition is as follows: “The period between birth and death, or the experience or state of being alive”; 
    What is artificial life? On this fascinating topic, please see Wendy Aguilar et al., “The past, present, and future of artificial life”, Review Article, Frontiers, Oct. 10, 2014: “As ALife progresses and its applications permeate into society, how will society be transformed as living artifacts are used? Will we still distinguish artificial from biological life?”;
  30. Presser, supra, note 7, pp. 368-369. To support her argument, Presser quotes the Supreme Court of Canada in note 29: “the act must be the voluntary act of the accused for the actus reus to exist”.
  31. This is the approach suggested by Gabriel Hallevy in his book, When Robots Kill: Artificial Intelligence Under Criminal Law, Boston, Northeastern University Press, 2013, pp. 34-37 (cited by Presser, supra, note 7, p. 372-374); however, unless a special regime is designed for robots, the absence of voluntariness would preclude any conviction under the Criminal Code
  32. “From the point of view of experience, a subject is conscious when she feels visual experiences, bodily sensations, mental images, emotions (Chalmers, 1995).” See Antonio Chella et al., “Editorial: Consciousness in Humanoid Robots”, Frontiers, March 22, 2019; 
  33. On that topic, see Musser, supra, note 1: “Though these systems may be powerful, they are opaque. They work by relating input to output, like a test where you match items in column ‘A’ with items in column ‘B’. The AI systems basically memorize these associations. There’s no deeper logic behind the answers they give. And that’s a problem.” 
  34. Hu, supra, note 9, p. 492.
  35. Definition of corporation by Oxford dictionary: “An incorporated entity with the capacity to act as a legal person, having an identity in law distinct from those of the individual or collection of individuals of which it is comprised at any point in time”;
  36. Some rights may be incompatible with the very nature of robots such as the right to “life”. 
  37. Adou, supra, note 14, pp. 171-179.
  38. The expression “AI singularity” is described as “an event where the AIs in our lives either become self- aware, or reach an ability for continuous improvement so powerful that it will evolve beyond our control”; see Nisha Talagala, “Don't Worry About The AI Singularity: The Tipping Point Is Already Here”, Forbes, Jun 21, 2021; point-is-already-here/?sh=48da106a1cd4
  39. Presser, supra, note 7, pp. 387-388 (see notes 127 to 133).
  40. Section 718 of the Criminal Code provides that “The fundamental purpose of sentencing is to protect society and to contribute, along with crime prevention initiatives, to respect for the law and the maintenance of a just, peaceful and safe society by imposing just sanctions (...)”.
  41. Adou, supra, note 14, pp. 180-185.
  42. Ibid. 
  43. With a possible exception for the equivalent of the death penalty, even though this option is excluded for humans in Canada.
  44. Adou, supra, note 14, p. 123.
  45. This includes, among others, killer robots and similar technologies.
  46. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics;
  47. See the “Draft Text of the Recommendation on the Ethics of Artificial Intelligence”, Unesco, June 3, 2021, p. 7;
  48. The Digital Charter Act (Bill C-27), tabled in the House of Commons, November 4, 2022;
  49. Ibid.
  50. Supra, note 45.
  51. Ibid.
  52. That is, codes, algorithms, and/or the equivalent, and/or any type of conditioning that may impact AI processes.
  53. As Ying Hu observes: "the key question is not whether a treatment is considered unpleasant by the robot, but whether it is considered unpleasant for the robot by general members of our community"; supra, note 9, p. 529.
  54. Further reflection will be required to determine whether a consequentialist or retributivist approach (or a mix thereof) is more appropriate in that regard.
  55. Hu, supra, note 9, pp. 500-502.
  56. Ibid. 
  57. Though there is no consensus on the concept of consciousness or self-awareness, the unprecedented interest and media attention generated by AI in that respect make the prospect more plausible than ever before.
  58. See Bernard Marr, "Is Artificial Intelligence (AI) a Threat to Humans?", Forbes, March 2, 2020, where he opines that the current uses of AI (surveillance, social manipulation, AI-enabled terrorism, deepfakes, etc.) are the threats we should really be concerned about.
    For his part, Elon Musk, among others, considers AI to be the "biggest threat to humanity". See S. Akash, "AI, the Biggest Existential Threat to Humankind says Elon Musk", Analytics Insight, July 14, 2021;
    In Emily Chung's article ("AI could destroy humans, Stephen Hawking fears: Should you worry?", CBC, Jan. 15, 2015), Canadian science-fiction author Robert J. Sawyer provides a more nuanced forecast, noting that "All the things that made us basically nasty, rapacious, competitive as a species are not necessarily hard-coded into whatever passes for the DNA of artificial intelligence".
  59. According to Ying Hu, now is the time to consider robot criminal liability, given that the technology might arrive sooner than expected and in order to provide scientists with guidance to design robots accordingly; supra, note 9, pp. 492-493.


Bibliography

  1. Abbott, Ryan, "The Reasonable Robot: Artificial Intelligence and the Law", Thesis, University of Surrey, Nov. 17, 2020;
  2. Adou, Kevin Moustapha, Robotum criminalis: analyse prospective de l’application des concepts de droit pénal aux robots intelligents, Montreal, Les Éditions JFD inc., 2020;
  3. Arsenault, Maj. J. M., "La légalité et l'éthique des robots intelligents – L'importance de l'humain dans le processus décisionnel", Master thesis in Defense Studies, Ottawa, HQ of Canadian Armed Forces, 2017;
  4. Barfield, Woodrow and Ugo Pagallo, Research Handbook on the Law of Artificial Intelligence, Massachusetts, Edward Elgar Publishing Inc., 2018;
  5. Beatson, Jesse, and Jill R. Presser, Gerald Chan, Litigating Artificial Intelligence, Toronto, Emond Montgomery Publications Ltd., 2021;
  6. Bensoussan, Alain and Jeremy Bensoussan (eds.), Comparative Handbook: Robotic Technologies Law: A Lexing Network Study, Bruxelles, Larcier, 2016;
  7. Benyekhlef, Karim, AI and Law: A critical Overview, Montréal, Thémis, 2020;
  8. Burke, Todd J., and Scarlette Trazo, "Questions juridiques émergentes dans un monde régi par l'IA", Articles, Gowling WLG, July 2019;
  9. Charney, R., "Can Androids Plead Automatism – A Review of When Robots Kill: Artificial Intelligence Under the Criminal Law by Gabriel Hallevy", (2015) 73 U. Toronto Fac. L. Rev. 69, 69-72;
  10. Claypoole, Ted, The Law of Artificial Intelligence and Smart Machines: Understanding A.I. and the Legal Impact, Chicago, American Bar Association, 2019;
  11. Ellyson, Laura, "La responsabilité criminelle et l'intelligence artificielle : quelques pistes de réflexion", Les cahiers de propriété intellectuelle, Vol. 30, no. 3, pp. 879-893;
  12. Hallevy, G., "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", (2010) 4 Akron Intellectual Property Journal, pp. 171-199;
  13. Hu, Ying, "Robot Criminals", (2019) 52 U. Mich. J.L. Reform 487;
  14. Kelley, R., E. Shaerer, M. Gomez, and M. Nicolescu, "Liability in Robotics: An International Perspective on Robots as Animals", published online April 2, 2012, pp. 1861-1871; DOI: 10.1163/016918610X527194
  15. Lima, D., "Could AI Agents Be Held Criminally Liable: Artificial Intelligence and the Challenges for Criminal Law", (2018) 69 S.C. L. Rev. 677;
  16. Marr, Bernard, "Is Artificial Intelligence (AI) a Threat to Humans?", Forbes, March 2, 2020;
  17. McCarthy, John, "What is Artificial Intelligence?", Nov. 12, 2007, pp. 2-3;