Why Superficial Laws for Artificial Intelligence?

“The saddest aspect of life … is that science gathers knowledge faster than society gathers wisdom…” — Isaac Asimov

Artificial intelligence (AI) algorithms and robots serve human-like roles in our society, but they are not yet governed by human-like laws. AI increasingly provides companionship, creates art, and performs military tasks. These roles raise ethical issues that contemporary law is not fully equipped to address, and that gap is itself an ethical problem. The proliferation of AI requires us to develop new legal frameworks governing its role in personal relationships, art, and warfare. For example, companion robots can foster unhealthy attachments and compromise user privacy. AI algorithms that produce digital content raise new copyright and intellectual property questions. Weaponized AI robots obscure the assignment of responsibility in warfare. While these AI-specific ethical problems are new to law, science fiction authors have long explored them. Technology leads the law, and leaving AI unregulated is itself unethical.

Autonomous companion robots are widely available and enter users’ homes in many forms. As we welcome these devices into our personal lives, evidence suggests that people personify them, which can encourage unhealthy, unidirectional attachments (Lin et al., 2011). Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?” explores the dangers of such human-android relationships. The protagonist, Deckard, falls in love with Rachael, an android he is supposed to be “retiring” (Schrader, 2019). In this fictional world, androids are programmed to absorb information from humans and appear to care for them. In reality, tech companies routinely collect user data and target advertisements through their products, and we can imagine these same corporations exploiting the vulnerable human-robot relationship to manipulate robot owners for profit. This dynamic is not unambiguously governed by law. To address the manipulation of users by companion robots, legal systems might mandate disclosure statements that remind users the robot is a machine devoid of emotions (Lin et al., 2011). Such statements may reduce the extent to which humans form emotional bonds with robots, and with it the potential for those bonds to be abused.

AI produces art in many forms, adding new possibilities to the millennia of traditional art created by humans. AI algorithms operate with minimal direct input and are now being used to create films, poetry, novels, and digital images. These algorithms are authored by many individuals, shifting how the law should conceive of creative ownership. In the movie “Her”, the main character Theodore dictates letters to his AI virtual assistant Samantha, who submits them to a publisher without his knowledge. In this scenario, who should receive credit for these letters? Should it go to the programmer for making Samantha, or to Theodore, who dictated them? AI blurs our conceptions of copyright and intellectual property (Klaris & Bedat, 2017). Policy experts have posed at least two possible solutions to this issue. The first assigns ownership to the individuals who apply AI, as in UK copyright law (Copyright, Designs and Patents Act (CDPA), 2003), which could disenfranchise the human creators whose work makes AI-produced art possible. The second stipulates that output generated by AI should automatically enter the public domain; support for this approach comes from those who believe machine learning is neither creative nor original (Vezina, 2020). Neither solution appears fully adequate, and the essential question of copyright in the context of AI production requires further attention.

Modern warfare commonly employs armed militarized robots and drones, which are increasingly fully autonomous, operating with no direct human operators (Yong, 2014). These machines lack emotions like fear, empathy, and mercy. The short film “Slaughterbots” explores the possible consequences of AI warfare, depicting a mass assassination by autonomous drones that falsely identify their targets and murder innocent civilians. Critics of militarized robots argue that “allowing weapons to decide to kill violates the ethical and legal norms governing the use of force on the battlefield” (Fryer-Biggs, 2019). Autonomous drones also accelerate warfare, leaving less room for strategic deliberation and the negotiation of peace treaties. Like AI art, military robots are the product of a complex supply chain, partitioning responsibility between programmers, manufacturers, and governments (Lin et al., 2011). In this context, assigning culpability for collateral damage or civilian harm becomes challenging and requires new legal developments.

Current law does not adequately address the ethical concerns raised by AI algorithms and autonomous robots. Society requires a reliable legal framework to protect users of companion robots from exploitation and unethical data mining, to adequately assign copyright, and to establish culpability for accelerated warfare and collateral damage during conflicts. Solutions to the ethical and legal dilemmas posed by AI in modern society require careful analysis by policy experts. Popular sci-fi dystopias provide us with foresight, and it is our ethical responsibility to use this time wisely. Regulating the legal loopholes of AI is an insurance policy against the negative consequences of human-robot relationships, contested copyright attribution, and autonomous warfare. As the sophistication of AI technology grows, society must gather the legal wisdom to handle it.

References

Fryer-Biggs, Z. (2019, September 03). Coming Soon to a Battlefield: Robots That Can Kill. From the Atlantic: https://bit.ly/35BUqgv

Klaris, E., & Bedat, A. (2017, November 16). Copyright Laws and Artificial Intelligence. From Law Technology Today: https://bit.ly/3npLH70

Lin, P., Abney, K., & Bekey, G. (2011). Robot ethics: Mapping the issues for a mechanized world. Artificial Intelligence, 175(5–6), 942–949. From Project Muse: https://bit.ly/3f9PzpN

Schrader, B. (2019). Cyborgian Self-Awareness: Trauma and Memory in Blade Runner and Westworld. Theory & Event, 22(4), 820–841. https://muse.jhu.edu/article/736564

UK Gov. (2003, October 31). Copyright, Designs and Patents Act (CDPA). From legislation.gov.uk: https://bit.ly/3nnyxY0

Vezina, B. P. (2020, February 20). Why We’re Advocating for a Cautious Approach to Copyright and Artificial Intelligence. From Creative Commons: https://bit.ly/3pyS2Pd

Yong, E. (2014, February 26). Autonomous drones flock like birds. Nature. From Nature International Weekly Journal of Science: https://go.nature.com/3f1hXKA

This was originally an assignment for Foundations of Digital Media at the Centre for Digital Media submitted by me in September 2020.

Graduate student at the Centre for Digital Media specializing in UX/UI Design and Research.
