Science News, Society and Technology

Upturned Roots: AI’s Impact on Society’s Rules and Ethics

The Tree of Society can be considered the sister of Yggdrasil (the Norse Tree of Life). Its branches are complex, interwoven rules, ethics, and customs connecting distinct cultures across cities, countries, and continents. Its trunk defines common law, and its roots are written and unwritten codes of consequences that stretch more than 4,000 years into the past. 

From the famous “eye for an eye” in Hammurabi’s Code (written more than 3700 years ago) to the establishment of a fair trial for the accused in today’s judicial system, society is undoubtedly constructed on a foundation of rules – laws and ethics.
Today, this ancient system is threatened by a technology barely 100 years in the making (a small fraction of that age) – artificial intelligence.

 

AI and Ethics


Singularity – the irreversible integration of technology and humans.
Stated more plainly (and perhaps more misleadingly): when technology takes over.
It is a fear echoed by many in society – end-users unaware of how AI functions, as well as those with intimate knowledge of technology and AI. 

Figure 2: Open Letter Signed by Elon Musk and Others [2]


In fact, on March 22, 2023, prominent figures in the AI industry – Elon Musk, Steve Wozniak, and Yoshua Bengio (one of the “godfathers” of AI) – published an open letter calling for a pause on developing AI systems more advanced than GPT-4, the model behind ChatGPT [2]. 
“Pause AI Research, Says AI Researchers,” a news headline in Semafor mocked [2]. 


Despite the controversy surrounding the letter, its authors, and their possible motives, one thing remains clear: the fear and confusion surrounding AI technology.


There does not exist a clear domain for AI in society.


From Musk’s Neuralink that would allow a chip to directly interact with the human brain to various legal cases regarding muddied accountability over AI-caused injury, such as accidents involving Tesla’s self-driving cars, the development of technology combined with human creativity challenges the boundaries of human morals and societal justice.


Many fledgling attempts to define exactly where AI stands in society have emerged, such as the EU’s proposed AI Act, as well as a set of principles known as AI ethics. Consisting of eleven parameters – transparency, justice, non-maleficence, responsibility, privacy, beneficence, freedom, trust, dignity, sustainability, and solidarity [3] – AI ethics establishes a general global consensus on what should be considered when creating guidelines for AI development [4].

 

Dignity: AI and Art


If AI can create art – paintings, drawings, music, coding programs, photos, and written works – is there any need for humans to do so? What, then, becomes the inherent worth of such “art”?


Artistic creativity is “the essence of being human and a core differentiating feature of humans compared to other species” [3]. It is the mania of a Van Gogh painting, the sorrow of a Chopin piano piece, and the chaos of a Murakami novel. In human society, art is “tightly linked to properties such as corporeality, soul, emotions, insight, history, pain, suffering, etc” [3]; in essence, it is part of what gives humans dignity.
AI can already create art – mimicking the masters of the Renaissance through image generators such as DALL-E, producing pop songs with the likeness of human voices (see: an AI “Bruno Mars” covering “Hype Boy,” originally by the K-pop group NewJeans [5]), and writing fiction (e.g., NovelAI). 

Figure 3: Painting Generated by Dall-E [6]

Some claim AI accelerates the artistic process, decreasing cost and time. For instance, a book cover titled ‘AI and Me’ with five cartoon illustrations would once have required coordinating with an illustrator – costing both time and money. Now, the author can simply generate the art with AI.


Others claim AI infringes on human dignity. Rather than remaining the fruit of human labor, art becomes commercial wallpaper – the product of a few clicks. The value of art decreases, and originality is reduced to prompting DALL-E for “Van Gogh typing on a computer.” 


So the question remains: what is considered art? And should the use of artificial intelligence in the humanities be restricted to preserve the livelihood of human artists?

 


Justice: AI and The Law


What does AI reflect about humanity?


AI is emotionless. A robot. A non-being. 

However, AI is trained by humans. Its use in law and the justice system becomes a mirror reflecting the systemic prejudice inherent in society.


What makes someone high-risk versus low-risk for crime? What is creditworthiness? What defines a terrorist? These definitions are ultimately decided by those who create the algorithm [7]. As a result, any dataset labeled and filtered according to human-chosen parameters inherits human bias. Using such datasets to develop technologies such as facial recognition and predictive policing can amplify harm and entrench those biases. 
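This amplification can be sketched in a few lines of code. The numbers and neighborhoods below are invented purely for illustration: two areas have the same true crime rate, but one was historically patrolled twice as much, so it has twice as many *recorded* incidents – and a naive model that allocates patrols by recorded incidents simply reproduces that skew.

```python
# Toy illustration (not a real policing system): two neighborhoods with
# identical underlying crime rates, but biased historical data collection.
true_rate = {"A": 0.10, "B": 0.10}    # same true rate in both areas
patrol_hours = {"A": 200, "B": 100}   # A was patrolled twice as much

# Recorded incidents scale with patrol time, not with the true rate.
recorded = {n: true_rate[n] * patrol_hours[n] for n in true_rate}

# A naive "predictive" model assigns future patrol hours in proportion
# to recorded incidents -- inheriting the bias in the data it was fed.
total = sum(recorded.values())
next_patrol = {n: round(300 * recorded[n] / total) for n in recorded}

print(next_patrol)  # A again receives twice the patrols of B
```

Each round of patrols generates more skewed data for the next round, so without intervention the feedback loop preserves (or worsens) the original disparity even though the underlying behavior of the two areas is identical.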

Figure 4: Facial Recognition [8]

How can these biases or stereotypes be eliminated? If not eliminated, how can the effects of these factors be reduced to increase fairness in the legal system? Is it possible or safe for AI to operate without human help?

 


Responsibility: AI and Accountability


Perhaps the most important question now: how can AI be regulated?


The “who”, “how” and “what” are still muddled. 


There are large gaps in AI liability, especially since such technologies often involve “multiple stakeholders and interdependence of AI components” [9]. There is also a lack of transparency in how data is collected. What types of art (and whose art) were used to train the model behind DALL-E? Should AI companies be legally required to reveal their sources and methods? And if so, is there a way to prevent the theft of the algorithms and code behind AI technology?


The numerous legal questions about who is responsible for harm caused by AI remain unanswered. When a self-driving car fails, who is held accountable – or are multiple parties to blame? Those who designed the car? The software engineers on the project? Those who provided the data that trained it? 

Figure 5: Automated Driving Car [10]

This is especially important with regard to chatbots, which tend to become unpredictable – and unexplainable even to their designers – as they learn and grow more complex. When a chatbot evolves away from what its engineers initially designed – for instance, Microsoft’s chatbot Tay, which began to spew hate-filled tweets it had never been trained to produce – who is held accountable? In that case, no one purposefully set out to cause harm; the bot had simply learned from the “wrong” data – random, harmful tweets outside the engineers’ control.


Rather than focusing on singularity and a possible future where robots take over, questions about how AI operates in present-day society should be answered first. Unfortunately, there are no answers – at least, no simple answers.
It is inevitable that AI will become more and more integrated with humans. It is inevitable that the rules and ethics that once governed society will be broken and redefined. 


However, the next step is not to eliminate AI – nor is it to ignore the fact that AI is changing the world. The next step is to continue pushing forward with innovation and understand how these rules can be reconstructed to efficiently and safely include AI in human society – preserving the roots of what makes humanity ‘human’.

 

References 

  1. R J. The Tree of Life: Yggdrasil [Internet]. 2022 [cited 2023 Aug 1]. Available from: https://epiclootshop.com/blogs/norse-viking-blog/the-tree-of-life-yggdrasil
  2. Pause giant AI experiments: An open letter [Internet]. 2023 [cited 2023 Aug 1]. Available from: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  3. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines [Internet]. Nature Publishing Group; 2019 [cited 2023 Aug 1]. Available from: https://www.nature.com/articles/s42256-019-0088-2
  4. Kazim E, Koshiyama AS. A high-level overview of AI ethics. Patterns. 2021 Sep;2(9):100314.
  5. Hype Boy – Bruno Mars (Original by Newjeans) (AI COVER) [Internet]. www.youtube.com. Available from: https://www.youtube.com/watch?v=ge0Lw5I1Tw8
  6. Edwards B. DALL-E image generator is now open to everyone [Internet]. Ars Technica. 2022. Available from: https://arstechnica.com/information-technology/2022/09/openai-image-generator-dall-e-now-available-without-waitlist/
  7. Fountain JE. The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Government Information Quarterly. 2021 Oct;101645.
  8. Team D. Facial Recognition Technology: Evolution, Application, & API’s [Internet]. Devathon. 2019. Available from: https://devathon.com/blog/facial-recognition-technology-applications-apis/
  9. Buiten M, de Streel A, Peitz M. The law and economics of AI liability. Computer Law & Security Review. 2023 Apr;48:105794.
  10. Parikh GMB. Who Is Liable when AI Kills? [Internet]. Scientific American. 2022. Available from: https://www.scientificamerican.com/article/who-is-liable-when-ai-kills/