Thorns and Roses of AI: A Call for Ethical Imperatives


Growing up, I was taught to tend flowers. I grudgingly watered about twenty-five flowerpots almost daily and found time to study how they blossomed, but what I dreaded most was picking the red roses, because doing so occasionally spilled drops of my own blood when the thorns around them pricked me. Technological advancement is like the blossoming of red roses. It can be beautiful when it makes life better for humans, or more entertaining when it lets the Beatles release a song featuring John Lennon’s voice recovered with the help of AI. Yet it can also be like the thorns around the rose when it replaces working humans with care bots or robocops, costing people their jobs, or when it fails in life-critical, mission-critical situations. Yes, it can be like the thorns when it ushers in harmful consequences for humanity.

Governments around the world have recognized the importance of AI-related technologies and have been strategizing to become global AI leaders. Canada led the way in 2017 with a 125 million Canadian dollar budget for AI research and talent investments (Dutton, 2018). Also in 2017, China announced plans to become a global leader in AI theories, technologies, and applications (Dutton, 2018). The United States formulated an AI initiative and has, over the years, increased its investments in AI research and development, with billions of dollars spent on unclassified AI-related R&D projects from 2015 to 2017 and, ultimately, former President Trump signing the five-principled AI Executive Order 13859 in 2019 (Dutton, 2018). The fourth principle of Executive Order 13859 is heartwarming because it addresses the safeguarding of “civil liberties, privacy, and American values.” While acknowledging AI’s great transformative potential in schools and hospitals, the United Kingdom is taking the lead in ensuring that AI is safe for humanity. At the UK’s AI Safety Summit at Bletchley Park in November 2023, Prime Minister Sunak showed that his government has put its money where its mouth is by investing up to 100 million British pounds in an AI safety task force (Gerken and Rahman-Jones, 2023).

In his book The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google, Scott Galloway lists AI as one of eight critical factors, alongside visionary capital, that the world’s top media technology corporations leverage to achieve global market reach and targeted market growth. In this era of cybertechnological convergence, it is not surprising that we must critically review the relationship between algorithm, man, and machine. The world has come a long way, shifting from the agricultural age to the industrial age and now to the advanced information age. There is a lot for us all to examine, and rightly so, for the benefit of humanity: we need to strike an ethical balance between technology and humanity, and to instill the consciousness that technology is for humanity, as against the growing hype that humanity is for technology. The continuous quest to transform has seen the world witness the introduction of autonomous machines, care bots, robocops, and more. And as we depend more on technology, we need to consider what will happen to human-to-human relationships. Some believe that over-dependency on AI-related technologies will erode human cognitive skills. AI undoubtedly lightens the physical burden of tasks, supports multitasking, and provides ready access to information. Regardless, we must plan to mitigate its severe impacts, such as job losses and fast-growing skills gaps.

Technology did not just happen, and it is important to stress that concerns about AI date back to the 1950s when one considers the developmental phases of cybertechnology (Tavani, 2016). In other words, there was ample time for technology to develop, emerge, and converge, and to reach the stage of Ambient Intelligence (AmI), where people communicate and interact with devices and machines that function intelligently. Indeed, there was ample time for government, academia, and industry to work out ethical development guidelines that would always serve the interests of humanity. Though their interests vary, this tripartite stakeholder group (government, academia, and industry) shares a common denominator: the people. It is no accident that the preamble to the United States Constitution begins with “We the People.” There would be no booming advertising revenues for the top media tech companies without “the people.” There would be no government without “the people” and no research for academia without “the people.” And as Sheikh Maktoum of Dubai likes to say, “It’s all about the people.” It should therefore be about humanity before financial profit, humanity before industry, and humanity before government. It should be government for humanity, academia for humanity, and industry for humanity. The human being is not a machine. Man makes the machine, and man then applies intelligence to the machine, so it makes sense to place the onus of moral agency for AI technology on man. Recent calls for enhanced cooperation with, and transparency from, AI firms are commendable. Indeed, the time is now for the world to address the complexities of the dynamic and increasing autonomy of AI entities and systems. One way to do so may be to establish a centralized AI governing body that works within a human-centric ethical AI development framework to encourage nations to develop sound laws, regulations, and norms.

____________________________________________________________________________________

References

Dutton, T. (2018, June 28). An overview of national AI strategies. Medium. Retrieved from https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd

Executive Order 13859 (2019, February 11). Maintaining American Leadership in Artificial Intelligence. Presidential Documents. Federal Register. Vol. 84, No. 31. Retrieved from https://www.govinfo.gov/content/pkg/FR-2019-02-14/pdf/2019-02544.pdf

Galloway, S. (2017). The T Algorithm. In The four: The hidden DNA of Amazon, Apple, Facebook, and Google (1st ed., p. 176). London, UK: Random House.

Gerken, T., & Rahman-Jones, I. (2023, November 2). AI firms cannot ‘mark their own homework.’ BBC. Retrieved from https://www.bbc.com/news/technology-67285315

Tavani, H. T. (2016). Ethics and technology: Controversies, questions, and strategies for ethical computing (5th ed.) [VitalSource Bookshelf version]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781119186571/cfi/6/38!/4/2/2/2/6@0:0