The phrase “Have you lost faith in humanity?” resonates with doubt and questioning. As the world witnesses escalating events and tragedies, and as grim realities come to light, our trust in humanity gradually fades.
In Gaza, more than 2.3 million people face the threat of annihilation and famine. With some governments suspending their support and funding for the United Nations Relief and Works Agency for Palestine Refugees (UNRWA), a harsh reality emerges: humanity has truly failed humans, prompting a shift in focus toward a non-human entity known today as Artificial Intelligence.
Amidst these tragic events, Artificial Intelligence emerges as a controversial topic, stirring a mix of fear and hope. There is a pressing need to understand the nature and workings of AI; only through this understanding can we determine whether it will someday take the reins and lead humanity toward a brighter future, or hasten our demise.
The integration of Artificial Intelligence into military operations has played a pivotal role in recent conflicts, contributing to a significant rise in civilian casualties. The militarization of AI has serious implications for global security, including the development and deployment of lethal weapons systems that can operate without human intervention, entrenching the technological military dominance of advanced countries over developing nations.
AI has made traditional destructive weapons more intelligent, with applications in tasks such as analyzing drone footage, gathering intelligence, identifying targets, guiding missiles, and running advanced surveillance systems. Israel, for instance, used an AI system called “Habsora” (“The Gospel”) to select and expand its targets in the war on Gaza, accelerating the targeting process: the system extracts vast amounts of information from sources such as communications data, drone footage, and surveillance feeds, analyzes it, and produces target recommendations.
This shift raises deep questions about the impact of technological advancement on the essence of humanity and its regard for life. The use of AI in warfare is a complex issue demanding careful examination of its human, legal, ethical, and security dimensions. Does technology distort our humanity, or is the distortion inherent within us, with technology merely reflecting it?
Israel used an AI system called “Habsora” (“The Gospel”) to accelerate targeting in the war on Gaza (Getty)
Building an Ethical Machine
Paola Ricaurte, a faculty associate at Harvard University’s Berkman Klein Center for Internet & Society, argues that dominant AI has become a force capable of committing violence through three epistemic processes: datafication through extraction and dispossession; algorithmic mediation and governance; and automation, which produces violence and inequality while displacing responsibility.
Ricaurte finds that these mechanisms give rise to global classification systems that reinforce cognitive, economic, social, cultural, and environmental inequalities among peoples worldwide. Yet while these issues complicate humanity’s embrace of AI, the technology also seems to offer an opportunity for progress and improvement across all aspects of human life, according to computer scientist and inventor Ray Kurzweil, known for his work in AI and his technological predictions.
Kurzweil, one of the optimists about the future of Artificial Intelligence, believes this technology will be key to addressing the major global challenges threatening humanity. He sees integration with AI as opening unlimited possibilities, allowing us to transcend the biological limitations that prevent any significant enhancement of our capabilities.
Kurzweil believes that through continuous improvements in AI, humanity will achieve unprecedented feats by leveraging the exceptional cognitive capacities AI can provide. Yet while his predictions may inspire hope of salvation, the old equation persists: greater human capacity also means greater capacity for harm and destruction. Ethical values remain the most effective deterrent against destructive conflict, which makes cultivating awareness of them essential.
Is There a Conscious Machine?
Artificial Intelligence has advanced significantly in fields such as industrial development, natural language processing, disease diagnosis and treatment, and robotics. Yet these developments remain within the realm of narrow AI, and the concept of consciousness remains a complex and intriguing subject.
Philosopher John Searle devised his famous thought experiment, the “Chinese Room,” to challenge the claim that a machine running a program through specific instructions could possess a “mind” or “consciousness” like a human’s. The experiment aims to refute arguments for the possibility of strong AI: machine intelligence with genuine understanding and consciousness.
Like the person inside the room who knows no Chinese but can respond in Chinese by following written rules, a computer running a chatbot program in Chinese does not understand the conversation either. It derives its answers from rules and software that neither understand Chinese nor confer consciousness or a mind on the computer.
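Searle’s point can be made concrete with a toy program. The sketch below is purely illustrative: its two-entry “rule book” is invented here, not taken from any real chatbot. It answers Chinese questions by matching symbols against stored rules, so fluent output emerges with no understanding anywhere in the system.

```python
# A toy Chinese Room: the rule book is a hypothetical two-entry lookup
# table, invented purely for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thank you."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    # Match the incoming symbols against the rules, exactly as the person
    # in the room would: by shape, never by meaning.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # Fluent Chinese out; zero comprehension inside.
```

However large the rule book grows, the mechanism stays the same: symbol manipulation, which is precisely what Searle argues can never amount to understanding.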
In the absence of consciousness, current AI has no capacity either to lead humanity or to eliminate it, and developing AI to its full potential remains a challenge.
AI expert Yann LeCun concurs, noting that current AI is not as intelligent as a house pet, and that existing systems are far from possessing the aspects of consciousness that would make them genuinely intelligent. Machines have no awareness of the nature of violence and no reasons of their own to engage in it; humans harness them to serve contested interests in ways the machines themselves cannot recognize as violence toward others. If we fear AI waging war on humans, should we not first stop practicing this violence ourselves?
Even if current AI were capable of consciousness, there is a more critical factor it must possess to surpass and control humans: motivation.
Machine Motivation
Motivation derives from the Latin “movere,” to move, and signifies readiness for action: a physiological process that prepares the organism for mental work and the fulfillment of needs and desires, driving living beings toward a goal. Depending on its intensity, motivation may sustain, halt, or heighten activity.
In his book “Why Nations Fight: Past and Future Motives for War,” Richard Ned Lebow identifies four essential motives for war between states: fear of threats from others; economic, political, and strategic interests; the desire to maintain and enhance standing; and revenge for perceived injustice. These motives continue to shape international politics today.
AI can influence the political contexts that ultimately lead to conflict or war, but it possesses no motivations of its own. It is a technological tool that relies on programming and data to execute tasks and make decisions. Advanced AI systems can filter options in military contexts, but the final decision remains subject to human will.
The American writer Isaac Asimov formulated ethical principles for Artificial Intelligence, encapsulating them in three fundamental laws (a toy reading of their strict ordering appears in the sketch after the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
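One way to read the laws is as a strict priority ordering, which the following sketch illustrates. This is a hedged toy model, not anything from Asimov’s texts or a workable safety mechanism: the Action fields are invented for illustration, and the harder “through inaction” clause of the First Law is deliberately omitted.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would carrying it out injure a human?
    ordered_by_human: bool  # was it commanded by a human?
    endangers_self: bool    # would it destroy the robot?

def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm a human.
    if action.harms_human:
        return False
    # Second Law: obey human orders that survived the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destruction.
    return not action.endangers_self

# A human order that would cause harm is refused despite the Second Law.
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_self=False)))  # False
# A safe human order is obeyed even at the robot's own risk.
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))  # True
```

Much of Asimov’s fiction turns on how such a neat hierarchy breaks down in practice, which is exactly why the laws remain a discussion aid rather than an engineering specification.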
Though Asimov’s laws for robots are embedded in his stories and are not scientific laws, they have been central to discussions of technology, debated by prominent figures including Ray Kurzweil, iRobot founder Rodney Brooks, and robotics expert Daniel Wilson. Today we are beginning to witness the actual harms of Artificial Intelligence, particularly in military operations. As these challenges grow, a strict legal and ethical framework to govern AI applications becomes necessary, one that ensures AI is used in a manner that guarantees justice and respects individuals’ fundamental rights.
No doubt, machines are becoming ever more like us, not just in appearance but in the way they think. Herein lies the danger: it becomes difficult to distinguish them from ourselves, from our thoughts and desires.
Perhaps humans fear machines not because machines show any apparent desire to harm humanity, but because of our attempts to make them more like us. If anything, this suggests that humans fear beings that resemble them, beings whose complexity may set them on an unpredictable path, toward either eternity or annihilation.