Global concern about AI’s misuse
RECENTLY, Geoffrey Hinton, widely known as the "Godfather of AI" and a pioneer of Artificial Intelligence (AI), resigned from Google over his grave concern about the misuse of AI technology, as he believes there is no easy fix for its growing risks. Hinton says that Google has acted "very responsibly" in its deployment of AI. But that is only partly true. Yes, the company did shut down its facial recognition business on concerns of misuse, and it did keep its powerful language model LaMDA under wraps for two years in order to make it safer and less biased. Indeed, if not delimited, the digitalization of intelligence may become more dangerous than climate change.
Digitalization of intelligence: As AI grows more sophisticated and widespread, the voices warning against its potential dangers grow louder. "The development of artificial intelligence could spell the end of the human race," according to Stephen Hawking, the renowned theoretical physicist. AI has captured the attention of the world over the last 12 months. From AI chatbots to AI-generated art and inventions, AI has the potential to radically transform our economy, our society and humanity. Recently, the US-based Future of Life Institute published an open letter signed by over 1,000 AI experts warning that AI could pose 'profound risks to society and humanity'. The letter called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
Arguably, GPT-4's improved reasoning and language comprehension abilities, as well as its long-form text generation capability, can lead to an increase in sophisticated security threats. Cybercriminals can use the generative AI chatbot ChatGPT to generate malicious code, such as data-stealing malware. AI applications that are in physical contact with humans or integrated into the human body could pose safety risks, as they may be poorly designed, misused or hacked. Poorly regulated use of AI in weapons could lead to a loss of human control over dangerous weapons.
Most importantly, imbalances in access to information could be exploited. For example, based on a person's online behaviour or other data, and without their knowledge, an online vendor can use AI to predict what someone is willing to pay, or a political campaign can tailor its message accordingly. Another transparency issue is that it can sometimes be unclear to people whether they are interacting with AI or with a person. If not done properly, AI could lead to decisions influenced by data on ethnicity, sex or age when hiring or firing, offering loans, or even in criminal proceedings.
Moreover, as AI technology becomes more integrated into the general economy and civilian sphere, existing legal and normative frameworks may need to be adjusted to cover novel forms of attack such as data poisoning and adversarial examples. Up to this point, data theft has been the main concern in cyberspace. Going forward, hostile actors will likely try to gain access to databases not only to obtain their information, but also to alter and manipulate them.
Data use risks: AI is truly a revolutionary feat of computer science, set to become a core component of all modern software over the coming years and decades. This presents a threat but also an opportunity. AI will be deployed to augment both defensive and offensive cyber operations. Additionally, new means of cyber-attack will be invented to take advantage of the particular weaknesses of AI technology.
AI has already been blamed for creating online echo chambers based on a person's previous online behaviour, displaying only content a person would like instead of creating an environment for pluralistic, equally accessible and inclusive public debate. It can even be used to create extremely realistic fake video, audio and images, known as deepfakes, which can present financial risks, harm reputations and undermine decision making. All of this could lead to fragmentation and polarisation in the public sphere and could be used to manipulate elections.
National security: In addition, advances in AI will affect national security by driving change in three areas: military superiority via autonomous weaponisation programmes, information superiority, and economic superiority. For military superiority, progress in AI will both enable new capabilities and make existing capabilities affordable to a broader range of actors. There is also growing indication that the global powers, the US, China and Russia, may use combat robots in future wars. Four groups of tasks can be distinguished in the development of AI in the military: information, tactical, strategic and economic. The widespread introduction of genuinely autonomous robots into the ground forces of various armies of the world can, according to some predictions, be expected between 2025 and the 2030s, when autonomous humanoid robots will become advanced enough and relatively inexpensive for mass use in combat operations.
Nonetheless, given the growing risks of AI vis-à-vis humans, prudent governance and legislation at the global level regarding its use is fundamentally essential. The legal definition of what constitutes a cyber-attack may need to be amended to cover these novel threats (Brundage et al. 2018, 57). AI and the UNESCO role: Since 2021, the UN Inter-Agency Working Group on Artificial Intelligence (IAWG-AI) has successfully galvanized expertise from across the UN system as well as external stakeholder groups to advance the responsible development and use of AI in the UN, underpinned by ethics and human rights, while driving forward the 2030 Agenda for Sustainable Development. As part of the IAWG-AI, UNESCO and OICT have led the development of the Principles for the Ethical Use of Artificial Intelligence in the United Nations System.
In 2022, the AI for Good platform, organized by ITU in partnership with 40 UN sister agencies and co-convened with Switzerland, reached over 260,000 online views, including an 81,000-strong multi-stakeholder community spanning more than 180 countries, and it has consistently attracted broad-based international media coverage, making it the leading action-oriented, global and inclusive United Nations platform on AI. Recently, AI for Good launched the Neural Network, an AI-powered community networking and content platform designed to help users build connections with innovators and experts. The report was released at the AI for Good meeting on 16 March 2023 at the WSIS Forum 2023.
—The writer, an independent IR researcher and international law analyst based in Pakistan, is a member of the European Consortium for Political Research Standing Group on IR, Critical Peace & Conflict Studies, as well as a member of the Washington Foreign Law Society and the European Society of International Law. He deals with strategic and nuclear issues.
Email: [email protected]