As artificial intelligence grows more sophisticated and widespread, the voices warning of its potential dangers grow louder.
Asia Thinkers examines the risks to society of the unregulated growth of intelligent AI. What action should Asian governments take, as AI takes on a life of its own, to ensure it remains accountable and respects privacy? Singapore, a leader in AI development in Southeast Asia, serves as a case study.
According to the world-famous physicist Stephen Hawking, the development of artificial intelligence could spell the end of the human race. Elon Musk, who heads Tesla and SpaceX, said at the 2018 SXSW tech conference, “AI scares me, it’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.” Researchers agree that we are in the very early stages of understanding AI’s capabilities. Whether it is the increasing automation of jobs, gender- and racially-biased algorithms, or autonomous weapons that operate without human oversight, public unease about the unknown risks of its development is growing.
There has been a rapid increase in the adoption and development of artificial intelligence across the globe. The meteoric rise of AI-powered tools such as ChatGPT, able to generate fluent text and respond by voice, has made AI more powerful and more accessible than ever. AI systems built on machine learning are constantly evolving, but to appreciate how such a system “thinks”, it is important to understand that it works on mathematical representations of data and teaches itself by predicting the next word; recent models train on data scraped from the internet and are becoming better at understanding context. According to a developer based in Sydney, what has emerged so far is a text-based conversational partner, often indistinguishable from a human; this science should not be confused with artificial human intelligence (AHI), which is a separate branch of research. An assistant professor of linguistics and data science at New York University says, “AI learns in a fundamentally different way from people, making it very improbable they will ever think the same way as humans.” One of the most publicly discussed concerns about AI’s development is the removal of a large number of jobs from the market, particularly in areas where AI can offer better efficiency and productivity.
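To make the word-prediction idea concrete, here is a minimal sketch in Python, assuming a tiny invented corpus: a bigram counter that guesses the next word purely from observed frequency. Real language models replace these raw counts with learned mathematical representations built from billions of words, but the core loop of “predict the next word” is the same.

```python
# Toy illustration of next-word prediction (not how production models
# work at scale). The corpus and words are invented for the demo.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' (seen twice after 'the')
print(predict_next("cat"))  # -> 'sat' ('sat' and 'ate' tie; first seen wins)
```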
Recent research from OpenAI (the developer of ChatGPT) found that 80 per cent of the US workforce could see 10 per cent of their tasks affected by the technology, and that those affected would be largely white-collar workers in areas such as accountancy, law, medicine and IT. A spokesman for OpenAI said that “it is unlikely entire jobs will become automated as AI will focus on time-consuming and repetitive tasks like drafting documents and answering emails.” On the other hand, AI is estimated to create 97 million new jobs by 2025. Many employees won’t have the skills needed for these technical roles and could be left behind if companies don’t upskill their workers. Historically, new technologies haven’t meant fewer jobs over the long run, but there is an active debate over whether tax-paying human jobs should be protected in the short term to minimise disruption to individual livelihoods. One proposal, supported by Bill Gates and others, is to tax robots and AI in the same way we tax human labour; in theory, this would level the playing field.
The promise of the ultimate virtual companion and guide, one that can assist us with writing, booking, querying and planning, has many dismissing the risks, but in doing so we ignore new ethical dilemmas, including the rise of propaganda and fake news. Generative AI already has a well-documented fault of inventing information, ranging from events that never happened to legal cases that never existed. Social manipulation is identified as a major misuse of artificial intelligence. This became a reality with Russian interference in the 2016 US elections and Ferdinand Marcos Jr building a TikTok presence to capture the votes of younger Filipinos during the 2022 Philippine presidential election. As AI technologies become increasingly sophisticated, they raise ethical challenges around the collection of large amounts of personal data, data privacy and security. Hackers globally have demonstrated they can harness the power of AI to develop more advanced cyber attacks. A cybersecurity consultant based in Singapore said, “With the internet increasingly becoming everyone’s main source of information, this is our worst nightmare, where it can be nearly impossible to distinguish between credible and fake news.”
The war in Ukraine has showcased how far lethal autonomous weapon systems have developed. Major powers are competing to build the most advanced weapons, with AI-enhanced systems being designed to make battlefield decisions with zero human input. The US is currently developing a new generation of fighter aircraft controlled entirely by AI, to be deployed by 2030. The rise of AI-driven autonomous weaponry also raises concerns about rogue states or non-state actors using the technology, especially given the potential loss of human control in critical decision-making processes.
Growth in partnership with humans
AI is still in the early stages of implementation, but its growth is accelerating. A good example is the recent release of the first fictional recording artist developed with AI and released by Warner Records. AI still can’t do many things humans can, so it’s unlikely we will be left behind any time soon, but experts do have ideas about how to manage its rise.
To mitigate these security risks, governments and organizations need to develop best practices for secure AI development and deployment. As with most technologies, development is hard to stop once underway. One approach is international cooperation to establish global regulations that protect against AI security threats. Nuclear weapons offer a precedent: concerted international effort has limited their proliferation for decades. Countries could establish similar treaties and international agreements for AI technology.
Developers, for their part, have suggested several industry safeguards. One is to ban AI from writing the computer code used to develop AI, preventing a phenomenon known as ‘recursive self-improvement’, in which a system repeatedly improves itself until it outsmarts humans. AI may also carry internal self-regulation, as with ChatGPT, which declines to answer questions it judges harmful to society, such as prompts disparaging the gender or legitimacy of transgender women, helping ensure a system’s goals stay in line with ours. The hope is to prevent what many people fear: a doomsday scenario where robots take over. Because AI tools are relatively new, they have so far operated in a largely regulation-free zone. Worldwide calls for regulation are growing louder as reality sets in. Once unleashed, AI will only develop faster, and regulators are starting to ask tough questions about the technology.
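As a rough illustration of how such internal self-regulation can work at the application layer, here is a minimal Python sketch, assuming an invented blocklist, refusal message and placeholder model call. Real systems such as ChatGPT rely on trained moderation classifiers and alignment fine-tuning, not simple keyword matching.

```python
# Minimal sketch of an application-layer safety guardrail. The blocked
# topics, refusal text and generate_reply() are invented for illustration.

BLOCKED_TOPICS = {"build a weapon", "malware", "self-harm"}  # hypothetical

def generate_reply(prompt: str) -> str:
    """Placeholder for a call to an actual language model."""
    return f"Model response to: {prompt!r}"

def guarded_reply(prompt: str) -> str:
    """Refuse prompts that match a blocked topic; otherwise answer."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return generate_reply(prompt)

print(guarded_reply("How do I build a weapon at home?"))  # refusal
print(guarded_reply("Summarise the AI Act for me."))      # normal answer
```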
In the US, Vice President Kamala Harris said AI companies have an “ethical, moral, and legal responsibility” to ensure that their products are safe. Senator Chuck Schumer of New York, the majority leader, has proposed legislation to regulate AI, which could include a new agency to enforce the rules. Meanwhile, in Europe, lawmakers are nearing a final agreement on the AI Act. The European Parliament has signed off on rules that call for a ban on facial recognition technology in public places, along with bans on predictive policing, emotion recognition, and the indiscriminate scraping of biometric data online. The EU is set to create further rules to constrain generative AI, and parliament wants companies building AI models to be more transparent. China has already drafted rules designed to manage how companies develop generative AI products like ChatGPT.
Asia is one of the fastest-growing AI markets in the world, according to a recent Channel Asia report, and is expected to emerge as a major market for AI-led initiatives. Most countries in the region are drawing up national AI strategies to drive development. So far, however, Asian countries have been slow to match the fast pace of AI with oversight or regulation.
Singapore, which has shown a strong commitment to the development of AI technology and is currently at the forefront of the region, has taken a wait-and-see approach. Deputy Prime Minister Heng Swee Keat stated, “The Singapore Government is determined to lead the way in AI experimentation and adoption and believes AI deployment is for the public good at the same time striving to shield society from the most serious AI risks.” High on Singapore’s agenda are efforts to promote the responsible use of AI. Lee Wan Sie, director for trusted AI and data at Singapore’s Infocomm Media Development Authority, said, “We are currently not looking at regulating AI. We will learn how AI is being used before we decide if more needs to be done from a regulatory front.”
Singapore’s AI governance roadmap
Singapore published its Model AI Governance Framework in 2019; it remains the first of its kind in Asia. In support of AI for the public good, Singapore has open-sourced AI Verify, the world’s first AI governance testing framework and toolkit, which enables users to run technical tests on their AI models. The AI Verify Foundation was launched to set the strategic direction and development roadmap of AI Verify; current members include IBM, Google, Microsoft, Red Hat, Salesforce and Aicadium. This global open-source community was set up to discuss AI standards and best practices, and to collaborate on governing AI. Brad Smith, president and vice chair at Microsoft, commented, “Microsoft applauds the Singapore government’s leadership in this area. By creating practical resources like the AI governance testing framework and toolkit, Singapore is helping organizations build robust governance and testing processes.” Singapore has been described as positioning itself as a regional steward of responsible and trustworthy AI. Structured frameworks and testing toolkits like these will help guide AI governance policy and promote safe, trustworthy AI for businesses.
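To give a flavour of the kind of technical test such toolkits automate, here is a minimal Python sketch of one common fairness check, a demographic parity gap, run on invented loan-approval predictions. It is an illustrative example of the general technique, not the actual AI Verify API, and the data and threshold are hypothetical.

```python
# Illustrative fairness check of the kind AI governance toolkits automate.
# The predictions and threshold are invented; this is not the AI Verify API.

def positive_rate(predictions: list) -> float:
    """Share of cases where the model predicted the favourable outcome (1)."""
    return sum(predictions) / len(predictions)

# Hypothetical loan-approval predictions, split by demographic group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # approval rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity gap: {gap:.3f}")

# A toolkit would flag the model if the gap exceeds a chosen threshold.
THRESHOLD = 0.1  # hypothetical policy choice
print("PASS" if gap <= THRESHOLD else "FAIL")  # -> FAIL (0.250 > 0.1)
```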
The rise of AI has the potential to transform the way we live, work and play, with a wide range of potential benefits. At the same time, it is important to keep assessing the impact of those changes and to implement clearly defined guidelines, ensuring AI benefits society as a whole.