Artificial Intelligence Could Pose Existential Threat To Humanity: Australian MP
Authored by Victoria Kelly-Clark via The Epoch Times,
The risks around artificial intelligence must be thoroughly investigated as it could pose an existential threat to human life, says one Australian MP.
In a speech in Parliament on Feb. 6, Labor MP Julian Hill said ChatGPT had the potential to revolutionise the world but warned that if AI were to surpass human intelligence, it could cause significant damage.
“It doesn’t take long, if you start thinking, to realise the disruptive and catastrophic risks from untamed AGI are real, plausible, and easy to imagine,” he said.
Hill said that risk analysts working on threats such as asteroids, climate change, supervolcanoes, nuclear devastation, solar flares or high-mortality pandemics are increasingly putting artificial general intelligence (AGI) at the top of their list of worries.
“AGI has the potential to revolutionise our world in ways we can’t yet imagine, but if AGI surpasses human intelligence, it could cause significant harm to humanity if its goals and motivations are not aligned with our own,” he said.
“The risk that increasingly worries people who are far cleverer than me is what they call the ‘unlikelihood’ that humans will be able to control AGI or that a malevolent actor may harness AGI for mass destruction.”
Artificial intelligence is taking the world by storm as technology improves at a breakneck speed.
Hill also noted that militaries around the world were pursuing AGI development as it could transform warfare and render current “defensive capabilities obsolete.”
“An AGI-enabled adversary could conquer Australia or unleash societal-level destruction without being restrained by globally agreed norms,” he said.
AI programs have been banned in schools across New South Wales, Queensland, Tasmania, Victoria and Western Australia.
MP’s Speech Partly Written by ChatGPT
To illustrate his concerns, Hill said he had used ChatGPT to write parts of the speech he was delivering.
The program took just 90 seconds to summarise recent media reports about students in Australia using artificial intelligence to cheat, and Hill said the paragraph it produced was “pretty good.”
ChatGPT wrote, “Recently, there have been media reports of students in Australia using artificial intelligence to cheat in their exams. AI technology, such as smart software that can write essays and generate answers, is becoming more accessible to students, allowing them to complete assignments and tests without actually understanding the material. This is causing concern, understandable concern, for teachers, who are worried about the impact on the integrity of the education system.”
ChatGPT also wrote that students were effectively bypassing their education and gaining an unfair advantage by using AI.
“This can lead to a lack of critical thinking skills and a decrease in the overall quality of education. Moreover, teachers may not be able to detect if a student has used AI to complete an assignment, making it difficult to identify and address cheating. The use of AI to cheat also raises ethical questions about the responsibility of students to learn and understand the material they’re being tested on,” it wrote.
Hill warned that the quality of the response showed humanity needed to stay a step ahead of the technology.
“If humans manage to control AGI before an intelligence explosion, it could transform science, economies, our environment and society with advances in every field of human endeavour,” he said, calling for an inquiry or international cooperation on investigating the issue.
“The key message I want to convey is that we have to start now.”
AI Community Worried
Hill’s speech comes after a decision by the International Conference on Machine Learning (ICML) to ban authors from using the chatbot to write scientific papers.
“During the past few years, we have observed and been part of rapid progress in large-scale language models (LLM), both in research and deployment. This progress has not slowed down but only sped up during the past few months. As many, including ourselves, have noticed, LLMs released in the past few months, such as OpenAI’s chatGPT, are now able to produce text snippets that are often difficult to distinguish from the human-written text,” the ICML said.
“Such rapid progress often comes with unanticipated consequences.
“Unfortunately, we have not had enough time to observe, investigate and consider its implications for our reviewing and publication process. We thus decided to prohibit producing/generating ICML paper text using large-scale language models.”
US Defence Puts AGI on Watch List
Meanwhile, the U.S. Defense Information Systems Agency (DISA) has placed AGI on its watch list.
The DISA watch list is known for featuring items that later become pillars of U.S. defence, such as 5G, zero-trust digital defence, quantum-resistant cryptography, edge computing, and telepresence.
DISA Chief Technology Officer Stephen Wallace told an event hosted by the Armed Forces Communications and Electronics Association International (AFCEA) that the organisation had taken an interest in the technology.
“We’ve heard a lot about AI over the years, and there’s a number of places where it’s already in play,” Wallace said on Jan. 25, according to Defense News. “But this sort, the ability to generate content, is a pretty interesting capability.
“We’re starting to look at: How does [generative AI] actually change DISA’s mission in the department and what we provide for the department going forward.”