Representational image of a chatbot achieving human-like capability.
Tech entrepreneur and developer Siqi Chen’s claim that GPT-5 “will” achieve Artificial General Intelligence (AGI) by the end of this year has caused a stir in the AI world.
If the claim is accurate, this advance in AI technology could have far-reaching effects.
“I have been told that gpt5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI,” Chen tweeted on Monday.
“Which means we will all hotly debate as to whether it actually achieves AGI. Which means it will.”
This implies that, with the GPT-5 upgrade, generative AI may become indistinguishable from a human.
Meanwhile, Chen clarified that he didn’t mean achieving AGI with GPT-5 is a consensus belief within OpenAI, “but non-zero people there believe it will get there.”
AGI refers to an AI’s ability to learn and comprehend any task or concept that humans can, whereas conventional AI refers to a machine that performs specific tasks. AGI is thus a higher level of AI, one not restricted to particular tasks or functions.
On the plus side, AGI might boost productivity by accelerating AI-enabled processes and relieving humans of repetitive work.
Giving an AI so much authority, though, can have unforeseen and even negative effects. This might make it possible for extremely convincing human-like bots to spread on social media platforms, enabling harmful misinformation and propaganda to spread covertly.
Repercussions of AI power
Chen cites Yohei Nakajima’s recent tweet, in which he described an “AI founder” experiment that is “kind of blowing my mind.”
In the experiment, an AI is given a goal and asked to create its next task; it then continues to generate and reorder its own task list as it executes each one.
Although this experiment is currently only connected to search, Chen believes that with the power of chatbot plugins and GPT-5, “we are so much closer” to achieving AGI than many people believe.
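The experiment Chen describes follows a simple agent loop: pop a task, execute it, ask the model to propose follow-up tasks, then reprioritize the queue. A minimal sketch of that loop is below; all function names here are hypothetical stand-ins, and the stubbed helpers would be LLM calls in a real system like the one Nakajima describes.

```python
from collections import deque

def execute_task(objective, task):
    # Stand-in for an LLM call that performs the task toward the objective.
    return f"result of {task!r}"

def generate_new_tasks(objective, task, result, pending):
    # Stand-in for an LLM call that proposes follow-up tasks from the result.
    if len(pending) < 2:
        return [f"follow-up to {task!r}"]
    return []

def reprioritize(objective, tasks):
    # Stand-in for an LLM call that reorders the remaining task list.
    return deque(sorted(tasks))

def run_agent(objective, first_task, max_steps=5):
    """Self-directed task loop: execute, generate new tasks, reorder, repeat."""
    tasks = deque([first_task])
    results = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        result = execute_task(objective, task)
        results.append((task, result))
        tasks.extend(generate_new_tasks(objective, task, result, tasks))
        tasks = reprioritize(objective, tasks)
    return results

log = run_agent("research a topic", "draft an outline")
```

The `max_steps` cap is the only thing stopping the loop; as the text notes, the real experiment keeps generating its own work indefinitely, which is exactly what makes it striking.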
AGI raises significant concerns about the possible repercussions of granting ChatGPT-like AI that much power.
Elon Musk and over a thousand other tech leaders and researchers signed an open letter on Wednesday warning that AI development could accelerate the spread of false information and propaganda, and arguing that halting progress beyond GPT-4 would be in humanity’s best interest.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors,” said the letter.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”