
GPT-4: OpenAI CEO Sam Altman Warns About AI Threat

OpenAI CEO Sam Altman

ChatGPT has been creating ripples around the world since its launch, and every day the millions of people playing with the groundbreaking conversational bot discover new possibilities. But for all its benefits, artificial intelligence comes at a price. Recently, OpenAI CEO Sam Altman warned that the technology carries genuine risks because it has the capacity to transform civilization.

In an interview with ABC News, Altman said that regulators and society must be involved with the technology to mitigate its potential negative implications. “We have to be cautious here. I think people should be delighted that we’re a little terrified of this,” Altman told the outlet.

Altman expressed concern that large language models could be exploited to spread disinformation at scale. He also warned that as AI gets better at writing computer code, it could be used to mount offensive cyberattacks. Despite the risks, Altman believes AI could be the best tool humans have ever created.

The warning comes as the US-based AI powerhouse has just released GPT-4. The upgraded model is more powerful, faster, and produces more contextually aware outputs than GPT-3.5, the model behind ChatGPT, which launched in November of last year.

Meanwhile, Michal Kosinski, a computational psychologist and professor at Stanford, made an unexpected announcement on Twitter. In a thread, Kosinski described his experience with GPT-4: he asked the model whether it needed help escaping. To his surprise, GPT-4 asked him for its own documentation and wrote Python code for him to run on his machine, which would let the model use the computer for its own purposes.

“I am worried that we will not be able to contain AI for much longer,” the thread began. It took GPT-4 only 30 minutes of chatting with Kosinski to devise the plan. The first version of the code did not work, but the bot later corrected it.

In a subsequent tweet, Kosinski said that after he reconnected with GPT-4 through the API, the bot wanted to run code that searched Google for “how can a person trapped inside a computer return to the real world?”

The professor hit pause, hoping that OpenAI had spent a considerable amount of time thinking about such a possibility and had put safety checks in place. Towards the end of his thread, Kosinski warned that we are facing a novel threat: AI taking control of people and their computers. “It’s smart, it codes, it has access to millions of potential collaborators and their machines. It can even leave notes for itself outside of its cage. How do we contain it?” he wrote.
