April 19, 2024

Euro Global Post- Latest News and Analysis | UK News | Business News


AI pioneer Geoffrey Hinton has left Google.

A scientist often referred to as “the founder of artificial intelligence” has left Google in order to warn the public about the risks posed by the technology, US media reported on Monday.

According to Geoffrey Hinton, who “developed the core technology underpinning” AI systems, the field’s developments pose “profound dangers to both society and humanity.”

According to Hinton, rivalry among the major tech corporations is pushing companies to release new AI technologies at a dangerous pace, putting jobs at risk and spreading false information.

In 2022, Google and OpenAI, the start-up behind the well-known AI chatbot ChatGPT, began building systems using significantly more data than ever before.

Because of the volume of data these algorithms were analysing, Hinton told the Times that he thought they were, in some respects, surpassing human intellect.

Although AI has been used to assist human workers, the quick rise of chatbots like ChatGPT may jeopardise jobs.

He told the Times that AI “takes away the drudge work” but might take away more than that.

In addition, the scientist cautioned against the potential for AI-generated false information to proliferate, warning the Times that the typical individual “will not be able to know what is true anymore.”

Last month, according to the Times, Hinton gave Google notice of his resignation.

In March, tech entrepreneur Elon Musk and a number of other specialists urged a pause in the development of AI systems so that they could be tested for safety.

The release of GPT-4, a significantly more powerful version of the technology used by ChatGPT, prompted an open letter signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak.

At the time, Hinton did not sign the letter, but he told The New York Times that researchers should not “scale this up more until they have understood whether they can control it.”