Geoffrey Hinton, a prominent computer scientist sometimes referred to as the “godfather of AI,” has quit Google and now says he fears what AI could mean for misinformation and people’s livelihoods. Like many other tech luminaries, Hinton is concerned about the implications of artificial intelligence, according to an interview with The New York Times published Monday.
Hinton said he fears that average people won’t be able to tell the difference between real and AI-generated photos, videos and text and that AI might also kill jobs, upending not just rote work or number crunching but also more advanced careers.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Taking the current AI trajectory a step further, Hinton fears that AI could generate its own computer code, become autonomous and weaponize itself. And now that AI has been unleashed, he said, there’s no way to really control or regulate it. While companies may agree to a set of terms, countries may continue developing AI tech in secret, not wanting to cede any ground.
Hinton and two of his students built a neural network, a mathematical system that learns new skills by analyzing an existing data set, and showed that it could teach itself to identify objects in photos. Google acquired their startup in 2013 for $44 million. Hinton, along with Yoshua Bengio and Yann LeCun, won the Turing Award in 2019 for their work on neural networks.
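The core idea behind such a network can be sketched in a few lines of code. The PyTorch snippet below is only an illustration of the concept, not Hinton’s actual system; the tiny architecture, the random stand-in images and the 10 object classes are all hypothetical.

```python
# Minimal sketch of a neural network that learns to label images.
# Illustration only: real image classifiers are far larger and are
# trained on millions of labeled photos.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # detect simple local patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # summarize the whole image
    nn.Flatten(),
    nn.Linear(16, 10),                           # score 10 hypothetical object classes
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in data: in practice this would be a collection of labeled photos.
images = torch.randn(8, 3, 32, 32)   # batch of 8 small RGB images
labels = torch.randint(0, 10, (8,))  # one class label per image

# One learning step: predict, measure the error, nudge the weights.
scores = model(images)
loss = loss_fn(scores, labels)
loss.backward()
optimizer.step()
```

Repeated over a large data set, those small weight adjustments are what let the system “teach itself” to recognize objects.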
“Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google,” Jeff Dean, chief scientist at Google, told CNET in an emailed statement. “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”
Dean went on to say that Google was one of the first companies to publish AI Principles and that it’s “continually learning to understand emerging risks while also innovating boldly.”
AI chatbots like ChatGPT took the world by storm late last year by answering just about any question with human-like responses. From poems to resumes, generative AI produces unique, novel output each time it’s asked. It upends the internet search paradigm of typing in a query and filtering through a list of website links to find an answer. Generative AI does this by combing through massive datasets and putting together the sentences that make the most sense. It’s been described as autocomplete on steroids.
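To make “putting together sentences that make the most sense” concrete, here is a toy word-prediction example in plain Python. It is a loose analogy only: real chatbots use huge neural networks trained on enormous amounts of text, and the tiny corpus and generate helper below are made up for illustration.

```python
# Toy "autocomplete" illustration: pick the next word in proportion to how
# often it followed the previous word in a (tiny) sample of text.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Systems like ChatGPT do something conceptually similar at a vastly larger scale, predicting one likely word (or word fragment) at a time.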
With the launch of ChatGPT, many companies have integrated AI into their products. Microsoft revamped Bing to include the same tech powering ChatGPT. Apps like Photoshop, Grammarly and WhatsApp are also embracing AI. Google responded by releasing its own AI-powered chatbot, Bard, a launch that it fumbled. Compared with Bing and ChatGPT, Bard hasn’t impressed, although Google is reportedly working on an AI-powered search engine. AI will likely be a key topic at this month’s Google I/O; if the company doesn’t plant its flag firmly there, it could be left behind.
Microsoft is also looking to ensure responsible use of AI. On Monday it published a blog post about embedding guidelines within the company and investing in a diverse talent pool to guide future development.
Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.