Head over to our on-demand library to view sessions from VB Transform 2023.
The excitement around generative AI is a textbook example of a technology at the top of its hype cycle. Gartner’s latest Hype Cycle for Emerging Technologies places gen AI near the “peak of inflated expectations.”
For example, McKinsey has stated that the technology could add up to $4.4 trillion annually in global GDP. Sequoia Capital believes that entire industries will be disrupted. The Organization for Economic Co-operation and Development (OECD) said the wealthiest economies are on the brink of an AI revolution. Countries are competing too, perhaps prompted by Vladimir Putin’s statement from several years ago that “whoever becomes the leader in [AI] will become the ruler of the world.”
Seemingly everyone who is anyone has compared the impact of AI to that of fire, the printing press, electricity or the internet. An Insider op-ed claims the “crescendo for this technological wave is surging.” As evidence, look no further than a Wall Street Journal report on the intense competition for AI specialists, with many companies offering mid-six-figure salaries.
On the brink of transformation or tragedy: The future of gen AI
Certainly, the transformative potential of gen AI is visible. Yet there may be more than a whiff of hubris in the current enthusiasm, as serious problems remain. These include the propensity of chatbots to hallucinate answers; the perpetuation of biases inherent in training datasets; legal concerns about copyright, fair use and ownership that have already led to lawsuits; worries about the technology’s environmental footprint; fears that it could create torrents of disinformation; concerns about potential job losses, which have in turn prompted strikes by several unions; and distress over possible existential threats. All of these are considerable problems that will need to be overcome before widespread adoption.
New York University professor emeritus Gary Marcus has long been known for his dissenting views on deep learning generally and, most recently, gen AI. In his latest blog post, he posits that gen AI could be an economic “dud.” Beyond what he believes are limited use cases, he said, “The technical problems there are immense; there is no reason to think that the hallucination problem will be solved soon. If it isn’t, the bubble could easily burst.”
If the bubble — to use his term — were indeed to burst, it would lead to market disillusionment and slowing AI investments.
The threat of an AI winter: A historical perspective
If this scenario comes to pass, it will not be the first time AI has fallen from grace. Twice before, in the mid-1970s and again in the late 1980s, the field endured “AI winters”: periods when promises and expectations greatly outpaced reality, and people became disappointed in AI and the results it achieved.
In 1988, a New York Times article offered this analysis of AI: “People believed their own hype. Everyone was planning on growth that was unsustainable.”
It is unrealized or dashed promises that lead to AI winters. As projects flounder, people lose interest and the hype fades, as does research and investment. In 2023, the promises and expectations for AI could not be much higher. Could the predictions of massive impacts from gen AI similarly be overstated?
Is this time different?
Hardly a day passes without an announcement from an enterprise about how they are incorporating gen AI into their product offerings or new partnerships bringing the tech to market. However, companies are struggling to deploy AI.
In large part, this is because many of the products are still immature and businesses are attempting to understand use cases, data management requirements, risks, staff impacts and training needs, and how to incorporate the technology responsibly.
VentureBeat quotes Gartner analyst Arun Chandrasekaran: “Every vendor is knocking on the door of an enterprise CIO or CTO and saying, ‘We’ve got generative AI baked into our product,’” adding that executives are struggling to navigate this landscape.
It is a lot to assess. Nearly half (46%) of respondents in a recent global survey of IT leaders said their organizations are unprepared to implement AI. Furthermore, “more than half of surveyed respondents say they have not experimented with the latest AI natural language processing apps yet.”
The next generation of AI technologies
Even as significant problems remain and many companies are unprepared for widespread adoption, AI technology will likely continue to advance. For example, Google DeepMind is expected to soon release its “Gemini” system, which will combine the strengths of multiple approaches, including large language models (LLMs) and techniques akin to those behind its AlphaGo.
The net effect of Gemini, according to DeepMind cofounder and CEO Demis Hassabis, is to “add planning or the ability to solve problems” in addition to the language skills displayed in current models. Google hopes Gemini will surpass ChatGPT and other LLMs. For its part, OpenAI has not yet said anything about the availability of its next-generation GPT-5, although speculation has started since it filed a trademark application for the term several weeks ago.
Mitigating risks: Proactive measures in the AI industry
In a major step to address some of the problems with gen AI, the White House Office of Science and Technology Policy challenged hackers and security researchers to outsmart the top gen AI models. To their credit, eight companies, including OpenAI, Google, Meta, and Anthropic, agreed to participate.
Spanning three days, more than 2,000 people pitted their skills against the chatbots while trying to break them. As reported by NPR, the event was based on a cybersecurity practice called “red teaming”: attacking models to identify their weaknesses by tricking them into creating fake news, making defamatory statements and sharing potentially dangerous instructions.
As reported by CNBC, a White House spokesperson said, “Red teaming is one of the key strategies the Administration has pushed for to identify AI risks and is a key component of the voluntary commitments around safety, security and trust by seven leading AI companies that the President announced in July.”
The New York Times reported that the red-teamers “found political misinformation, demographic stereotypes, instructions on how to carry out surveillance and more.” The companies say they will use the data to make their systems safer. Stress testing the models and patching the problems found is a proactive way to identify and reduce risks in these AI systems.
Balancing gen AI promise and pitfalls
The continued advance of new gen AI features and capabilities, plus ongoing risk mitigation efforts, will, in turn, create greater urgency for companies to incorporate new AI products into their day-to-day operations.
As technological advances march forward, the specter of an AI winter looms, but so does the promise of transformative breakthroughs from maturing products. Whether we are witnessing the prelude to another AI winter or the dawn of a new era in technological advancement remains a complex question, which only time will tell.
Through continued collaboration, greater transparency and responsible innovation, we can ensure that AI’s potential is realized without succumbing to the pitfalls of the past. As long as the music keeps playing, the AI summer will continue.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.