Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
In what comes as welcome news to long-suffering Alexa users who can’t do much more than set alarms and check the local weather, Amazon is building a “more generalized and capable” large language model (LLM) to power the device, according to comments yesterday from CEO Andy Jassy in the company’s first-quarter earnings call with investors. And just like Google, Microsoft and Meta did in their earnings calls this week, Amazon placed a strong focus on its overall commitment to AI.
In response to questions from Brian Nowak, managing director at Morgan Stanley, Jassy went into considerable depth about Amazon’s AI efforts around Alexa, which come against the backdrop of viral generative AI tools like ChatGPT and Microsoft 365 Copilot stealing Alexa’s thunder as a go-to personal assistant. Critics have said Alexa has stagnated — for example, last month The Information reported that Toyota planned to phase out its Alexa integration and was even considering integrating ChatGPT into its in-house voice assistant.
Generative AI ‘accelerates the possibility’ of improving Alexa
In the Amazon earnings call yesterday, Jassy said Amazon continues to have “conviction” about building “the world’s best personal assistant,” but that it is difficult to do across many domains and a broad surface area.
“However, if you think about the advent of large language models and generative AI, it makes the underlying models that much more effective such that I think it really accelerates the possibility of building that world’s best personal assistant,” he said.
Jassy added that the company starts from “a pretty good spot with Alexa, with its couple of hundred million of endpoints being used across entertainment and shopping and smart home and information and a lot of involvement from third-party ecosystem partners.” Amazon has had an LLM underneath it, Jassy explained, “but we’re building one that’s much larger and much more generalized and capable. And I think that’s going to really rapidly accelerate our vision of becoming the world’s best personal assistant. I think there’s a significant business model underneath it.”
Amazon CEO also focused heavily on AWS and AI
In response to another question from Nowak, Jassy also focused on key AI offerings from AWS, emphasizing that Amazon has been heavily investing in LLMs for several years, as well as in chips optimized for LLM workloads.
“In AWS, we’ve been working for several years on building customized machine learning chips, and we built a chip that’s specialized for training — machine learning training — which we call Trainium. [It’s] a chip that’s specialized for inference or the predictions that come from the model called Inferentia,” he said, pointing out that the company just released its second versions of Trainium and Inferentia.
“The combination of price and performance that you can get from those chips is pretty differentiated and very significant,” he said. “So we think that a lot of that machine learning training, inference will run on AWS.”
And while he said Amazon will be one of the small number of companies investing billions of dollars in building significant, leading LLMs, Jassy also highlighted Amazon’s ability to offer options to companies that want to use a foundational model in AWS and then customize it for their own proprietary data, needs and customer experience. Companies want to do that in a way where they don’t leak their unique IP into the broader generalized model, he explained.
“That’s what Bedrock is, which we just announced a week ago or so,” he said. Bedrock is a managed foundational model service where customers can run foundational models from Amazon or from leading LLM providers such as AI21, Anthropic and Stability AI.
“They can run those models, take the baseline, customize them for their own purposes and then be able to run it with the same security and privacy and all the features they use for the rest of their applications in AWS,” he said. “That’s very compelling for customers.”
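The workflow Jassy describes — pick a base model, customize it for your own purposes, then invoke it with standard AWS credentials and controls — corresponds to what AWS exposes through the boto3 `bedrock-runtime` client. A minimal sketch (the model ID, prompt and request schema here follow the Anthropic text-completion format as an illustration; a real call requires AWS credentials and Bedrock model access):

```python
import json


def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 200) -> dict:
    """Assemble the keyword arguments for a Bedrock invoke_model call.

    The request body schema varies by model provider; this uses the
    Anthropic Claude text-completion format as an example.
    """
    body = {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }


request = build_invoke_request("anthropic.claude-v2", "Summarize our Q1 results.")

# With AWS credentials and Bedrock access configured, the call would be:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(**request)
# print(json.loads(response["body"].read())["completion"])
```

Because the request builder is a pure function, the provider-specific payload can be swapped out without touching the invocation code — which mirrors Bedrock’s pitch of switching between base models behind one API.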