Writing Alexa’s next chapter by combining engineering and science

For many of us, using our voices to interact with computers, phones, and other devices is a relatively new experience made possible by services like Amazon’s Alexa.

But it’s old hat for Luu Tran.

An Amazon senior principal engineer, Tran has been talking to computers for more than three decades. An uber-early adopter of voice computing, Tran remembers the days when PCs came without sound cards, microphones, or even audio jacks. So he built his own solution.

“I remember when I got my first Sound Blaster sound card, which came with a microphone and software called Dragon NaturallySpeaking,” Tran recalls.

With a little plug-and-play engineering, Tran could suddenly use his voice to open and save files on a mid-1990s-era PC. Replacing his keyboard and mouse with his voice was a magical experience and gave him a glimpse into the future of voice-powered computing.

Fast forward to 2023, and we’re in the golden age of voice computing, made possible by advances in machine learning, AI, and voice assistants like Alexa. “Amazon’s vision for Alexa was always to be a conversational, natural personal assistant that knows you, understands you, and has some personality,” says Tran.

In his role, Tran has overseen the plan-build-deploy-scale cycle for many Alexa features: timers, alarms, reminders, the calendar, recipes, Drop In, Announcements, and more. Now he’s facilitating collaboration between the company’s engineers and the academic scientists who can help advance machine learning and AI — both full-time academics and those participating in Amazon’s Scholars and Visiting Academics programs.

Tran is no stranger to computing paradigm shifts. His previous experiences at Akamai, Mint.com, and Intuit gave him a front-row seat to some of tech’s most dramatic shifts, including the rise of the internet, the explosion of mobile, and the shift from on-premises to cloud computing.

Bringing his three decades of experience to bear in his role at Amazon, Tran is helping further explore the potential of voice computing by spurring collaborations between Amazon’s engineering and science teams. On a daily basis, Tran encourages engineers and scientists to work together as one — shoulder-to-shoulder — fusing the latest scientific research with cutting-edge engineering.

It’s no accident Tran is helping lead Alexa’s next engineering chapter. Growing up watching Star Trek, he’d always been fascinated with the idea that you could speak to a computer and it could speak back using AI.

“I’d always believed that AI was out of reach of my career and lifetime. But now look at where we are today,” Tran says.

The science of engineering Alexa

Tran believes collaboration with scientists is essential to continued innovation, both with Alexa and AI in general.

“Bringing them together — the engineering and the science — is a powerful combination. Many of our projects are not simply deterministic engineering problems we can solve with more code and better algorithms,” he says. “We must bring to bear a lot of different tech and leverage science to fill in the gaps, such as machine learning modeling and training.”

Helping engineers and scientists work closely together is a nontrivial endeavor, because they often come from different backgrounds, have different goals and incentives, and in some cases even speak different “languages.” For example, Tran points out that the word “feature” means something very different to product managers and engineers than it does to scientists.

“I’m coming from the perspective of an engineer who has studied some theory but has worked for decades translating technology ideas into reality, within real-world constraints. For me, it’s been less important to understand why something works than what works,” Tran says.

To realize the best of both worlds, Tran says, the Alexa team is employing an even more agile approach than it has used in the past — assembling project teams of product managers, engineers, and scientists, often in different combinations based on the goal, feature, or tech required. There’s no dogma or doctrine stating what roles must be on a particular team.

What’s most important, Tran points out, is that each team understands from the outset the customer need, the use case, the product-market fit, and even the monetization strategy. Bringing scientists into projects from the start is critical. “We always have product managers on teams with engineers and scientists. Some teams are split 50–50 between scientists and engineers. Some are 90% scientists. It just depends on the problem we’re going after.”

The makeup of teams changes as projects progress. Some start out heavily weighted toward engineering and then uncover a use case or problem that requires scientific research. Others start out predominantly science-based and, once a viable solution is in sight, gradually add more engineers to build, test, and iterate. This push and pull in how teams form and change — and the autonomy to organize and reorganize to iterate quickly — is key, Tran believes.

“Often, it’s still product managers who describe the core customer need and use case and how we’re going to solve it,” Tran says. “Then the scientists will say, ‘Yeah, that’s doable,’ or ‘No, that’s still science fiction.’ And then we iterate and kind of formalize the project. This way, we can avoid spending months and months trying to build something that, had we done the research up front, we’d have known wasn’t possible with current tech.”

Engineering + science = Smarter recipe recommendations

A recent project that benefited from the new agile, collaborative approach is Alexa’s new recipe recommendation engine. To deliver a relevant recipe recommendation to a customer who asks for one — perhaps to an Amazon Echo Show on a kitchen counter — Alexa must select a single recipe from its vast collection while also understanding the customer’s desires and context. All of us have unique tastes, dietary preferences, potential food allergies, and real-time contextual factors, such as what’s in the fridge, what time of day it is, and how much time we have to prepare a meal.

Alexa, Tran explains, must factor all of these parameters into its recipe recommendation and — in milliseconds — return a recipe it believes is both highly relevant (e.g., a Mexican dish) and personal (e.g., no meat for vegetarian customers). The technology required to respond with relevant, safe, satisfying recommendations for every customer is mind-bogglingly complex. “This is not something you can build using brute-force engineering,” Tran notes. “It requires a lot of science.”
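
To make that concrete, here is a minimal sketch of what context-aware recipe scoring can look like. Everything in it (the Recipe and Context types, the constraint checks, the weights) is an illustrative assumption, not Alexa’s actual implementation. Hard constraints such as dietary rules, allergens, and time budget eliminate a candidate outright; soft signals such as a cuisine match merely boost its score.

```python
# Illustrative sketch only; types, fields, and weights are assumptions.
from dataclasses import dataclass


@dataclass
class Recipe:
    name: str
    cuisine: str
    contains_meat: bool
    allergens: set
    prep_minutes: int


@dataclass
class Context:
    preferred_cuisines: set
    vegetarian: bool
    allergens: set
    minutes_available: int


def score_recipe(recipe: Recipe, ctx: Context) -> float:
    """Score a candidate; hard constraints zero it out, soft signals boost it."""
    # Hard constraints: dietary rules, allergies, and the time budget.
    if ctx.vegetarian and recipe.contains_meat:
        return 0.0
    if recipe.allergens & ctx.allergens:
        return 0.0
    if recipe.prep_minutes > ctx.minutes_available:
        return 0.0
    # Soft signal: a cuisine match makes the recipe more relevant.
    return 2.0 if recipe.cuisine in ctx.preferred_cuisines else 1.0


candidates = [
    Recipe("Chicken tinga tacos", "Mexican", True, set(), 40),
    Recipe("Black bean enchiladas", "Mexican", False, set(), 30),
    Recipe("Mushroom risotto", "Italian", False, set(), 50),
]
ctx = Context({"Mexican"}, vegetarian=True, allergens=set(), minutes_available=45)
best = max(candidates, key=lambda r: score_recipe(r, ctx))
print(best.name)  # -> Black bean enchiladas
```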

Building the new recipe engine required two parallel projects: a new machine learning model trained to look through and select recipes from a corpus of millions of online recipes, and a new inference engine to ensure each request Alexa receives is appended with de-identified personal and contextual data. “We broke it down, just like any other process of building software,” Tran says. “We wrote our plan, identified the tasks, and then decided whether each task was best handled by a scientist or an engineer, or maybe a combination of both working together.”
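
The second piece, the request-enrichment step, might be sketched as follows. The function, its fields, and the salted-hash pseudonymization are assumptions for illustration only; a production system would rely on a managed pseudonymization service with secret salts rather than a literal string.

```python
# Hypothetical request-enrichment step: the outgoing request carries a
# one-way pseudonym instead of the account ID, plus context and stored
# (de-identified) preferences. Field names are illustrative.
import hashlib


def enrich_request(utterance: str, account_id: str, hour_of_day: int,
                   preference_store: dict) -> dict:
    # A real system would use a secret, managed salt, not a literal string.
    pseudonym = hashlib.sha256(f"salt:{account_id}".encode()).hexdigest()[:16]
    prefs = preference_store.get(pseudonym, {})  # empty for first-time users
    return {
        "utterance": utterance,
        "user_key": pseudonym,                    # no direct identifier
        "context": {"hour_of_day": hour_of_day},  # contextual signal
        "preferences": prefs,                     # de-identified personal data
    }


enriched = enrich_request("suggest a dinner recipe", "acct-123", 18, {})
print(enriched["user_key"], enriched["context"])
```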

Tran says the scientists on the team largely focused on the machine learning model. They started by researching all existing, publicly available ML approaches to recipe recommendation — cataloguing the model types and narrowing them down based on what they believed would perform best. “The scientists looked at a lot of different approaches — Bayesian models, graph-based models, cross-domain models, neural networks, and collaborative filtering — and settled on a set of six models they felt would be best for us to try,” Tran explains. “That helped us quickly narrow down without having to exhaustively try every potential model approach.”
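
Collaborative filtering, one of the families the scientists considered, is easy to sketch. The item-item variant below, using cosine similarity over a toy interaction matrix, is a generic textbook formulation, not the model the team shipped.

```python
# Item-item collaborative filtering over a toy user-recipe matrix.
import numpy as np

# Rows = users, columns = recipes; 1 means the user picked that recipe.
interactions = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
], dtype=float)

# Cosine similarity between recipe columns.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)


def recommend(user_row: np.ndarray, k: int = 2) -> np.ndarray:
    """Score unseen recipes by similarity to recipes the user already picked."""
    scores = sim @ user_row
    scores[user_row > 0] = -np.inf  # never re-recommend what they've seen
    return np.argsort(scores)[::-1][:k]


print(recommend(interactions[0]))  # -> [1 3] for the first user
```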

The engineers, meanwhile, got to work designing and building the new inference engine to better capture and analyze user signals, both implicit (e.g., the time of day) and explicit (e.g., whether the user asked for a dinner or a lunch recipe). “You don’t want to recommend cocktail recipes at breakfast time, but sometimes people want to eat pancakes for dinner,” jokes Tran.
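
That interplay between explicit and implicit signals might look roughly like the following sketch, in which an explicit request always wins and the time of day is only a fallback. The meal-window hours are invented for illustration.

```python
# Explicit requests override the implicit time-of-day fallback.
def infer_meal_type(utterance: str, hour_of_day: int) -> str:
    # Explicit signal: the user named a meal outright.
    for meal in ("breakfast", "lunch", "dinner"):
        if meal in utterance.lower():
            return meal
    # Implicit signal: fall back to the time of day (invented windows).
    if hour_of_day < 11:
        return "breakfast"
    if hour_of_day < 16:
        return "lunch"
    return "dinner"


print(infer_meal_type("find me a dinner recipe", 9))  # dinner: explicit wins
print(infer_meal_type("what should I cook?", 9))      # breakfast: time of day
```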

The inference engine had to be built to accommodate queries from existing users and new users who’ve never asked for a recipe recommendation. Performance and privacy were key requirements. The engineering team had to design and deploy the engine to optimize throughput while minimizing computation and storage costs and complying with customer requests to delete personal information from their histories.
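
Those two requirements, graceful handling of first-time users and deletable personal history, suggest a storage interface along these lines. The class and its methods are hypothetical, not Alexa’s API.

```python
# Hypothetical preference store meeting both requirements named above.
class PreferenceStore:
    def __init__(self) -> None:
        self._history: dict[str, list[str]] = {}  # user_key -> recipe ids

    def get(self, user_key: str) -> list[str]:
        # Cold start: unknown users get an empty history, so downstream
        # ranking can degrade gracefully to non-personalized defaults.
        return self._history.get(user_key, [])

    def record(self, user_key: str, recipe_id: str) -> None:
        self._history.setdefault(user_key, []).append(recipe_id)

    def delete_user(self, user_key: str) -> None:
        # Privacy: purge all stored personal history on customer request.
        self._history.pop(user_key, None)


store = PreferenceStore()
store.record("abc123", "enchiladas-42")
print(store.get("abc123"))   # ['enchiladas-42']
store.delete_user("abc123")
print(store.get("abc123"))   # [], treated like a new user again
```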

Once the new inference engine was ready, the engineers integrated it with the six ML models built and trained by the scientists, connected it to the new front-end interface built by the design team, and tested the models against each other to compare the results. Tran says all six models improved conversion (a “conversion event” is triggered when a user selects a recommended recipe) versus baseline recommendations, but one model outperformed the others by more than 100%. The team selected that model, which is in production today.
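
The head-to-head comparison comes down to conversion rates. The sketch below uses fabricated placeholder counts (not Amazon’s numbers) to show how one model’s lift over a baseline can exceed 100%.

```python
# Comparing per-model conversion rates against a baseline.
# All counts here are fabricated placeholders for illustration.
def conversion_rate(selections: int, served: int) -> float:
    return selections / served if served else 0.0


baseline = conversion_rate(800, 20_000)         # 4.00%
models = {
    "model_a": conversion_rate(1_050, 20_000),  # 5.25%
    "model_b": conversion_rate(1_700, 20_000),  # 8.50%
}
for name, rate in models.items():
    lift = (rate - baseline) / baseline * 100
    print(f"{name}: {rate:.2%} ({lift:+.0f}% lift vs. baseline)")
# model_b's lift exceeds 100%, analogous to the winning model above.
```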

The recipe project doesn’t end here, though. Now that it’s live and in production, there’s a process of continual improvement. “We’re always learning from customer behavior. Which are the recipes that customers were really happy with? And which are the ones they never pick?” Tran says. “There’s continued collaboration between engineers and scientists on that, as well, to refine the solution.”

The future: Alexa engineering powered by science

To further accelerate Alexa innovation, Amazon formed the Alexa Principal Community — a matrixed team of several hundred engineers and scientists who work on and contribute to Alexa and Alexa-related technologies. “We have people from all parts of the company, regardless of who they report to,” adds Tran. “What brings us together is that we’re working together on the technologies behind Alexa, which is fantastic.”

Earlier this year, more than 100 members of that community convened, both in person and remotely, to share, discuss, and debate Alexa technology. “In my role as a member of the community’s small leadership team, I presented a few sessions, but I was mostly there to learn from, connect with, and influence my peers,” Tran says.

Tran is thoroughly enjoying his work with scientists, and he feels he’s benefiting greatly from the collaboration. “Working closely with lots of scientists helps me understand what state-of-the-art AI is capable of so that I can leverage it in the systems that I design and build. But they also help me understand its limitations so that I don’t overestimate and try to build something that’s just not achievable in any realistic timeframe.”

Tran says that today, more than ever, is an amazing time to be at Alexa. “Imagination has been unlocked in the population and in our customer base,” he says. “So the next question they have is, ‘Where’s Alexa going?’ And we’re working as fast as we can to bring new features to life for customers. We have lots of things in the pipeline that we’re working on to make that a reality.”


