Google Assistant is the best digital assistant in the business right now, thanks to the company’s innovations in machine learning and the way it reaches into every part of our lives, from web search to smart home gadgets. However, there is still room for improvement.
The company aims to make chatting with the Assistant as simple and fluid as chatting with a friend or relative, but that goal remains elusive despite regular updates to the Assistant.
In March 2021, Google began using federated learning across the Android operating system to improve the detection accuracy of the “Hey Google” hotword that activates its voice assistant.
Federated learning is a machine learning technique that trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging those samples.
It now appears that the upcoming Personal Speech Recognition feature will help Google Assistant better recognize repeated words and names.
According to code found in recent versions of the Google app for Android, a personal speech recognition option appears in the Assistant settings.
The company describes the feature as follows: “Store audio recordings on this device to help the Assistant better recognize what you’re saying. Voice recordings remain on this device and can be deleted at any time by turning off personal speech recognition. Learn more.”
The “Learn more” link points to an existing support article about the company’s use of federated learning to improve hotword activation, which uses voice recordings stored on users’ devices to improve models such as “Hey Google” detection.
The Assistant learns how to adjust the model from that audio data, then sends a summary of the model changes to the company’s servers. These summaries are aggregated across many users to produce a better model for everyone.
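The train-locally-then-aggregate loop described above is the core idea of federated averaging. The sketch below is a minimal toy illustration of that idea with a simple linear model; it is not Google's implementation, and all function names and parameters are assumptions for illustration only.

```python
# Toy federated averaging: each "device" trains on its own private data,
# only a summary of model changes (a weight delta) leaves the device,
# and the server averages those deltas into an improved global model.
import numpy as np

def local_update(global_weights, local_x, local_y, lr=0.1, epochs=5):
    """On-device step: gradient descent on local data only."""
    w = global_weights.copy()
    for _ in range(epochs):
        pred = local_x @ w
        grad = local_x.T @ (pred - local_y) / len(local_y)
        w -= lr * grad
    # Only the delta (the "summary of model changes") is shared.
    return w - global_weights

def federated_round(global_weights, devices):
    """Server step: aggregate deltas from all participating devices."""
    deltas = [local_update(global_weights, x, y) for x, y in devices]
    return global_weights + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "devices", each holding private data that never leaves local_update.
devices = []
for _ in range(3):
    x = rng.normal(size=(20, 2))
    devices.append((x, x @ true_w))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, devices)
print(w)  # ends up close to true_w = [2.0, -1.0]
```

Note that the raw data samples (`local_x`, `local_y`) are only ever read inside `local_update`, which is what makes the scheme privacy-preserving by design.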
Google Assistant is getting better at understanding your voice
The upcoming feature looks to bring those machine learning-based improvements to actual Assistant commands, especially ones involving names and frequently spoken words.
Audio recordings of past speech are stored on the device and analyzed to make future transcriptions more accurate.
On devices like the second-generation Nest Hub and Nest Mini, Google uses a machine learning chip that locally processes the most common queries for a much faster response time. This concept may now expand beyond smart home devices to Android.
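The fast path on those devices can be pictured as a local handler that short-circuits the network round-trip for the most common queries. This is a hypothetical sketch of the concept only; the handler names, the query set, and the timings are assumptions, not Google's implementation.

```python
# Hypothetical on-device fast path: frequent queries are answered by a
# local handler, and everything else falls back to a (simulated) server.
import time

LOCAL_HANDLERS = {
    "turn on the lights": lambda: "lights on",
    "stop": lambda: "stopped",
    "set a timer": lambda: "timer set",
}

def answer_on_server(query):
    """Simulated cloud fallback with a network round-trip delay."""
    time.sleep(0.2)
    return f"server answer for: {query}"

def handle_query(query):
    handler = LOCAL_HANDLERS.get(query.lower())
    if handler:
        return handler()  # fast local path, no network needed
    return answer_on_server(query)  # uncommon query: go to the cloud
```

The design choice is a simple trade-off: a small, fixed set of high-frequency commands gets millisecond-scale local responses, while the long tail of queries still benefits from the full server-side model.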
And given Google’s stance on the Assistant and voice privacy, this is likely to be an opt-in feature, much like “Help improve Assistant.”
According to the company’s available description of the feature, the audio recordings remain on the device and are deleted when the feature is disabled.
Meanwhile, Google says this about turning off personal speech recognition: “If you turn this feature off, the Assistant will be less accurate at recognizing names and other words you say frequently. All audio used to improve speech recognition will be deleted from this device.”
It is not clear when this feature will launch or how much of an improvement it will deliver. But Google demonstrated at I/O 2022 how conversations with the Assistant could become more natural in the coming year, with the Assistant essentially ignoring interruptions, natural pauses, and other self-corrections.
And with this new feature, it looks like Google wants to make its Assistant better at understanding the commands and words that are most specific to you.