Apple announced that a new version of Siri is being trained to recognize custom speech patterns. This kind of awareness can help the voice assistant decide whether to respond to user commands loudly or in a whisper, AppleInsider reports.
Apple decided to investigate user speech patterns following complaints: iPhone users are frustrated by the lack of variety in Siri's features compared to competing personal assistants.
According to the patent application, Apple is improving Siri so that it can detect variations in ambient sound. The new feature would allow the voice assistant to respond with appropriate intonation, voice, and volume depending on how noisy the user's environment is.
The update would make Siri more competitive with Amazon's Alexa and other AI personal assistants. Ideally, it would allow users to whisper commands to Siri at night and receive answers in a whisper. Likewise, in a noisy room or other loud setting, the user could speak commands loudly and still be heard, receiving a slower but equally loud response.
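The behavior described above amounts to mapping a measured ambient noise level to a response style. A minimal sketch of that idea follows; the thresholds, field names, and values here are illustrative assumptions, not details from Apple's patent:

```python
def choose_response_style(ambient_db: float) -> dict:
    """Pick a response style from a rough ambient noise reading.

    The dB thresholds below are hypothetical examples, not values
    from Apple's patent application.
    """
    if ambient_db < 30:
        # Quiet room or nighttime: whisper the reply back.
        return {"mode": "whisper", "volume": 0.2, "rate": 1.0}
    elif ambient_db < 60:
        # Ordinary conversational background noise.
        return {"mode": "normal", "volume": 0.5, "rate": 1.0}
    else:
        # Noisy environment: louder and slightly slower, as the
        # patent description suggests.
        return {"mode": "loud", "volume": 0.9, "rate": 0.8}
```

For example, a reading of 20 dB would yield a whispered reply, while 75 dB would yield a loud, slowed-down one.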
Apple's voice assistant uses synthesized versions of various voice actors. The Neural Text to Speech engine then converts the processed command into audible speech. Users on iOS 14.5 and later can use Siri with two additional English voices that feature more human speech patterns.