Google app for the blind can now read product labels

Google’s artificial intelligence can now identify groceries in a supermarket. The new feature, designed to help people with visual impairments, is part of the Google Lookout app, which aims to help people with low vision or blindness better navigate their surroundings, according to the BBC.

The update adds a computer voice that says aloud what food the app believes a person is holding, based on its appearance. Google says the feature “will be able to distinguish between a can of corn and a can of green beans.”

Many apps, such as calorie trackers, have long used barcodes to identify what you are eating. Google says Lookout also uses image recognition to identify a product by its packaging.

The Android app stores a database of about two million “popular products” on the phone, and that catalogue changes based on where the user is, according to a Google AI blog post.

In a kitchen cabinet test by a BBC reporter, the app easily recognized a popular American hot sauce brand and a similar product from Thailand. It also read spice labels correctly.


But the app fared worse with fresh food and irregularly shaped items, such as onions, potatoes, tubes of tomato paste, and bags of flour. When it had trouble, the app’s voice asked the user to turn the item to a different angle, but in some cases this still did not work.

The Royal National Institute for the Blind (RNIB) has cautiously welcomed the new feature.

“Ideally, we would like accessibility to be built into the label design process, to make it easier for visually impaired people to navigate them,” said Robin Spinks of the RNIB.

But he said that, along with similar apps such as Be My Eyes and NaviLens, which are also available on the iPhone, the feature “can help increase the independence of people with vision loss by quickly and easily identifying products.”

Lookout uses technology similar to Google Lens, an app that can detect what a smartphone’s camera is looking at and show the user additional information. When it launched the app last year, Google recommended placing the smartphone in a shirt’s front pocket or on a lanyard around the neck so the camera could recognize objects in front of the user.

The update also adds a document scan feature, which photographs letters and other documents and sends them to a screen reader to be read aloud.

Google also says it has made improvements to the app based on feedback from visually impaired users.