
Google Lens will let smartphone cameras understand what they see and take action


At Google’s I/O developer conference, CEO Sundar Pichai announced a new technology called Google Lens. The idea behind the product is to leverage Google’s computer vision and A.I. technology to bring smarts directly to your phone’s camera. As the company explains, the smartphone camera won’t just see what you see, but will also understand what you see to help you take action.

During a demo, Google showed off how you could point your camera at something and Lens would tell you what it is – for example, identifying the flower you’re preparing to shoot. In another example, Pichai showed how Lens could handle a common task: connecting you to a home’s Wi-Fi network by snapping a photo of the sticker on the router.

In that case, Google Lens could identify that it’s looking at a network’s name and password, then offer you the option to tap a button and connect automatically.
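Google hasn’t detailed how Lens does this under the hood, but the described flow amounts to text recognition followed by a credentials parse. The sketch below is a minimal, hypothetical illustration of that idea using Google’s public Cloud Vision OCR (which Lens is not confirmed to use) and a made-up sticker format of “Network: … / Password: …”:

```python
import re
from google.cloud import vision  # public Cloud Vision OCR, used here only as a stand-in


def read_wifi_sticker(image_path):
    """OCR a photo of a router sticker and pull out the SSID and password.

    The "Network: ... / Password: ..." layout is a hypothetical example;
    real stickers vary widely.
    """
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Full-text OCR; the first annotation contains all of the detected text.
    response = client.text_detection(image=image)
    text = response.text_annotations[0].description if response.text_annotations else ""

    ssid = re.search(r"Network(?: name)?:\s*(\S+)", text, re.IGNORECASE)
    password = re.search(r"Password:\s*(\S+)", text, re.IGNORECASE)
    return (
        ssid.group(1) if ssid else None,
        password.group(1) if password else None,
    )


if __name__ == "__main__":
    ssid, password = read_wifi_sticker("router_sticker.jpg")
    if ssid and password:
        # A real assistant would now offer a one-tap "connect" action;
        # here we just print what it would act on.
        print(f"Found network '{ssid}' with password '{password}'")
```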

A third example was a photo of a business’s storefront – and Google Lens could pull up the name, rating, and other business listing information in a card that appeared over the photo.
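Presumably Lens ties the recognized storefront to Google’s own Maps and Knowledge Graph data. As a rough, hypothetical sketch of the lookup half of that flow, here is how a recognized business name could be matched to a listing using the public Google Places API via the `googlemaps` Python client (the recognition step itself is not shown, and the API key and business name are placeholders):

```python
import googlemaps  # public Places API client, standing in for Google's internal data

# Placeholder API key; the query name would come from the image recognition step.
gmaps = googlemaps.Client(key="YOUR_API_KEY")


def lookup_storefront(recognized_name):
    """Fetch the kind of listing details Lens overlays on the photo:
    name, rating, and address."""
    results = gmaps.places(query=recognized_name).get("results", [])
    if not results:
        return None
    top = results[0]
    return {
        "name": top.get("name"),
        "rating": top.get("rating"),
        "address": top.get("formatted_address"),
    }


print(lookup_storefront("Blue Bottle Coffee"))
```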

This technology essentially turns the camera from a passive tool that captures the world around you into one that lets you interact with what’s in your camera’s viewfinder.

In addition, Pichai showed off how Google’s algorithms could clean up and enhance photos. For example, if you’re taking a picture of your child’s baseball game through a chain-link fence, Google could automatically remove the fence from the photo. Or if you took a photo in low light, Google could automatically enhance it to make it less pixelated and blurry.

The company didn’t announce when Google Lens would be available, saying only that it’s arriving “soon.”


