Google has announced several new ways to use artificial intelligence (AI) to improve its search functions.

AI has played a crucial role in Google’s search technology from the beginning, improving the company’s language understanding capabilities. Through further investments in AI, Google has expanded its understanding of information to include images, videos and even real-world insights.

Here’s how Google is using this intelligence to improve search.

Search on your screen using Google Lens
One of the new ways Google is applying AI is through its Lens feature. Google Lens is becoming increasingly popular, now handling more than 10 billion searches a month.
With the new update, people can use Lens to search for information about whatever is on their mobile screen.
The technology will be available worldwide on Android devices in the coming months.
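Lens itself has no public developer API, but a rough sense of this kind of image-based lookup can be had from Google's separate Cloud Vision API and its web detection feature. The sketch below is only an analogue of what Lens does, not Google's actual implementation; the file name and the credential setup are assumptions.

```python
# Identify what appears in a screenshot using the Cloud Vision API's web
# detection, a rough stand-in for a Lens-style lookup. Assumes the
# google-cloud-vision package is installed and application default
# credentials are configured.
from google.cloud import vision


def describe_screenshot(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns entities and pages on the web that match the image.
    response = client.web_detection(image=image)
    for entity in response.web_detection.web_entities:
        print(f"{entity.description} (score: {entity.score:.2f})")


describe_screenshot("screenshot.png")  # hypothetical local file
```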

Multisearch
Another new feature, called multisearch, allows users to search with an image and text at the same time. It is available globally on mobile devices, in all languages and countries where Lens is available.
Google now allows people to use multisearch on any image they see in mobile search results. For example, if a user searches for “modern living room ideas” and sees a coffee table they like but in the wrong shape, they can add text such as “rectangle” to refine the search and find what they are looking for.
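Google has not published how multisearch ranks results, but the general idea of combining an image query with a text refinement can be illustrated with the open-source CLIP model. The sketch below is a minimal, assumed stand-in: the model choice, the averaging of embeddings, and the file names are all illustrative, not Google's method.

```python
# A minimal sketch of combined image + text retrieval, the general idea behind
# multisearch. Uses the open-source CLIP model; file names are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def embed(image=None, text=None):
    """Return a unit-length embedding for an image, a text string, or both combined."""
    parts = []
    with torch.no_grad():
        if image is not None:
            inputs = processor(images=image, return_tensors="pt")
            parts.append(model.get_image_features(**inputs))
        if text is not None:
            inputs = processor(text=[text], return_tensors="pt", padding=True)
            parts.append(model.get_text_features(**inputs))
    # Average the normalized modality embeddings into one query vector
    # (a simple heuristic; Google's actual combination method is not public).
    vec = torch.stack([p / p.norm(dim=-1, keepdim=True) for p in parts]).mean(dim=0)
    return vec / vec.norm(dim=-1, keepdim=True)


# "That coffee table, but rectangular": combine the picture with a text refinement...
query = embed(image=Image.open("coffee_table.jpg"), text="rectangle")

# ...then rank candidate images by cosine similarity to the combined query.
candidates = {name: embed(image=Image.open(name)) for name in ["table_a.jpg", "table_b.jpg"]}
scores = {name: float(query @ vec.T) for name, vec in candidates.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```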

Local search
Google has also added local search capability, allowing users to find what they need nearby.
The feature is currently available in English in the U.S. and will soon be rolled out worldwide.
Google is constantly working to make search more natural and visual. As the AI race heats up, we can likely expect more from the search giant in the coming months.