Search and internet advertising giant Google made a flurry of announcements during the keynote of its Google I/O developer conference, unveiling many of the projects it has been working on.

“Seven years into our journey as an AI-first company, we’re at an exciting inflection point. We have an opportunity to make AI even more helpful for people, for businesses, for communities, for everyone. We’ve been applying AI to make our products radically more helpful for a while. With generative AI, we’re taking the next step. With a bold and responsible approach, we’re reimagining all our core products, including Search,” said Google CEO Sundar Pichai.

AI in Products


Gmail

Starting with Gmail, there are clear examples of how generative AI is driving the product’s evolution. Google introduced Smart Reply in 2017, offering quick responses you could select with a single click. Smart Compose followed, providing writing suggestions as you typed. More sophisticated AI-powered writing features came after that. Now, with a significantly more powerful generative model, Gmail takes the next step with “Help me write.”

Google Maps

Since the beginning of Street View, AI has stitched together billions of panoramic images so that users can explore the world from their devices. Immersive View, which uses AI to build a high-fidelity representation of a place so you can experience it before you visit, was introduced at last year’s I/O.

Google is now extending the same technology to do what Maps does best: help you get where you’re going. Google Maps provides 20 billion kilometers of directions every day, which is a lot of journeys. Imagine being able to see your whole trip in advance. With Immersive View for routes, you can, whether you’re driving, walking, or cycling.

Other information is layered in as well: you can check the weather, traffic, and air quality to see how conditions might change. Immersive View for routes will start rolling out over the summer and will be available in 15 cities, including London, New York, Tokyo, and San Francisco, by the end of the year.

Google Photos

Google Photos is another product that has benefited from AI. You can now search your images for things like people, sunsets, and waterfalls thanks to advances in machine learning.

Every month, 1.7 billion images are edited in Google Photos, and advances in AI enable more powerful ways to do this. Magic Eraser, an AI-powered computational photography tool, was first introduced on the Pixel. Later this year, a new tool called Magic Editor will let you do much more by combining semantic understanding with generative AI.


Take an example from the keynote: a photo from a child’s birthday party. As a parent, you want your child to be the focus, but the balloons were cut off in the shot. With Magic Editor, you can reposition the birthday kid, and the balloons and the parts of the bench that were not visible in the original photo are recreated automatically. As a final flourish, you can punch up the sky, which adjusts the lighting across the whole image so the edit looks seamless. It really is magical. Magic Editor is coming to Google Photos later this year.

Making AI more helpful for everyone

Google now has 15 products that each serve more than half a billion people and businesses, and six of them serve over two billion users each. That gives the company many opportunities to deliver on its mission: to organize the world’s information and make it universally accessible and useful.

It’s a timeless mission that feels more relevant with each passing year. Going forward, the most important way Google plans to advance that mission is by making AI helpful for everyone. Here are the four ways it plans to do that:

  • First, by improving your knowledge and learning and deepening your understanding of the world.
  • Second, by boosting creativity and productivity, so you can express yourself and get things done.
  • Third, by enabling developers and businesses to build their own transformative products and services.
  • Finally, by building and deploying AI responsibly, so that everyone can benefit equally.

PaLM 2 and Gemini

The ability to continually improve foundation models is necessary to make AI useful for everyone.

Google discussed PaLM last year, and it went on to power numerous product improvements. Now the company is introducing PaLM 2, its latest PaLM model, ready for production use.

PaLM 2 builds on Google’s latest infrastructure and foundational research. It is easy to deploy and highly capable across a wide variety of tasks, and more than 25 new products and features powered by PaLM 2 are being announced.


PaLM 2 delivers strong foundational capabilities across a range of model sizes, named Gecko, Otter, Bison, and Unicorn. Gecko is so lightweight that it can run on mobile devices and is fast enough for great interactive applications to work offline. Thanks to heavy training on scientific and mathematical text, PaLM 2 models have stronger logic and reasoning capabilities. They have also been trained on multilingual text spanning more than 100 languages, which helps them understand and generate nuanced results. PaLM 2 also offers strong coding capabilities and can help developers collaborate around the world.
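For developers, the Bison-sized tier was the one surfaced through the public PaLM API around this announcement. The snippet below is only a minimal sketch of calling it from the google-generativeai Python package, assuming an API key and the text-bison-001 model identifier; model names and parameters may differ in newer versions of the SDK.

```python
# Minimal sketch: querying the Bison-sized PaLM 2 model via the PaLM API.
# Assumes the google-generativeai package (pip install google-generativeai)
# and an API key from Google's developer console; identifiers may have changed.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

response = palm.generate_text(
    model="models/text-bison-001",   # assumed identifier for the Bison tier
    prompt="Explain, in two sentences, why multilingual training data "
           "helps a language model reason across languages.",
    temperature=0.4,                 # lower values give more focused output
    max_output_tokens=128,
)

print(response.result)  # the generated text, if the call succeeds
```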

Although PaLM 2 is highly capable on its own, it really shines when fine-tuned for a specific domain. Google recently released Sec-PaLM, a version tuned for security use cases. It uses AI to better detect malicious scripts and can help security experts understand and resolve threats.

Another example is Med-PaLM 2, which is fine-tuned on medical knowledge. Compared with the base model, this fine-tuning reduced inaccurate reasoning by 9x, approaching the performance of clinician experts who answered the same set of questions. In fact, Med-PaLM 2 was the first language model to perform at an “expert” level on questions modeled after medical licensing exams, and it is currently the state of the art. Imagine an AI collaborator that helps radiologists interpret images and communicate the results.


Google’s Brain team and DeepMind have contributed to many of the landmark AI developments of the past decade, including AlphaGo, Transformers, sequence-to-sequence models, and more. All of that work helped set the stage for the current inflection point.

The two groups were recently merged into a single unit, Google DeepMind, which is focused on building more capable systems safely and responsibly while drawing on Google’s computational resources.

That includes Gemini, Google’s next foundation model, which is currently in training. Gemini was built from the ground up to be multimodal, to be highly efficient at tool and API integrations, and to support future advances like memory and planning. Even though it is still early, the teams are already seeing impressive multimodal capabilities not present in earlier models.

Tools to Identify Generated Content

Watermarking and metadata are two important approaches. Watermarking embeds information directly into the content in ways that persist even through modest image editing. Google is building its models from the ground up to include watermarking and other techniques. Looking at how realistic synthetic images already appear, it is easy to imagine how important this will be in the future.

Metadata, which content creators can associate with the original files, gives you additional context when you come across an image. Google will make sure every one of its AI-generated images carries that metadata.
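Google did not spell out the format here, but the general idea of attaching provenance metadata to an image file can be illustrated with a small sketch. The example below uses Pillow to write text chunks into a PNG; the field names are hypothetical and are not Google’s actual schema.

```python
# Illustrative sketch only: attaching simple provenance metadata to a PNG.
# The keys ("GeneratedBy", "Model") are hypothetical, not Google's schema.
# Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), color="skyblue")  # stand-in for a generated image

metadata = PngInfo()
metadata.add_text("GeneratedBy", "example-image-generator")  # hypothetical field
metadata.add_text("Model", "demo-model-v1")                  # hypothetical field

image.save("generated.png", pnginfo=metadata)

# Reading the metadata back gives downstream viewers the extra context.
with Image.open("generated.png") as reloaded:
    print(reloaded.text)  # {'GeneratedBy': '...', 'Model': '...'}
```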

Updates to Bard

That is the opportunity Google sees with Bard, the conversational AI experiment it launched in March. Bard has been evolving quickly: it now supports a wide range of programming capabilities, it has become much better at answering questions involving logic and math, and as of now it runs entirely on PaLM 2.

Google Workspace is gaining new functionality as well. Alongside “Help me write” in Docs and Gmail, Duet AI in Google Workspace can generate images from text descriptions in Slides and Meet, create custom plans in Sheets, and more.

Introducing Labs and New Search Generative Experience

As AI continues to advance quickly, Google is focused on getting useful features into users’ hands. It is opening up a new way to preview some of these experiences across Workspace and other products, called Labs.


One of the first experiments users will be able to try in Labs involves Google Search, the product where Google began. Seeing opportunities to make Search better is what led the company to start investing heavily in AI many years ago.

Advances in language understanding make it possible to ask questions more naturally and get the most relevant content from the web. Advances in computer vision have opened up new ways to search visually: with Google Lens, you can search what you see even when you don’t have the words to describe it.

In fact, Lens is now used for more than 12 billion visual searches every month, a 4x increase in just the last two years. The combination of Lens and multimodality led to multisearch, which lets you search using both an image and text.

Google’s deep understanding of information, combined with the unique capabilities of generative AI, has the potential to transform how Search works: unlocking entirely new kinds of questions Search can answer and creating ever more helpful experiences that connect you to the richness of the web.

Of course, applying generative AI to Search is still in its early days. People all over the world rely on Search in critical moments, so it has to be done right and their trust has to be maintained. That will always be the North Star.

That is why Google is taking a responsible approach to this innovation, continuing to hold itself to the highest standards of information quality, and why the new Search Generative Experience will be available first in Labs.