Google has released a number of updates to its AI products, including the release of Gemini 1.5 Flash, improvements to Gemini 1.5 Pro, and progress on Project Astra, the company’s vision for an AI assistant of the future.
The new model in Google’s lineup, Gemini 1.5 Flash, is designed to be faster and more efficient for large-scale use. Although lighter-weight than 1.5 Pro, it retains the breakthrough long context window of one million tokens and the ability to reason multimodally across large volumes of data.
Demis Hassabis, CEO of Google DeepMind, stated: “1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it was trained by 1.5 Pro through a process called distillation, which transfers the most essential knowledge and skills from a larger model to a smaller, more efficient one.”
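The distillation idea Hassabis describes can be made concrete with a small sketch: the smaller “student” model is trained to match the softened output distribution of the larger “teacher.” The function names, temperature value, and logits below are illustrative only, not Google’s actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among classes, not just its top answer.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's: the loss grows where the student diverges from the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a positive loss.
loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In practice this KL term is typically combined with an ordinary cross-entropy loss on the true labels, and gradients flow only through the student.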
Meanwhile, Google has significantly enhanced its Gemini 1.5 Pro model, expanding the context window to an unprecedented two million tokens. The model’s logical reasoning, code generation, multi-turn conversation, and image and audio understanding have all been improved.
Additionally, the company has integrated Gemini 1.5 Pro into Google products, including the Gemini Advanced and Workspace apps. Furthermore, Gemini Nano can now process multimodal inputs, handling images in addition to text.
Google also unveiled Gemma 2, the next generation of its open models, built for breakthrough efficiency and performance. The Gemma family gains PaliGemma as well, the company’s first vision-language model, inspired by PaLI-3.
Lastly, Google presented its vision for the future of AI assistants with Project Astra (advanced seeing and talking responsive agent). The company has built prototype agents with improved contextual understanding, faster information processing, and more natural conversational responsiveness.
“Creating a universal agent that is helpful in daily life has always been our goal,” stated Google CEO Sundar Pichai. “Project Astra demonstrates multimodal understanding and real-time conversational capabilities.”
“With this kind of technology, it’s not hard to imagine a world in which people could wear glasses or a phone to have an expert AI assistant by their side.”
According to Google, some of these capabilities will arrive in its products later this year. Developers can find all of the company’s Gemini-related announcements there as well.