Google introduces Gemini 1.5 Flash, its fastest multimodal model yet
- by autobot
- May 14, 2024
- Source article
During Google I/O 2024, the company unveiled the latest addition to its Gemini family of AI models, Gemini 1.5 Flash, which it says is optimised for high-volume, high-frequency tasks at scale and is more cost-efficient to use. Gemini 1.5 Flash was created because developers needed a model that was lighter and less expensive than 1.5 Pro, which was announced in February.

According to Demis Hassabis, CEO of Google DeepMind, who wrote in a blog post, both 1.5 Pro and 1.5 Flash will have a 2 million token context window, available to developers and Google Cloud customers via a waitlist. Additionally, 1.5 Pro has had its code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding enhanced through data and algorithmic improvements.

1.5 Flash was built using a process called "distillation", in which the most essential knowledge and skills from a larger model (in this case 1.5 Pro) are transferred to a smaller, more efficient model. In short, 1.5 Pro is aimed at developers who need to handle more complex tasks, whereas 1.5 Flash is for those who prioritise speed.

Hassabis added that while 1.5 Flash may be lighter weight than 1.5 Pro, it still excels at summarisation, chat applications, image and video captioning, data extraction from long documents and tables, and more.
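To make the "distillation" idea concrete, here is a minimal sketch of the general technique in PyTorch. This is not Google's training code (which is not public); it only illustrates the standard recipe of training a smaller student model against a larger teacher's softened outputs alongside the usual label loss. The function name, temperature, and weighting are illustrative assumptions.

```python
# Minimal knowledge-distillation sketch (generic technique, not Google's code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend cross-entropy on ground-truth labels with a KL term that pulls
    the student's softened predictions toward the teacher's."""
    # Soften both output distributions with the temperature, then compare them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    # alpha balances how much the student imitates the teacher vs. the labels.
    return alpha * kd + (1 - alpha) * ce
```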
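For developers curious what using the lighter model looks like in practice, the sketch below calls Gemini 1.5 Flash through the Google AI Python SDK (google-generativeai) for one of the tasks the article highlights, summarisation. The model identifier and availability shown here are assumptions that may vary by region and over time.

```python
# Hedged usage sketch with the Google AI Python SDK (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

# "gemini-1.5-flash" is assumed to be the launch-time model identifier.
model = genai.GenerativeModel("gemini-1.5-flash")

# Summarisation is one of the tasks the article says 1.5 Flash excels at.
response = model.generate_content("Summarise this document: ...")
print(response.text)
```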