Google unleashes 2M token context and code execution for Gemini developers


Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)


Google has announced a series of updates to its Gemini AI platform, offering developers enhanced capabilities and access to more powerful tools. The tech giant is opening up the two million token context window for Gemini 1.5 Pro to all developers, introducing code execution features, and adding Gemma 2 to Google AI Studio.

The two-million-token context window, previously gated behind a waitlist, is now available to all developers using Gemini 1.5 Pro. A context of that size lets a single prompt carry very large inputs, such as lengthy documents or entire codebases, so the model can analyse and generate content against the full material at once.
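As a rough illustration, a long document can simply be included as another part of the request body sent to the model. The sketch below builds a `generateContent` request in the v1beta REST shape of the Gemini API; the endpoint URL in the comment and the placeholder document are assumptions for illustration, so check the current API docs before relying on the exact shape.

```python
import json

# Placeholder standing in for a very large input (a real request could
# carry up to ~2M tokens of text with Gemini 1.5 Pro).
long_document = "lorem ipsum " * 1000

# Request body in the v1beta generateContent shape (an assumption here,
# verify against the current Gemini API reference).
payload = {
    "contents": [{
        "role": "user",
        "parts": [
            {"text": "Summarise the key obligations in this contract:"},
            {"text": long_document},
        ],
    }]
}

# This JSON body would be POSTed to something like:
# https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=API_KEY
body = json.dumps(payload)
```

The point of the large window is that the whole document travels in one prompt rather than being chunked and stitched back together by the developer.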

To address potential cost concerns associated with larger inputs, Google has implemented context caching for both Gemini 1.5 Pro and 1.5 Flash. This feature aims to reduce costs for tasks that reuse tokens across multiple prompts.
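The economics are easy to sketch: if many prompts share the same large context, billing that shared portion at a discounted cached rate rather than the full input rate compounds quickly. The per-token prices below are made-up placeholders, not Google's actual rates, and the sketch ignores cache storage fees, so treat it as back-of-envelope reasoning only.

```python
# Hypothetical per-token prices -- placeholders for illustration,
# NOT Google's actual Gemini pricing.
PRICE_PER_INPUT_TOKEN = 3.50 / 1_000_000    # assumed standard input rate ($/token)
PRICE_PER_CACHED_TOKEN = 0.875 / 1_000_000  # assumed discounted cached rate ($/token)

shared_context_tokens = 1_500_000  # e.g. a large codebase reused on every call
prompts = 20                       # number of prompts made against that context

# Without caching, the shared context is re-billed at full price per prompt.
without_cache = prompts * shared_context_tokens * PRICE_PER_INPUT_TOKEN

# With caching, the same tokens are billed at the cached rate instead.
with_cache = prompts * shared_context_tokens * PRICE_PER_CACHED_TOKEN

print(f"without caching: ${without_cache:.2f}")
print(f"with caching:    ${with_cache:.2f}")
```

Whatever the real rates are, the saving scales with both the size of the shared context and the number of prompts that reuse it.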

Code execution capabilities

In a move to improve accuracy in mathematical and data reasoning tasks, Google has enabled code execution for Gemini 1.5 Pro and 1.5 Flash. This feature allows the model to generate and run Python code, learning iteratively from the results. The execution environment is sandboxed without internet access and includes several numerical libraries. Developers are billed based on the output tokens from the model.

“This is our first step forward with code execution as a model capability and it’s available today via the Gemini API and in Google AI Studio under ‘advanced settings’,” says Google.
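Outside AI Studio, the feature is switched on per request. The sketch below shows a `generateContent` request body with the `code_execution` tool enabled, again in the v1beta REST shape; the field names follow the Gemini API documentation at the time of writing, but treat the exact shape as an assumption and confirm against current docs.

```python
import json

# generateContent request body with the code_execution tool enabled
# (v1beta REST shape -- an assumption, verify against current docs).
payload = {
    "tools": [{"code_execution": {}}],
    "contents": [{
        "role": "user",
        "parts": [{
            "text": "What is the sum of the first 50 prime numbers? "
                    "Generate and run Python code to compute it."
        }],
    }],
}

body = json.dumps(payload, indent=2)
```

With the tool enabled, the model's response can interleave generated code and its execution results, letting it check and refine its own working inside the sandbox rather than guessing at arithmetic.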

Gemma 2 integration and Gemini 1.5 Flash in production

To further democratise AI development, Google is making Gemma 2, its open model, available in Google AI Studio for experimentation. This move allows developers to explore and integrate Gemma 2 alongside the Gemini models.

In addition, Google highlighted several use cases of Gemini 1.5 Flash in production—showcasing its speed and affordability:

  • Envision: An app providing real-time environment descriptions for people with low vision.
  • Plural: An automated policy analysis platform summarising complex legislation.
  • Zapier: Utilising video reasoning capabilities for automation in video editing.
  • Dot: An AI leveraging 1.5 Flash for information compression tasks in long-term memory systems.

The company also announced that text tuning for Gemini 1.5 Flash is now in the red-teaming phase and will be gradually rolled out to developers. Full access to Gemini 1.5 Flash tuning via the Gemini API and Google AI Studio is expected by mid-July.

Developers interested in exploring these new features can join the conversation on Google’s developer forum. Enterprise developers are encouraged to explore Vertex AI, which Google touts as the most enterprise-ready genAI platform.

(Image Credit: Google)

See also: Google: Third-party stores in Play Store would cost over $61M


