Google AI Studio recently introduced several new features aimed at making development inside the platform more flexible. Notably, code assist is now integrated directly into the Apps section, so developers can write and modify code right inside AI Studio, with immediate feedback and diff visualizations. Another update is that AI Studio now fetches submitted URLs directly, removing the previous reliance on search-based grounding. This means models like Gemini can read and process the actual textual content of a given web page, broadening the range of up-to-date information an app can draw on.
ICYMI: Google AI Studio now has code assist in the Apps section. Now you can do vibe coding straight inside AI Studio from anywhere with diffs and all that stuff.
— TestingCatalog News 🗞 (@testingcatalog) May 25, 2025
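The direct URL fetching described above is also exposed through the Gemini API as a URL context tool, so apps built in AI Studio can pull page content into a prompt programmatically. The sketch below shows what this might look like with the google-genai Python SDK; the model name and prompt are illustrative, not taken from the announcement.

```python
# Minimal sketch: asking a Gemini model to read a specific page via the URL context tool.
# Assumes the google-genai Python SDK is installed and GEMINI_API_KEY is set in the environment.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumption: any URL-context-capable Gemini model works here
    contents="Summarize the key points of https://example.com/changelog in three bullets.",
    config=types.GenerateContentConfig(
        # The URL context tool lets the model fetch the linked page directly
        # instead of relying on search-based grounding.
        tools=[types.Tool(url_context=types.UrlContext())],
    ),
)

print(response.text)
```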
Additionally, a new model, Gemma 3n E4B, has been added. This open, state-of-the-art model is both compact and multimodal, optimized for devices with limited resources. It handles text, image, and audio inputs, and introduces techniques such as per-layer embedding (PLE) parameter caching and a MatFormer architecture. These techniques reduce the model's memory footprint while still supporting a context window of up to 32,000 tokens and multilingual processing.
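For developers who want to try the model from code rather than the AI Studio UI, a minimal sketch with the same SDK might look like the following. The model identifier "gemma-3n-e4b-it" is an assumption based on AI Studio's naming conventions and may need to be adjusted to whatever ID your account lists.

```python
# Minimal sketch: calling the Gemma 3n E4B model through the google-genai SDK.
# The model ID below is an assumption and should be checked against the model list in AI Studio.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemma-3n-e4b-it",
    contents="Explain in two sentences what a 32,000-token context window lets an app do.",
)

print(response.text)
```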


The URL context feature lets AI Studio work with current, relevant web content directly, rather than being limited to training data or search-based grounding. Combined with in-app coding support and efficient models like Gemma 3n, it points toward a continued effort to make application development more adaptive, accessible, and context-aware for engineers of all backgrounds.