Gemini updates have been coming fast lately; in no particular order:
Veo 2 officially launched.
The reasoning model went live separately in Google AI Studio and in Gemini (a pared-down version).
Native multimodal image generation and editing: Gemini 2.0 Flash (now officially named Gemini 2.0 Flash (Image Generation) Experimental).
Deep Research, chat history search, and so on.
This time it's "Canvas". You can access it directly at https://gemini.google.com/canvas, or turn it on from the Gemini interface:
What are the features of Gemini Canvas?
The Canvas writing interface: after selecting Canvas, enter any text to open the editing interface.
Gemini 2.0 Flash has built-in Imagen 3 functionality, but in Canvas mode it can only output text.
It would be really fun if the Gemini 2.0 Flash (Image Generation) Experimental capability were integrated into Canvas; a shame that it isn't.
When it recognizes a writing task, it shows a simple document layout editor along with a select-and-revise feature for individual paragraphs.
When it recognizes a code generation task, it expands a code generator (you cannot edit the code directly in the canvas) and lets you preview the code's output.
Generating SVG and React code works reasonably well (slightly worse than Claude). If you test it with the prompts described earlier for generating SVGs in Claude, the results may not be great and you will need to rework them; for example, prompts declared in LISP syntax generally fail to produce graphical SVGs.
You can take the graphic-card prompts mentioned earlier and test them one by one; the pattern soon becomes clear.
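When testing prompts in bulk like this, one quick way to tell a "graphical" SVG from a text-only one is to parse the output and look for drawing primitives. This is a hypothetical helper of my own, not anything Gemini provides, using only the Python standard library:

```python
import xml.etree.ElementTree as ET

# SVG elements that actually draw something on screen.
SHAPE_TAGS = {"path", "rect", "circle", "ellipse", "line", "polyline", "polygon"}

def is_graphical_svg(svg_text: str) -> bool:
    """Return True if the SVG contains at least one drawing primitive."""
    root = ET.fromstring(svg_text)
    for el in root.iter():
        # Strip the namespace: '{http://www.w3.org/2000/svg}rect' -> 'rect'
        tag = el.tag.rsplit("}", 1)[-1]
        if tag in SHAPE_TAGS:
            return True
    return False

print(is_graphical_svg(
    '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
))  # True
print(is_graphical_svg(
    '<svg xmlns="http://www.w3.org/2000/svg"><text>hello</text></svg>'
))  # False
```

A check like this only proves the SVG contains shapes, not that it looks good, but it filters out the prompts that degenerate into plain text.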
Try generating HTML too, for example with the recently popular trick of generating multi-page PPT-style presentations.