Google has launched Gemini 3 across the Gemini app, Google Search AI Mode, AI Studio, Vertex AI, and Google Antigravity. Gemini 3 Pro is now in preview, while Deep Think mode is still being tested before it ships to Google AI Ultra subscribers. It is the first Gemini model to launch in Search on day one, giving users instant access to stronger reasoning and multimodal tools.
On paper, Gemini 3 brings major upgrades. It now tops key benchmarks, including LMArena, Humanity’s Last Exam, GPQA Diamond, MathArena Apex, Vending Bench 2, MMMU Pro, and Video MMMU, covering advanced reasoning, math, and multimodal tasks.

These results show PhD-level reasoning, long-horizon planning, and stronger multimodal understanding across text, images, video, audio, and code. The model offers a 1 million token context window, improved multilingual performance, and better resistance to prompt injections and misuse. Deep Think mode raises the ceiling even further and beats Gemini 3 Pro on several complex reasoning metrics.
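For a sense of scale, a 1 million token window can be budgeted with the common rough heuristic of ~4 characters per token for English text. A minimal sketch (the heuristic ratio and the reserve size are assumptions; real counts come from the model's own tokenizer):

```python
# Rough check of whether a document set fits in a 1M-token context
# window, using the common ~4 characters-per-token heuristic for
# English text. The ratio is an approximation, not a tokenizer.

CONTEXT_WINDOW = 1_000_000   # tokens, as stated for Gemini 3
CHARS_PER_TOKEN = 4          # rough English-text average (assumption)

def estimated_tokens(text: str) -> int:
    """Cheap estimate of token count for plain English text."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(docs: list[str], reserve: int = 8_000) -> bool:
    """True if the concatenated docs leave `reserve` tokens for the reply."""
    total = sum(estimated_tokens(d) for d in docs)
    return total + reserve <= CONTEXT_WINDOW

# A ~300-page book is roughly 600k characters -> ~150k tokens.
book = "x" * 600_000
print(fits_in_context([book]))       # one book fits comfortably
print(fits_in_context([book] * 7))   # seven books (~1.05M tokens) do not
```

The point of the reserve parameter is that the window must hold both the input and the generated reply, so a prompt that exactly fills the window is already too big.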

Google DeepMind says Gemini 3 is designed to act like a true thought partner. It gives concise, direct answers, avoids filler, generates code for high-fidelity visualizations, and consistently follows complex instructions.
What can it do now?
- Understand prompts with more depth and give clear, direct answers
- Work with any input, from text and images to video, PDFs, audio, and code
- Break down long videos and turn them into personalized explanations or training plans
- Turn research papers, lectures, and tutorials into interactive lessons or visual guides
- Build full apps and interactive tools from a single prompt
- Generate richer, more dynamic web UIs with strong zero-shot accuracy
- Operate terminals, run tests, debug code, and handle full agentic workflows
- Plan multi-step tasks like inbox cleanup, trip planning, or booking services
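The agentic bullets above reduce to a plan-act-verify loop: decompose a goal into steps, execute each one, and check the result before moving on. A minimal sketch of that loop, where every tool is a stand-in stub rather than anything Gemini actually runs:

```python
# Minimal plan-act-verify agent loop, illustrating the agentic workflow
# pattern described above. All three tools are illustrative stubs; a
# real agent would call a model to plan and would drive real terminals,
# test runners, and browsers.

def plan(goal: str) -> list[str]:
    """Stub planner: decompose a goal into ordered steps."""
    return [f"write code for: {goal}", "run tests", "verify output"]

def act(step: str, state: dict) -> dict:
    """Stub executor: pretend to perform one step and log it."""
    state.setdefault("log", []).append(step)
    state["last_ok"] = True  # a real agent would inspect actual results
    return state

def verify(state: dict) -> bool:
    """Stub verifier: did the last action succeed?"""
    return state.get("last_ok", False)

def run_agent(goal: str) -> dict:
    state: dict = {}
    for step in plan(goal):
        state = act(step, state)
        if not verify(state):  # on failure, a real agent would replan
            break
    state["done"] = verify(state)
    return state

result = run_agent("build a basic web app")
print(result["done"], result["log"])
```

The interesting design choice in real systems is what happens on the failure branch: instead of breaking out, the agent feeds the error back into the planner and retries.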
Gemini 3 arrives as Google tries to regain momentum in the AI race. A real jump in reasoning, multimodal reliability, and agentic behavior could help the company reassert strength across Search, Android, Chrome, and Workspace. It also lays the groundwork for Google’s next phase of AGI research, focused on blending reasoning, planning, memory, and tool use into one system.
Google also launched Antigravity, a free vibe coding IDE

Antigravity is Google’s first agentic IDE, built by a team that joined the company four months ago, when Google hired Windsurf CEO Varun Mohan and several of his team in a deal worth about 2.4 billion dollars. The goal is simple: serve both traditional developers and a rising group of vibe coders who prefer to build through natural-language prompts rather than manual coding.
Antigravity gives AI agents direct access to the code editor, terminal, and browser. Ask it to build a basic web app and it will write the code, run tests, debug issues, open the browser, verify the output, and hand you a ready-to-review result. In Google’s demo, it even built a small flight-tracking app and produced a full browser recording of the test run.
To help users understand what the agent is doing, Antigravity generates Artifacts such as plans, screenshots, task lists, and short recordings. Instead of scrolling through dense logs, you get clear checkpoints of how the agent is reasoning and what it is doing at each stage.
Taken together, Gemini 3 and Antigravity show how Google wants AI to move from chat replies to end-to-end workflows, where models plan, build, and ship usable products with minimal human intervention.
