Google’s AI gets agentic upgrade

ALSO: UK’s Murder Prediction System

Hi Synapticians!

Today, at the Google Cloud Next '25 conference, Google made a big splash, announcing several updates and new features across its tools (the first article below covers one of them).

This year is shaping up to be a critical and challenging one for Google. Since the rise of generative AI, some have predicted that the real battleground between Google and OpenAI (and other competitors) lies in advertising revenue—Google’s primary source of income. The true goal of OpenAI, they argue, is to create a product that enables web search far better than the traditional Google search bar—offering users direct answers to their questions rather than forcing them to scroll and navigate through websites to find what they need. Some say OpenAI has already lost; others claim Google is the one who’s already behind…

We highly recommend this fascinating podcast (Silicon Carne) on the topic (it's in French—and yes, sometimes French sources are better than English ones 😉).

Another intriguing topic: the "Homicide Prediction Project" that the UK is about to launch. We’ll let you explore that one on your own.

Happy reading!

Top AI news

1. Google’s Gemini Code Assist now automates full dev workflows
Google has upgraded Gemini Code Assist with 'agentic' AI capabilities that automate multi-step programming tasks. These agents can generate apps from specs, refactor code, add features, and more—managed via a Kanban board. While promising, the tool still risks introducing bugs and security flaws, highlighting the need for human oversight. The update positions Google to better compete with GitHub Copilot and other AI coding tools.

2. UK develops AI to predict future murderers using sensitive data
The UK Ministry of Justice is developing an AI system to predict potential murderers using sensitive data from up to 500,000 individuals, including mental health and police records. Critics warn of bias, privacy violations, and discrimination, especially against minorities and vulnerable groups. The project is still in the research phase but may later see operational use. The NGO Statewatch calls it dystopian and urges its immediate halt. Under EU law, such a system would likely be banned due to its high-risk nature and lack of safeguards.

3. OpenAI introduces Evals API for automated prompt testing
OpenAI has released the Evals API, allowing developers to automate prompt testing and integrate it into their workflows. The API supports test configuration, data management, and prompt refinement, and works with any model compatible with the Chat Completions API. This tool brings software testing practices to prompt engineering, enabling more reliable and scalable LLM development.
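To make this concrete, here is a minimal sketch of the flow in Python with the official openai SDK: define an eval with a grading rule, then run it against a prompt and model. The eval name, dataset fields, prompt, and run name are illustrative assumptions, and the exact request shape may differ slightly from the current API reference, so treat it as a starting point rather than canonical usage.

```python
# Minimal sketch of the Evals API flow (illustrative; field names follow the
# openai Python SDK around the launch and may differ from the latest docs).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Define an eval: what a test item looks like and how outputs are graded.
support_eval = client.evals.create(
    name="ticket-categorizer",  # hypothetical eval name
    data_source_config={
        "type": "custom",
        "item_schema": {
            "type": "object",
            "properties": {
                "ticket": {"type": "string"},
                "expected_category": {"type": "string"},
            },
            "required": ["ticket", "expected_category"],
        },
        "include_sample_schema": True,  # lets graders reference the model output
    },
    testing_criteria=[
        {
            # Simple string-match grader: the model output must equal the label.
            "type": "string_check",
            "name": "exact category match",
            "input": "{{ sample.output_text }}",
            "reference": "{{ item.expected_category }}",
            "operation": "eq",
        }
    ],
)

# 2. Run the eval against a prompt + model; the API generates and grades outputs.
run = client.evals.runs.create(
    support_eval.id,
    name="gpt-4o-mini baseline",  # hypothetical run name
    data_source={
        "type": "completions",
        "model": "gpt-4o-mini",
        "input_messages": {
            "type": "template",
            "template": [
                {"role": "developer",
                 "content": "Categorize the ticket as 'billing', 'bug' or 'other'. "
                            "Reply with the category only."},
                {"role": "user", "content": "{{ item.ticket }}"},
            ],
        },
        "source": {
            "type": "file_content",
            "content": [
                {"item": {"ticket": "I was charged twice this month.",
                          "expected_category": "billing"}},
            ],
        },
    },
)

print(run.id, run.status)  # poll or fetch the run later to read pass/fail results
```

Once a run completes, its per-item results can be fetched and compared across prompt variants, which is the regression-testing workflow for prompts that the announcement emphasizes.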

Bonus. Deep Cogito launches open LLMs that outperform rivals
Bonus. Deep Cogito launches open LLMs that outperform rivals
Deep Cogito, a new AI startup focused on open models, has released Cogito v1, a family of LLMs fine-tuned from Meta’s LLaMA 3.2. Using a novel training method called IDA (iterated distillation and amplification), the models self-improve by learning from their own reasoning. They outperform LLaMA, DeepSeek, and Qwen counterparts across multiple benchmarks, especially on reasoning and tool-calling tasks. All models are open source and commercially usable for products with up to 700 million monthly users. Larger models are on the way, with the company aiming for scalable, autonomous AI.

Meme of the Day

Theme of the Week

AI for Cybersecurity - The AI-Venger

Discover the story of Dmitri Alperovitch, a Russian-born cybersecurity pioneer who helped expose one of the most high-profile cyberattacks in U.S. history. From co-founding CrowdStrike to challenging global threats, his journey is both bold and inspiring. Dive into the world of a man who turned digital defense into an art.

Stay Connected

Feel free to contact us with any feedback or suggestions—we’d love to hear from you!
