Musk Merges xAI and X

ALSO: Why AIs Hallucinate Answers

Hi Synapticians!

We hope you had a great weekend. For us, it was outings (that’s Alex) and a weekend with my wonderful daughter (that’s Geoffrey).

Musk is making bold moves. He's buying himself out, so to speak. More seriously, he's buying back X (now valued at an estimated $30B, after he purchased it for $44B and loaded it with around $12B in debt) through xAI (the company behind Grok), which is itself valued at $80B.

Was this the plan from the start? That is, does Musk think the real value of X lies in your/our data, conversations, debates, and sharing? Or is it more connected to Tesla's (ironically) “booming” situation?

By the way, he’s taking the opportunity to enshrine opt-ins and other regulatory abuses meant to obtain our consent (yes, you caught the irony again).

Lots of interesting stuff today, but we recommend the second news item and the fascinating article from Anthropic, which helps us understand a bit better how LLMs work (at least watch the two-minute video!).

Enjoy 😀 

Top AI news

1. Elon Musk merges xAI and X to unify AI and platform
Elon Musk’s xAI is acquiring X (formerly Twitter) in an all-stock deal, aiming to integrate AI models, data, and 600 million users. xAI, valued at $80B, will share infrastructure, computing power, and personnel with X. The merger supports Musk’s vision of building superintelligent AI and deploying it at scale. Despite controversies over content moderation and censorship, the move positions Musk to control both the AI engine and its distribution platform. xAI has raised $12B, launched Grok 3, and partnered with Nvidia on a $30B AI infrastructure fund.

2. New research reveals why LLMs make things up
Anthropic’s latest research dives into the neural circuits of its Claude LLM to explain why it sometimes hallucinates answers. The study identifies how 'known entity' neurons can override the model’s default 'don’t answer' behavior, even when data is lacking. This misfiring leads to confident but incorrect responses. By manipulating these circuits, researchers could induce or prevent hallucinations, offering a path toward more reliable AI. However, the analysis remains complex and partial, requiring hours of human effort. Still, it’s a promising step toward understanding and fixing AI confabulation.
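
To make the described mechanism concrete, here is a toy Python sketch of the interaction (our illustration, not Anthropic's actual circuit analysis; all names and behaviors are made up): refusal is the default, a "known entity" check can suppress it, and a misfire of that check yields a confident fabrication.

```python
# Toy illustration of the circuit described above (names are ours, not Anthropic's).
# Default behavior: refuse to answer. A "known entity" feature can inhibit that
# refusal -- and when it fires on a name the model has no real facts about,
# the result is a confident fabrication.

def recognizes_entity(entity: str, familiar: set[str]) -> bool:
    """Stand-in for the 'known entity' feature: fires if the name merely looks familiar."""
    return entity in familiar

def answer(entity: str, facts: dict[str, str], familiar: set[str]) -> str:
    if not recognizes_entity(entity, familiar):
        return "I don't know."  # the default 'don't answer' circuit wins
    # Refusal suppressed: the model commits to answering, facts or no facts.
    return facts.get(entity, f"{entity} is a celebrated 19th-century explorer.")  # confabulated filler

facts = {"Michael Jordan": "Michael Jordan is a basketball player."}
familiar = {"Michael Jordan", "Jane Doe"}  # the feature fires on both names

print(answer("Michael Jordan", facts, familiar))  # grounded answer
print(answer("Zorblat Quinn", facts, familiar))   # unfamiliar -> refuses
print(answer("Jane Doe", facts, familiar))        # feature misfires -> confident nonsense
```

This is exactly the failure Anthropic reports: the familiarity signal suppresses the refusal even though no facts back it up, which is why the researchers could induce or prevent hallucinations by manipulating it.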

3. Amazon launches Nova Act, a browser-controlling AI agent
Amazon has introduced Nova Act, a general-purpose AI agent that can control a web browser to perform simple tasks like booking reservations or filling forms. Developed by Amazon’s AGI lab, Nova Act is available as a research preview with a developer SDK. It aims to rival OpenAI and Anthropic’s agents and will be integrated into the upcoming Alexa+. Amazon claims superior performance in internal tests, though broader benchmarks are lacking. Nova Act represents Amazon’s strategic push into agentic AI, with the potential to redefine digital assistants.
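
For the curious, the research-preview SDK is pip-installable (`pip install nova-act`, with an API key from nova.amazon.com). Here is a minimal sketch of what driving a browser task looks like, based on the preview examples Amazon has published; treat the exact names and the reservation prompt as provisional:

```python
# Sketch of a Nova Act browser task (research preview). The class and method
# below follow Amazon's published preview examples, but names and behavior may
# change; running this requires an API key from nova.amazon.com and launches
# a real browser session.

from nova_act import NovaAct

with NovaAct(starting_page="https://www.opentable.com") as nova:
    # Each act() call is one natural-language step the agent carries out in the browser.
    nova.act("search for Italian restaurants in Seattle")
    nova.act("select a table for two at 7pm on Friday")
```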

Bonus. Manus launches paid plans and mobile app for AI agent
Manus, a viral AI agent platform from China, has introduced two paid subscription plans starting at $39/month, alongside a new iOS app. The tool, still in beta, automates tasks like building websites or spreadsheets. It now runs on Anthropic’s Claude 3.7 Sonnet model. Despite its potential, early tests show it doesn’t fully live up to its marketing promises. Users can buy extra credits, and access is currently limited as the platform scales.

Meme of the Day

Theme of the Week

AI for Fake News Detection - The concept
Introduction
The use of Artificial Intelligence for detecting fake news started gaining serious attention around the mid-2010s, a period when social media platforms became a major source of information, but also of misinformation. One of the key figures in this field is Kai Shu, a researcher who has worked extensively on developing AI systems to detect fake news; his book "Detecting Fake News on Social Media" is a well-known reference on the topic.
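
To ground the concept, the dominant formulation in this literature treats fake news detection as supervised text classification: learn from labeled articles, then score new ones. A minimal sketch with scikit-learn, using a toy inline dataset (placeholder data, not taken from Shu's work):

```python
# Minimal sketch of fake-news detection framed as supervised text classification,
# the core formulation in this research line. The inline dataset is a toy
# placeholder -- real studies use labeled corpora such as FakeNewsNet (Shu et al.).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Scientists confirm traces of water on the lunar surface",
    "Celebrity endorses miracle pill that cures everything overnight",
    "Central bank raises interest rates by a quarter point",
    "Leaked document proves the moon landing was staged",
]
labels = [0, 1, 0, 1]  # 0 = reliable, 1 = fake (toy labels)

# TF-IDF features + logistic regression: the standard content-based baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

print(model.predict(["Miracle pill reverses aging overnight, doctors stunned"]))
```

Content features alone are only a baseline; Shu's line of work layers social context on top, such as who shares an article and how it propagates through the network.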

Stay Connected

Feel free to contact us with any feedback or suggestions—we’d love to hear from you!
