Musk loses OpenAI legal battle?
ALSO: Google’s AI avoids politics


Hi Synapticians!
It seems OpenAI is in headline mode again! This time, they’re dropping $50 million into NextGenAI, a research initiative spanning 15 top universities, including Harvard and Oxford. They’re funding everything from AI-powered healthcare to text digitization, ensuring that academia stays in the race for cutting-edge AI advancements. It’s a smart move—after all, some of the biggest breakthroughs come from university labs, and OpenAI clearly wants a front-row seat.
Speaking of competitive positioning, Google’s handling of political queries with Gemini is raising eyebrows. While OpenAI and Meta embrace nuanced discussions, Google is still playing it extra safe, sometimes even refusing to answer basic factual political questions. Originally a move to avoid election controversy, this stance now makes Google look increasingly isolated in a market demanding transparency. Can they fix this without either upsetting regulators or falling behind their AI rivals? Stay tuned.
And finally, Elon Musk just took a legal swing at OpenAI’s shift to a for-profit model—and missed. A judge ruled there wasn’t enough evidence to block the move, meaning OpenAI can continue evolving into a business powerhouse. Musk isn’t giving up, though, so expect more legal drama to come.
Plenty happening in AI today—go ahead and dive in!
Top AI news
1. Elon Musk fails to block OpenAI’s for-profit transition
A US judge ruled against Elon Musk’s attempt to block OpenAI from transitioning to a for-profit model, stating there was insufficient evidence for an injunction. Musk argued that OpenAI violated antitrust laws and betrayed its original mission, but the court allowed OpenAI to proceed. The trial will continue, but OpenAI remains free to pursue its new business structure, citing financial sustainability as a key factor.
2. Google’s Gemini still avoids political questions, unlike rivals
Google continues to restrict Gemini’s political responses, even as OpenAI and Meta allow more nuanced discussions. Initially a precaution for elections, this policy now isolates Google in a market favoring transparency. Recent tests show Gemini refusing to answer factual political questions, sometimes with errors. Google claims to be working on fixes but remains vague on policy changes. Critics argue this approach limits access to information, while OpenAI promotes 'intellectual freedom.' As AI companies navigate political discourse, Google’s cautious stance may put it at a competitive disadvantage.
3. OpenAI invests $50M in AI research with top universities
OpenAI is investing $50 million in NextGenAI, a research initiative involving 15 top universities, including Harvard, MIT, and Oxford. The funding will support AI research through fellowships, computing resources, and API access. Each institution will focus on different AI applications, such as digital health, rare disease diagnosis, AI literacy, and text digitization. This initiative builds on OpenAI’s previous educational efforts, such as ChatGPT Edu, and aims to accelerate AI advancements while making cutting-edge technology more accessible to academic institutions.
Bonus. AI for ecological research
TaxaBind is a multimodal AI tool that combines six data types—ground-level images, geographic location, satellite imagery, text, audio, and environmental features—to improve ecological research. It excels in species classification, distribution mapping, and zero-shot classification, identifying species it has never seen before. TaxaBind also enables advanced cross-modal retrieval, linking ecological data with environmental insights. Its potential applications include deforestation monitoring and habitat mapping, making it a valuable tool for conservation efforts. Presented at WACV, TaxaBind could become a foundational model for future ecological AI applications.
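The cross-modal retrieval described above typically works by mapping every modality into one shared embedding space and ranking candidates by similarity. As a rough illustration (not TaxaBind's actual code; the vectors and names below are invented toy values), here is what nearest-neighbor retrieval across modalities looks like once per-modality encoders produce aligned embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_modal_retrieve(query: np.ndarray, gallery: dict) -> str:
    """Return the gallery item whose embedding is closest to the query."""
    return max(gallery, key=lambda name: cosine_similarity(query, gallery[name]))

# Toy shared embedding space (dimension 4). In a real system these vectors
# would come from per-modality encoders trained to align, e.g. an audio
# encoder and an image encoder producing comparable vectors.
audio_query = np.array([0.9, 0.1, 0.0, 0.1])  # e.g. a recorded bird call
image_gallery = {
    "sparrow":  np.array([0.8, 0.2, 0.1, 0.0]),
    "oak_tree": np.array([0.0, 0.1, 0.9, 0.3]),
    "river":    np.array([0.1, 0.0, 0.2, 0.9]),
}

print(cross_modal_retrieve(audio_query, image_gallery))  # sparrow
```

Zero-shot classification falls out of the same mechanism: embed text labels for species the model has never seen and pick the label closest to the query embedding.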
Tweet of the Day
aidanbench updates!!
>gpt-4.5 is #3 overall, #1 non-reasoner
>claude-3.7 and 3.7-thinking score below newsonnet
>updated o1 scores (ughg sorry explanation in thread)
for transparency, we've published aidanbench.com where you can audit all response data
— Aidan McLaughlin (@aidan_mclau)
11:32 PM • Mar 4, 2025
Theme of the Week
AI Voice interaction - Scientific paper review
Speech recognition models traditionally need to be fine-tuned on specific datasets to perform well in different conditions. This means a model trained on clear read speech (like the LibriSpeech benchmark) might stumble when faced with conversational audio, background noise, or different accents. Fine-tuning for each new dataset or environment is not only labor-intensive, but it can also lead to models that overfit to the peculiarities of their training set and fail to generalize. An ideal speech recognizer would work out of the box on a wide range of real-world audio without additional training. Recent research has begun to address this by combining multiple speech datasets to train more general models.
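The standard way to measure how well a recognizer generalizes across these conditions is word error rate (WER): the word-level edit distance between the model's transcript and a reference, divided by the reference length. A minimal, self-contained sketch of the metric (the example sentences are illustrative, not from any benchmark):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Reporting WER on several held-out datasets at once (read speech, conversational audio, accented speech) is how the "out of the box" robustness discussed above is quantified.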
Stay Connected
Feel free to contact us with any feedback or suggestions—we’d love to hear from you!
