Figure splits with OpenAI!
ALSO: What is the state of AI regulations around the world?


Hey Synapticians,
The rapid advancement of robotics technology is reshaping our world in unprecedented ways. As noted futurist Tim Urban warns: "Humanoid robots and drones will both be absolutely everywhere 10-20 years from now. Delivery drones flying overhead, robot janitors, baristas, grocery store employees guiding you to the right aisle, and (eventually) housekeepers—will all seem as normal as smartphones do today."
In a development that's making waves across the robotics industry, Figure has announced it is parting ways with OpenAI. The company's CEO says Figure's in-house AI has matured enough for it to pursue its vision independently, a pivotal moment for robotics companies. The split between two major players in AI and robotics signals a new chapter in the industry's development.

We're incredibly excited about these developments in the robotics world, but for now, let's dive into today's news! 😀
Top AI News
1. Figure Abandons OpenAI to Focus on In-House Models
Figure, a startup specializing in humanoid robots, has announced it will no longer use OpenAI's models for its artificial intelligence systems. Instead, the company plans to develop proprietary in-house models so it has full control over the technology and can better tailor it to its specific requirements. The move comes after months of collaboration with OpenAI, during which Figure assessed the advantages and limitations of existing models. By opting for internal solutions, Figure aims to improve the performance of its robots and accelerate the development of new features.
2. State of AI Regulations Around the World
AI regulation varies widely around the globe, with nations taking distinct approaches. The Global Partnership on Artificial Intelligence (GPAI), comprising over 40 countries, promotes responsible AI use and will outline its 2025 action plan this Sunday. Meanwhile, the Council of Europe adopted the first binding international AI treaty in May last year, but global participation remains uneven: only seven of the 193 UN member states take part in all of the major AI governance initiatives, while 119, mostly in the Global South, are involved in none.
UK: A "pro-innovation" approach avoids strict AI-specific laws, relying on existing regulations and voluntary guidelines.
India: AI is regulated through broader laws on privacy, defamation, and cybercrime rather than AI-specific policies.
EU: The global leader in AI regulation, with the 2024 AI Act setting strict rules for high-risk AI while banning predictive policing and biometric profiling.
US: AI regulation has weakened as President Trump rescinded a 2023 executive order requiring safety evaluations, leaving oversight to states and industry.
China: Developing a Generative AI Law mandating compliance with socialist values, content labeling, and protection of personal and business interests.
3. Hugging Face's Open-Source Reproduction of OpenAI Deep Research in 24 Hours
Hugging Face announced Open Deep-Research, an open-source agent designed to autonomously navigate the web, summarize content, and answer questions based on these summaries. Developed in just 24 hours, the agent can search, scroll, extract information, download and manipulate files, and perform data calculations. Initial tests on the GAIA benchmark show a 55% accuracy on the validation set, making it the leading open-source solution currently available. Hugging Face invites the community to contribute to enhancing this agent, emphasizing the significance of agent frameworks in expanding the capabilities of current language models.
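For readers curious about how such an agent is wired up, here's a minimal sketch using smolagents, the Hugging Face agent library that Open Deep-Research builds on. The class names (CodeAgent, DuckDuckGoSearchTool, HfApiModel) follow the library's public quickstart at the time of writing and are assumptions here, not the exact Open Deep-Research code, which adds dedicated browsing, file-download, and text-inspection tools on top.

```python
# Minimal sketch of a web-research agent with Hugging Face's smolagents library.
# Assumes `pip install smolagents` plus its search extras; class names follow the
# public quickstart and may differ from the released Open Deep-Research agent.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# The LLM that writes and executes the agent's reasoning steps as Python code.
model = HfApiModel()  # defaults to a hosted instruct model on the HF Inference API

# Give the agent a web-search tool; Open Deep-Research layers browsing,
# scrolling, and file-manipulation tools on top of a loop like this one.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

# The agent searches the web, reads the results, and composes a final answer.
answer = agent.run("Summarize the key idea behind the GAIA benchmark.")
print(answer)
```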
Bonus. Anthropic Prohibits AI Use in Job Application Materials
Anthropic, a leading AI company, has announced that job applicants should not use AI tools like large language models (LLMs) when writing their cover letters or answering the question, "Why do you want to work here?" The goal? To assess candidates' genuine communication skills and personal motivations—without AI assistance. While the company embraces AI for workplace efficiency, it insists that applications reflect authentic human abilities. Coming from a company building advanced AI models, this stance feels a bit ironic—but it highlights the ongoing debate about AI's role in creativity and communication.
Tweet of the Day
The first feedback on OpenAI's Deep Research is starting to come in, and it's quite positive. For instance, see the reaction below from a well-known AI researcher.

Theme of the Week
AI-Generated Music - AI-venger of the Week
David Cope developed EMI, an AI that analyzes classical compositions to create original works in the styles of masters like Bach and Beethoven. His innovation challenges our understanding of creativity, as EMI’s pieces have fooled even experts. The controversy surrounding his work raises questions about authorship and AI’s role in artistic expression. Cope’s research laid the foundation for modern AI music tools like MuseNet (OpenAI) and Magenta (Google).
Stay Connected
Feel free to contact us with any feedback or suggestions—we'd love to hear from you!
