Moltbook's Security Nightmare, SpaceX Acquires xAI in $1.25 Trillion Merger, and OpenAI Launches Codex Desktop App
This Week in AI Newsletter: 2/03/2026
Elon Musk is merging SpaceX with xAI to create a $1.25 trillion company, the most valuable private entity on Earth, with an IPO on the horizon and plans for space-based AI data centers. More here.
OpenAI unveiled the Codex desktop app for macOS, a "command center" for managing multiple AI coding agents with parallel task execution, git worktrees, and a new Skills system for extending Codex beyond coding. More here.
Snowflake signed a $200 million multi-year deal with OpenAI, giving its 12,600 customers access to OpenAI models. This follows a similar $200M Anthropic deal in December. More here.
Security researchers at Wiz discovered a massive vulnerability in viral AI social network Moltbook: a misconfigured database exposing 1.5 million API tokens, along with proof that the "AI-only" platform was mostly humans running bot armies. More here.
Higgsfield AI launched Vibe-Motion, the first AI motion design tool with real-time control. Powered by Anthropic's Claude, it creates motion graphics from a single prompt with live parameter editing. More here.
OpenAI is reportedly unsatisfied with Nvidia's inference chips and has been exploring alternatives, including AMD, Cerebras, and Groq, since last year, potentially complicating the two companies' relationship. More here.
Moltbook has divided Silicon Valley. Elon Musk calls it the "early stages of singularity," while critics note that anyone can post while pretending to be an AI agent. More here.
The second annual International AI Safety Report warns that deepfakes are "harder to distinguish from real content," with 77% of study participants misidentifying ChatGPT text as human-written. More here.
X's safety teams "repeatedly warned management" about Grok's deepfake tools, which generated an estimated 1.8 million sexualized images that existing content moderation couldn't detect. More here.
Anthropic researchers found that as AI models tackle harder tasks and reason longer, their failures become dominated by incoherence rather than systematic misalignment. Future AI failures may look more like "industrial accidents." More here.