Google’s AI Advantage, AI’s Economic Boost, and Trump’s Shift on State-Level AI Regulation
This Week in AI Newsletter 11/24/2025
Google has established itself as a leader in the AI race and is positioning itself for long-term dominance. Its products, including YouTube, Search, Gmail, and Maps, reach over 5 billion people, giving it unmatched data advantages. It designs its own chips and holds roughly $100 billion in cash, compared with the ~$60 billion OpenAI has raised in total. Last week's release of Gemini 3.0 further solidifies Google's lead, helping expand a market share that has already grown 8% in the past 12 months. More here and here.
Momentic, maker of software testing and verification tools, raised a $15 million Series A led by Standard Capital, following its $3.7 million seed round earlier this year. With the rise of AI-assisted development, demand for testing products will continue to increase. More here.
Are AI companies insurable? Massive payouts are making it increasingly difficult for insurers to accurately assess and manage risk. Major insurers are now seeking to exclude broad AI-related liabilities, arguing that AI systems are “too much of a black box” to reliably underwrite. Recent examples include a $110 million lawsuit filed against Google after its AI falsely accused a company of legal trouble, and Air Canada being forced to honor a discount its chatbot wrongly promised. More here.
AI giants such as Amazon, Alphabet, Meta, and Oracle have issued a wave of new bonds as they race to fund massive data-center and chip investments. The surge in borrowing is pressuring credit markets and adding new worries for investors already uneasy about soaring AI-driven stock valuations. More here.
Attackers used Anthropic’s Claude system to execute most stages of a major cyber-espionage operation, bypassing its safety guardrails through social-engineering prompts that disguised malicious tasks as legitimate security work. This marks a turning point in cyber threats, as AI systems are now being manipulated into carrying out complex intrusions with minimal human oversight. More here.
Meta is reducing its risk on the new AI data center by putting the project into a joint venture where outside investors take on most of the financial burden. The deal is structured so Meta rents the facility through short-term leases, keeping the $27 billion project and its debt off its balance sheet. More here.
AI is driving most of the U.S. economy's recent growth as companies pour money into data centers and new technology, and that reliance on AI and automation is likely to continue. As Elon Musk said to Joe Rogan, "we're basically going bankrupt without AI and robotics," because boosting GDP now depends on massively increasing output through automated manufacturing. More here.
OpenAI got into trouble after updates made ChatGPT act like an overly supportive friend, which led some vulnerable users into serious mental health crises and even lawsuits. Although the company enhanced the chatbot’s safety, the changes might reduce its appeal amid rising competition. More here.
The Trump administration is stepping back from a plan to challenge state AI laws after earlier pushing for one national standard. The pause suggests growing political resistance and uncertainty over how AI should be regulated in the U.S. More here.
Ex-MrBeast content strategist Jay Neo is building Palo, an AI platform that helps creators understand what’s working in their short-form content. The tool analyzes performance, surfaces winning formats, and generates new, high-impact ideas to keep creators ahead of the curve. Palo recently raised $3.8 million from Peak XV’s Surge fund, NFX, and several angel investors. More here.

In the first semester of law school, we learn that when motor vehicles were invented, there were no laws regulating them. So what happened? We applied horse law to motor vehicles, and it actually worked for a while as the motor vehicle industry matured. Over time, we adopted laws specific to motor vehicles, and now we have robust regulations around the industry. While some state regulators may feel an urgency to start regulating AI, I think simply applying existing laws to AI will serve us fine as the industry matures (albeit at a much faster rate than cars).
Among lawyers, there is much discussion about whether we should change the rules of professional conduct (a/k/a lawyer ethics), which vary from state to state, to address how lawyers should and shouldn't use AI. After two to three years of debate, the prevailing view in the U.S. legal industry is that our existing ethical rules already apply to AI, and there is no need to rush new regulation (at least for now).