Nano Banana Pro, Home Robots, Nvidia’s Record Earnings, and Funding News
This Week in AI Newsletter 11/20/2025
Google released Nano Banana Pro with enhanced editing capabilities, higher resolution, better web search, and more accurate text rendering. Built on Gemini 3.0, the new model maintains more consistent character resemblance and will be available across all Gemini plans, with limited generations on free plans. The model is slower and more expensive than its predecessor but produces higher-quality images. More here.
OpenAI is making ChatGPT for Teachers free through 2027. This gives educators access to unlimited GPT-5.1, FERPA-grade privacy, and Canva, Drive, and Microsoft 365 integrations, along with teaching-specific features like lesson planning, examples, and standards. More here.
Figure’s 02 robot has worked in BMW’s Spartanburg factory for the past 11 months as part of a pilot program. The robot completed a variety of tasks assigned and prepared by human operators. Key results:
Contributed to building 30,000+ X3 vehicles, loading 90,000+ parts
Ran 10-hour shifts Monday through Friday
Walked 200+ miles
One Figure 02 robot logged six months of continuous daily runtime
Ramp released Ramp Sheets, an AI spreadsheet editor that helps teams plan, execute, and format sheets automatically. The product models budgets, forecasts revenue, and reconciles financial statements. More here.
Meta released Segment Anything Model 3 (SAM 3), an open-source image and video segmentation model. The model will be used to power Edits, Meta AI, and Facebook Marketplace. Early use cases include using SAM 3 to add effects to specific objects in videos and to create 3D models of any object in a still image. More here.
Nvidia posted a record $57B in quarterly revenue, driven largely by surging demand for its data center GPUs and the growing popularity of AI tools for both consumers and enterprises, like ChatGPT, Claude, and Google’s Gemini suite. With a bullish forecast of $65B for the following quarter, CEO Jensen Huang argued that talk of an “AI bubble” overlooks the reality that growth is still accelerating. More here & here.
Funding News:
Wispr Flow raised $25 million from Notable Capital after a $30 million round led by Menlo Ventures in June. The company saw 100x year-over-year growth in its user base with 70% retention. The company faces competition in the dictation space from YC-backed Willow and Every’s Monologue, and aims to expand its product into full voice-enabled workflow automation. More here.
Polydot raised an $8 million seed round. The startup uses AI to help people store, browse, research, and organize their files. The round was led by Felicis with participation from Figma Ventures, Bloomberg Beta, Y Combinator, and others. More here.
Sunday Robotics unveiled its robot, Memo, after 18 months of building in stealth. The company is backed by Sarah Guo at Conviction and Eric Vishria at Benchmark. Memo takes a notably different approach to domestic helper bot design than 1X’s NEO: NEO has softer, less ‘robotic’ features and is trained via teleoperation, while Memo learns from data collected with a ‘training glove.’ Sunday Robotics pays remote workers to do household chores using the $400 gloves, which resemble Memo’s hands, to gather training data for its AI model. The robot is designed for practical in-home support, focusing on everyday tasks such as cleaning and folding laundry. More here.
HUMAIN and xAI announced a landmark partnership to build next-generation GPU data centers in Saudi Arabia, including a flagship 500 MW+ facility that is set to become one of the world’s most advanced AI compute hubs. The partnership also includes nationwide deployment of xAI’s Grok models, accelerating the Kingdom’s vision of becoming the world’s most AI-enabled nation. More here.

It can definitely be OK, but it depends on what you're trying to do and on what "reality" is (i.e. what the most correct answer actually is). Adding variables that aren't needed won't help your model (particularly your estimates), but it also might not matter much (e.g. for prediction). However, removing variables that are real, even if they don't meet significance, can really mess up your model.
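To make that concrete, here is a minimal sketch (my own illustration; the variable names and numbers are invented, not from the question): an extra irrelevant regressor barely moves the coefficient of interest, while dropping a small, real effect that is correlated with a kept variable pulls that variable's coefficient away from the truth.

```python
# Hypothetical example: y depends on a main driver x1 and a small "holiday" bump.
# "junk" has no real effect; "holiday" is real but small and correlated with x1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
holiday = rng.binomial(1, 0.2, size=n)              # holiday indicator
x1 = rng.normal(size=n) + 2.0 * holiday             # main driver, elevated around holidays
junk = rng.normal(size=n)                           # truly irrelevant variable
y = 2.0 * x1 + 0.5 * holiday + rng.normal(size=n)   # true coefficient on x1 is 2.0

def ols(*cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(y, X).fit()

correct = ols(x1, holiday)         # true specification
overspec = ols(x1, holiday, junk)  # adds an unneeded term
underspec = ols(x1)                # drops the real holiday effect

print("x1 estimate, correct:        %.3f" % correct.params[1])
print("x1 estimate, overspecified:  %.3f" % overspec.params[1])   # essentially unchanged
print("x1 estimate, underspecified: %.3f" % underspec.params[1])  # pulled away from 2.0
```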
Here are a few rules of thumb:
Include the variable if it is of interest beforehand, or if you want a direct estimate of its effect. If your business collaborators say to put it in, put it in. If they're looking for estimates of the holiday effects, put it in (although there might be some debate as to whether you should look at each holiday individually).
Include the variable if you have some prior knowledge that it should be relevant. This can be misleading, because it invites confirmation bias, but I'd say in most cases it makes sense to do so. Particularly for holiday effects (I assume this is something like sales or energy consumption), these are well known and documented, and those small but not-statistically-significant effects are real.
In general practice (i.e. most real-world situations), it's better to have a slightly overspecified model than an underspecified one. This is particularly true for prediction, because an overspecified model still gives unbiased predictions of the response Y, whereas an underspecified one can be biased. This rule is very conditional, but the situations described in the other bullets that favor overspecification tend to be more common in practice, especially in the business/applied world. Note that by saying that, I bring it back to the second bullet point, emphasizing business experience.
If you want a model that generalizes to many cases, favor fewer variables. Overfitting can look good on your sample, but it tends to make your model work only for a narrow inference space (i.e. the one reflected by your sample).
If you need precise (low variance) estimates, use fewer variables; the sketch after this list illustrates how extra correlated regressors inflate standard errors without adding bias.
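A rough sketch of the last two bullets (again my own illustration with invented data): piling on regressors that are correlated with the predictor you care about leaves its estimate roughly unbiased but inflates its standard error, which is the overspecification trade-off in a nutshell.

```python
# Hypothetical example: ten extra regressors correlated with x1 but unrelated to y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
extras = 0.7 * x1[:, None] + rng.normal(scale=0.7, size=(n, 10))  # correlated, irrelevant
y = 2.0 * x1 + rng.normal(size=n)

lean = sm.OLS(y, sm.add_constant(x1)).fit()
bloated = sm.OLS(y, sm.add_constant(np.column_stack([x1, extras]))).fit()

print("lean model:    x1 = %.3f (se %.3f)" % (lean.params[1], lean.bse[1]))
print("bloated model: x1 = %.3f (se %.3f)" % (bloated.params[1], bloated.bse[1]))
# Both estimates center near 2.0, but the bloated model's standard error is noticeably larger.
```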
Just to re-emphasize: these are rules of thumb. There are plenty of exceptions. Judging by the limited information you've provided, you probably should include the non-significant "holiday" variable.
I've seen many saturated models (every term included) that perform extremely well. This isn't always the case, but it works because, in a lot of business problems, reality is a complex response (so you should expect a lot of variables to matter), and because including all of those variables doesn't introduce statistical bias. Less relevant to this question, but relevant to this answer: "big data" also captures the power of the law of large numbers and the central limit theorem.
Variable selection is a long and complicated topic. Look up the drawbacks of underspecification vs. overspecification, while remembering that the "right" model is the best - but unachievable - option. Decide whether your interest is in the mean or the variance. There's a lot of focus on variances, especially in teaching and academia...but in practice and in most business settings, most people are more interested in the mean! This goes back to why overspecification should probably be favored in most real-world cases.
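For the prediction-focused (mean) view, one last sketch (mine, with invented data): with a reasonably large sample, a saturated model that throws in every candidate term predicts out-of-sample nearly as well as the true model, while the underspecified model that drops real-but-modest effects is the one that pays.

```python
# Hypothetical example: three real drivers (two of them modest) plus twenty junk features.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
X_real = rng.normal(size=(n, 3))                     # real drivers
X_junk = rng.normal(size=(n, 20))                    # irrelevant candidate features
y = X_real @ np.array([2.0, 0.5, 0.3]) + rng.normal(size=n)

train, test = slice(0, 4000), slice(4000, None)
X_all = np.column_stack([X_real, X_junk])

def oos_rmse(X):
    fit = sm.OLS(y[train], sm.add_constant(X[train])).fit()
    pred = fit.predict(sm.add_constant(X[test]))
    return np.sqrt(np.mean((y[test] - pred) ** 2))

print("underspecified (main driver only): %.3f" % oos_rmse(X_real[:, :1]))
print("true model (three real drivers):   %.3f" % oos_rmse(X_real))
print("saturated (all 23 candidates):     %.3f" % oos_rmse(X_all))
```

The gap would be larger still if the dropped variables were correlated with the kept one, since then the underspecified model's predictions become biased rather than just noisier.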