I’ve been testing a range of AI video tools recently, mostly to see how well they handle existing footage — and that is exactly where most models still struggle.
This demo is a hands-on look at Luma AI’s Ray 3 Modify, which is focused on modifying and transforming existing video rather than generating everything from scratch.
Over a few days of testing, three things stood out:
Performance adherence: the original motion and timing stay intact
Character reference:" swapping characters without changing the underlying performance
Consistency: preserving continuity across shots while changing the look
In one test, I uploaded a video of myself and used character reference to transform the subject into the Hulk. The result kept the original gestures, timing, and movement, which made it easy to see exactly what the model was preserving versus changing.
I also tested Ray 3 Modify on an existing film scene by inserting myself as a character reference into Pulp Fiction. The model retained the original performance and audio while replacing the character, making it a useful way to evaluate how well different faces and identities translate onto the same motion.
Finally, I brought in a Tesla ad and changed the environment to winter. Ray 3 Modify preserved shot continuity and motion while altering the visual setting, which points to how this type of tool could fit into real editing or post-production workflows.
Check out the YouTube video here.
🤖 If you want to stay ahead of the curve on all things AI, make sure to join our community across all platforms:
📩 Get the Weekly Newsletter: https://thisweekinai.ai/
📺 Subscribe on YouTube: www.youtube.com/@ThisWeekinAIPodcast
📸 Instagram: www.instagram.com/thisweekinaipodcast
📱 TikTok: www.tiktok.com/@thisweekinaipodcast