
AI models won’t be siloed in the future.

Google demos its new Lumiere text-to-video engine. The results are a huge leap forward from previous engines.
Google’s new video generation AI model Lumiere uses a diffusion model called Space-Time U-Net, or STUNet, that figures out both where things are in a video (space) and how they move and change over time. Ars Technica reports that this method lets Lumiere create the video in a single process rather than stitching smaller still frames together.
Lumiere starts by creating a base frame from the prompt. It then uses the STUNet framework to approximate where objects within that frame will move, generating additional frames that flow into one another for the appearance of seamless motion. Lumiere also generates 80 frames per clip, compared to 25 frames from Stable Video Diffusion.
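The single-pass idea described above can be illustrated with a toy sketch. This is purely a conceptual stand-in, not Google's implementation: it fakes the "base frame plus motion" step by sliding a patch across 80 frames derived from one starting image, rather than generating each frame independently and stitching them together.

```python
import numpy as np

def generate_base_frame(height=64, width=64):
    # Stand-in for the text-to-image step: a blank frame
    # with a bright "object" patch at the left edge.
    frame = np.zeros((height, width), dtype=np.float32)
    frame[28:36, 0:8] = 1.0
    return frame

def generate_clip(base_frame, num_frames=80):
    # Toy stand-in for STUNet's space-time generation: instead of
    # producing frames separately, every frame is derived from the
    # same base frame by moving the object along a smooth trajectory,
    # so the whole clip comes out of one pass.
    height, width = base_frame.shape
    clip = np.zeros((num_frames, height, width), dtype=np.float32)
    for t in range(num_frames):
        shift = int(t / (num_frames - 1) * (width - 8))
        clip[t] = np.roll(base_frame, shift, axis=1)
    return clip

clip = generate_clip(generate_base_frame())
print(clip.shape)  # (80, 64, 64): one clip of 80 coherent frames
```

Because each frame is a small perturbation of its neighbors, the motion appears continuous, which is the intuition behind generating space and time jointly instead of frame by frame.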
Beyond text-to-video generation, Lumiere also supports image-to-video generation; stylized generation, which lets users make videos in a specific style; cinemagraphs, which animate only a portion of a video; and inpainting, which masks out an area of the video to change its color or pattern.
Google’s Lumiere paper, th