The Hype

OpenAI dropped a bomb on us with Sora, a new video-generating model that's blowing minds. The demos show off strikingly realistic videos, like a shaky train ride through the suburbs that looks like it was shot on a phone, and a spaceman movie scene that could have come straight out of a Christopher Nolan film.

The Mystery

But here’s the big question: how did OpenAI pull this off? Well, they’re not telling. They’ve been tight-lipped about the training data they used. But to make something this good, Sora must have been fed a ton of videos.

The Suspicions

And that's where things get murky. The suspicion is that OpenAI trained Sora on a massive trove of videos scraped from the internet, copyrighted material included. OpenAI isn't saying anything, which only makes folks more suspicious.

The Legal Mess

Training AI models on copyrighted material without permission is legally fraught territory, and the courts are only starting to sort it out. And guess what? The New York Times is already suing OpenAI over the use of its articles to train models.

The Impact on Creatives

Artists and creatives are freaking out, worried that Sora and other video-generating models will take their jobs. And they have reason to be concerned: OpenAI itself has acknowledged that Sora could be misused for deepfakes and spreading misinformation.

The Next Steps

OpenAI says it's working with policymakers, artists, and other creatives to figure out how to use this technology for good. But that does little to address the damage that may already have been done.

The Bottom Line

Sora is a game-changer, but it also raises serious questions about ethics and the future of creative work. Until OpenAI comes clean about where its training data came from, the mystery will continue to haunt the industry.