Sam Altman responds to CRED’s Kunal Shah, creates video for him using new tool

Sam Altman answers Kunal Shah’s prompt for an AI video.

OpenAI, the company behind ChatGPT, recently introduced Sora, its first artificial intelligence-powered text-to-video generation model. The company claims that it can produce videos up to 60 seconds long. The development was also shared by the company’s CEO Sam Altman, who took to X (formerly Twitter) and shared a short clip made by Sora.

Mr. Altman told his followers on the platform, “Reply with captions for videos you want to see and we’ll start making something!” He said that they should “not shy away from detail or difficulty!” Responding to the same, CRED founder Kunal Shah said that he wanted to create an interesting video using animals and the ocean as elements. “A bicycle race on the ocean with various animals in the form of cycling athletes with a drone camera view,” he said.

A few hours later, the OpenAI chief responded to the post with a video. In the clip, whales, penguins and turtles can be seen riding colorful bicycles in the ocean.

Since being shared, the clip has received 4.5 million views and 30,000 likes on the platform.

“Interesting and powerful AI,” one user said.

Another said, “This is truly the most impressive video I have ever seen from a semantics and fidelity standpoint.”

A third said: “Such a powerful device and its magic has already spread all over the world.”

“No, the turtle can’t reach the paddle,” said one user.

Another person said, “It’s incredible how fast these AI technologies are advancing… and scary because we are not prepared for the disruptions they will soon cause.”

Meanwhile, according to OpenAI, Sora can create videos up to 60 seconds long that include highly detailed scenes, complex camera movements, and multiple characters with lifelike emotions.

OpenAI acknowledged the model's limitations on its website: "The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark." To ensure that AI tools are not used to create deepfakes or other harmful content, the company said it is building tools to help detect misleading content.