The Greatest AI Video EVER?! (Available Now!)

Matt Wolfe
30 Jun 2024 · 20:00

TLDR: The video showcases the capabilities of Runway's Gen 3, an AI video generation tool, focusing on its strengths in creating mesmerizing abstract visuals and its limitations with human figures. The creator tests various prompts, revealing impressive results for abstract concepts and time-lapse shots, but inconsistencies when depicting people, particularly hands. The video also discusses the tool's current availability and its expected public release timeline.


  • 🎉 Gen 3 from Runway is a new AI video tool that has generated impressive and mesmerizing videos.
  • 👀 The tool is not yet publicly available but is being tested through Runway's Creative Partner Program.
  • 🔍 The video demonstrates the capabilities of Gen 3, showing both its strengths in abstract concepts and areas where it struggles, such as with human figures and hands.
  • 🕒 A demonstration of the generation process shows that creating a 10-second video can be relatively quick.
  • 🎨 Gen 3 excels at creating abstract visuals with unique color palettes and trippy, kaleidoscope effects.
  • 🐺 It can generate more concrete prompts like 'a wolf howling at the moon' with good results for b-roll usage.
  • 🚗 Time-lapse and fast-paced action shots, such as cars on a freeway or a nighttime race in Tokyo, are rendered coherently.
  • 🤷‍♂️ Gen 3 struggles with human hands and complex human actions, often resulting in glitches or disappearances.
  • 🎭 Text generation within videos is hit or miss, with shorter words like 'Runway' appearing better than longer phrases.
  • 🚫 The tool has restrictions against generating content with certain IPs like Mario or Sonic, and it can't create videos with real celebrities.
  • 🔮 Using AI tools like ChatGPT or Claude to help generate prompts is suggested for more focused and imaginative results.

Q & A

  • What is the name of the AI video tool discussed in the transcript?

    -The AI video tool discussed in the transcript is called Gen 3 from Runway.

  • Who created the first video example shown in the transcript?

    -The first video example was created by Bavo Sidu.

  • What is the purpose of Runway's 'Creative Partner Program'?

    -The 'Creative Partner Program' allows creators to get early access to tools like Gen 3, play around with them, test, and offer feedback before the general public gets access.

  • How long does it typically take for the general public to gain access to Runway tools after the creative partners?

    -Typically, it takes a week or two after the creative partners get access for the general public to gain access to the tools.

  • What is the time taken to generate a video using Gen 3 as demonstrated in the transcript?

    -In the transcript, it took approximately the length of the song playing in the background to generate a 10-second video using Gen 3.

  • What type of content does Gen 3 excel at generating according to the transcript?

    -Gen 3 excels at generating abstract concepts and time-lapse type shots, as well as certain go-to prompts like a wolf howling at the moon.

  • What are some of the issues Gen 3 has when generating videos with people?

    -Gen 3 struggles with generating videos with people, especially when their hands are visible or in motion, often resulting in hands disappearing, blurring, or other inconsistencies.

  • What is the transcript's author's opinion on the abstract videos generated by Gen 3?

    -The author really likes the abstract videos generated by Gen 3, appreciating the color palettes that Runway seems to pick.

  • What issues did the author encounter when trying to generate text within the videos using Gen 3?

    -The author encountered issues with generating text, where longer pieces of text were not rendered well or resulted in errors, while shorter words like 'Runway' appeared to work without problems.

  • What suggestion did the author receive from Smokeaway to improve video generation prompts?

    -Smokeaway suggested using AI tools like ChatGPT or Claude to help generate detailed and creative descriptions for text-to-video prompts, keeping responses focused and imaginative under 500 characters.
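As a rough illustration of that tip, the sketch below shows how a prompt-drafting workflow might enforce the 500-character budget. The system instruction wording and the `cap_prompt` helper are hypothetical, and the actual LLM call is left out so the snippet runs offline:

```python
# Hypothetical sketch of Smokeaway's tip: have an LLM (ChatGPT, Claude)
# draft text-to-video prompts, then enforce the 500-character budget.
# The instruction wording and cap_prompt helper are illustrative only;
# the model call itself is omitted so this stays runnable offline.

MAX_PROMPT_CHARS = 500  # "focused and imaginative, under 500 characters"

SYSTEM_INSTRUCTION = (
    "You write detailed, creative descriptions for text-to-video prompts. "
    f"Keep each response focused and imaginative, under {MAX_PROMPT_CHARS} characters."
)

def cap_prompt(draft: str) -> str:
    """Trim an over-long LLM draft to the character budget at a word boundary."""
    if len(draft) <= MAX_PROMPT_CHARS:
        return draft
    # Cut to the budget, then drop the (possibly truncated) final word.
    return draft[:MAX_PROMPT_CHARS].rsplit(" ", 1)[0]
```

Whichever model produces the draft, capping it this way keeps the prompt inside the limit the tip recommends before pasting it into Runway.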



🎨 Exploring Runway Gen 3's AI Video Tool

The video script introduces Runway Gen 3, an AI video tool that has generated impressive and mesmerizing videos. The narrator discusses the creative potential of Gen 3, showcasing examples from other creators like Bavo Sidu and Nicholas Newbert. It highlights the tool's current limited availability through Runway's Creative Partner Program and the anticipation of its public release. The narrator also shares their firsthand experience with Gen 3, generating a test video of a humanoid robot dancing in a nightclub and discussing the time it takes to create such content.


🤖 Reviewing Gen 3's AI Video Generation Capabilities

This paragraph delves into the narrator's experience with Gen 3, discussing its strengths and weaknesses. The tool excels at creating abstract and time-lapse videos, such as a first-person view of flying through a colorful cosmos and a time-lapse of cars on a freeway. However, it struggles with generating videos involving people, particularly when their hands are visible. The narrator also mentions the tool's ability to create consistent shadows and the challenges of using AI for specific prompts like a monkey on roller skates or a cat eating a taco.


🎭 Challenges with Human Elements and Text in Gen 3 Videos

The narrator identifies issues with Gen 3 when it comes to rendering human elements, such as hands, which often appear distorted or disappear. Examples include a rapper on stage and a man entering a virtual reality world. Additionally, the tool has difficulty with text generation, either failing to produce the desired text or creating errors. The narrator also touches on the inability to generate content featuring celebrities or intellectual property, as it violates Runway's terms.


🚀 Gen 3's Potential and the Future of AI Video Tools

In the final paragraph, the narrator reflects on the potential of Gen 3 and other AI video tools, acknowledging the hit-and-miss nature of the results and the need for multiple attempts to achieve satisfactory output. They compare the current state of AI video generation to the progress made in the past year and express excitement for the upcoming release of Gen 3 to the public. The narrator also shares their enthusiasm for exploring new AI tools and invites viewers to subscribe for more content on the subject.



💡AI Video Tools

AI video tools refer to software applications that use artificial intelligence to assist in the creation or editing of video content. In the context of the video, these tools are highlighted as the latest advancements in video generation technology, with 'Gen 3 from Runway' being a prime example. The script mentions how these tools are capable of producing mesmerizing and cool-looking videos, showcasing the evolution and capabilities of AI in the field of video production.


💡Runway

Runway is the name of the company responsible for the AI video tool 'Gen 3' discussed in the video. It signifies the platform where this technology is being developed and made available to users. The script describes Runway's 'Creative Partner Program,' which allows selected creators early access to test and provide feedback on new tools like Gen 3, indicating the company's approach to product development and user engagement.

💡Abstract Art

Abstract Art is a form of art that does not depict external reality but instead emphasizes the non-representational qualities of color, form, and composition. In the video script, the term describes the creative and impressive videos generated by Gen 3, which play with light and abstract visuals to produce unique and engaging content, as seen in the example provided by the video creator Bavo Sidu.

💡First-Person Shooter

First-Person Shooter (FPS) is a genre of video games that emphasizes a first-person perspective for the player, often involving gunplay and combat. The script references an AI-generated video simulating a first-person shooter experience, highlighting the tool's ability to create immersive and realistic scenarios that could be used in gaming or other media.


💡B-roll

B-roll refers to supplementary footage that is edited into a video production to provide visual context and variety. In the script, the creator discusses the potential use of AI-generated videos as b-roll for various topics, such as space or the cosmos, indicating the practical applications of the generated content in enhancing storytelling and visual appeal.


💡Time-lapse

Time-lapse is a photographic technique that captures an event at a slower frame rate than it unfolds in real time, and then plays it back at a faster rate, creating a sense of accelerated motion. The script mentions time-lapse shots of cars on a freeway and the northern lights, demonstrating the AI tool's capability to generate dynamic and visually compelling sequences that simulate the passage of time.


💡Cinematic

Cinematic refers to the style or quality of a movie, often characterized by high production values, visual storytelling, and emotional impact. The script uses the term to describe the quality of certain AI-generated videos, such as the Fast and Furious-like shot of a nighttime rainy car race, suggesting that the tool can produce content with a high degree of visual sophistication.


💡Cherry-picking

Cherry-picking is the act of selectively choosing only the best or most favorable examples while ignoring others. In the context of the video, the term is used to address the question of whether the showcased AI-generated videos are the best examples or if they represent the typical output of the tool, implying that not all results may be of the same high standard.


💡Morphing

Morphing is a visual effect used in video production where one image or object gradually changes its shape to become another. The script mentions instances where the AI-generated videos exhibit morphing effects, particularly with hands and objects, which can sometimes result in unrealistic or 'funky' visuals, indicating a limitation of the technology in accurately rendering certain motions or transformations.

💡Text Generation

Text generation refers to the process of creating textual content using AI algorithms. The script discusses the AI tool's ability to incorporate text into videos, such as spelling out words like 'Runway' in various contexts. However, it also points out the inconsistencies and errors that can occur, such as generating incorrect or incomplete text, highlighting the challenges in integrating text seamlessly within video content.


Introduction of Runway's Gen 3, an AI video tool generating mesmerizing and cool-looking videos.

Creative AI video examples from users showcasing the potential of Gen 3.

Runway's Gen 3 is not publicly available yet, with early access through the Creative Partner Program.

The video demonstrates the generation process, showcasing real-time video creation.

Gen 3 excels in creating abstract concept videos with impressive color palettes.

Examples of successful AI-generated videos for b-roll, including a wolf howling at the moon.

The AI struggles with generating videos involving human hands, resulting in inconsistencies.

Successful generation of time-lapse and cinematic shots, such as a nighttime rainy car race in Tokyo.

The video creator's experience with generating abstract videos and their artistic appeal.

Issues with generating videos with text, where longer text prompts often fail.

A tip for using AI tools like ChatGPT or Claude to help generate better video prompts.

Comparison of Gen 3's capabilities with other AI video generators like Luma or Pika.

The necessity to prompt AI video generators multiple times to achieve desired results.

Reflection on the progress of AI video generation from the previous year to now.

Anticipation for the public release of Runway Gen 3, expected within a few weeks.

The video creator's excitement for the wave of new AI tools and their potential applications.

A call to action for viewers interested in AI tools to subscribe for more content.