Turning a VIDEO into 3D using LUMA AI and BLENDER!
TLDR: In this video, the creator explores the innovative Luma AI technology, which enables 3D modeling from video footage. Despite time constraints and challenging conditions, the AI successfully converts various scenes, including a payphone and a car, into detailed 3D models. The results, particularly from the DSLR footage, are impressive, showcasing the potential of this technology for future applications in 3D rendering and animation.
Takeaways
- 🌐 Luma AI enables 3D modeling from video footage, not just photos.
- ⏱️ The process needs to be completed quickly before dark, highlighting a time-sensitive challenge.
- 🎥 The video demonstrates the use of Luma AI on different objects, including a payphone and a car.
- 📸 The quality of the 3D models depends on the camera used, with DSLR footage providing sharper results.
- 🖼️ The AI separates the scene from the object automatically, showing impressive accuracy.
- 🚀 The technology is in its early stages, suggesting potential for significant future improvements.
- 📹 The video footage used for the 3D model took only 1 minute and 42 seconds to record.
- 🛠️ Post-processing, such as cleaning up reflective surfaces and adjusting for darkness, is necessary for better model quality.
- 🔄 The script mentions plans to use the created 3D assets in a short video to test their performance in 3D software.
- 📦 Luma AI provides a glTF model and an Unreal Engine plugin for further use and development.
- 🔧 The process of creating 3D models from video is simplified, allowing for quick and easy integration into various applications.
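The glTF export mentioned above is an open, JSON-based format, so its top-level structure can be inspected with nothing but the standard library. The document below is a hand-written minimal glTF 2.0 asset for illustration, not actual Luma AI output:

```python
import json

# A minimal, hand-written glTF 2.0 document (illustrative, not Luma output).
gltf_text = json.dumps({
    "asset": {"version": "2.0", "generator": "example"},
    "scenes": [{"nodes": [0]}],
    "nodes": [{"mesh": 0, "name": "payphone"}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
})

def summarize_gltf(text: str) -> dict:
    """Return basic stats about a .gltf file: spec version, mesh/node counts."""
    doc = json.loads(text)
    return {
        "version": doc["asset"]["version"],
        "meshes": len(doc.get("meshes", [])),
        "nodes": len(doc.get("nodes", [])),
    }

print(summarize_gltf(gltf_text))
# → {'version': '2.0', 'meshes': 1, 'nodes': 1}
```

Binary mesh data lives in separate buffers referenced from the JSON, which is why tools like Blender and the Unreal Engine plugin can consume the same file directly.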
Q & A
What is the main topic of the video?
-The main topic of the video is the demonstration of Luma AI's ability to convert video footage into 3D models using photogrammetry.
What is the significance of the Luma AI technology?
-Luma AI technology allows users to create 3D models from video footage, which is faster and more convenient than traditional photo-based 3D capture methods.
How long did it take to process the 3D mesh in the video?
-Each 3D mesh took about 20 to 30 minutes to be processed.
What was the issue with the DSLR footage initially?
-The initial issue with the DSLR footage was that it was failing to upload. The problem was resolved by running the footage through DaVinci Resolve, re-encoding it with H.265, and re-uploading it.
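The creator used DaVinci Resolve for the H.265 re-encode; for readers without Resolve, a comparable re-encode can be sketched with ffmpeg. The filename and CRF value below are illustrative assumptions, not the creator's actual settings:

```python
import os
import shutil
import subprocess

def h265_reencode_cmd(src: str, dst: str, crf: int = 23) -> list[str]:
    """Build an ffmpeg command that re-encodes a clip with H.265 (HEVC)."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",   # H.265 / HEVC encoder
        "-crf", str(crf),    # constant-quality mode; lower = higher quality
        "-tag:v", "hvc1",    # improves compatibility with players/uploaders
        dst,
    ]

cmd = h265_reencode_cmd("payphone_dslr.mp4", "payphone_h265.mp4")
print(" ".join(cmd))

# Only run the encode if ffmpeg is installed and the source clip exists.
if shutil.which("ffmpeg") and os.path.exists("payphone_dslr.mp4"):
    subprocess.run(cmd, check=True)
```

Keeping the CRF modest preserves the fine detail photogrammetry relies on while still shrinking the upload.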
What did the AI do with the video scene?
-The AI automatically separated the scene from the object, creating a detailed 3D model.
What software was used to refine the 3D model?
-Blender was used to refine the 3D model by smoothing out sharp edges.
How long did it take to record the video footage that was used to create the 3D model?
-It took only one minute and 42 seconds to record the video footage.
What was the quality difference between the iPhone and DSLR results?
-The DSLR footage produced sharper results, thanks to its higher image resolution and closer proximity to the subject. The iPhone footage was captured following the website's instructions: three loops around the subject at high, mid, and low angles.
What was the challenge with the car footage?
-The car had a highly reflective paint job, and the footage was shot in low light, which made it difficult to capture a clean 3D model.
What is the plan for the 3D models created in the video?
-The plan is to use these 3D models to create a quick and short video to demonstrate how these assets perform when used in 3D software for background purposes.
Outlines
🎥 Introducing Luma AI Video to 3D Photogrammetry
The script describes the excitement of discovering Luma AI's ability to convert video into 3D models. The narrator rushes to capture a video before sunset and discusses the process of uploading the footage to Luma AI's website. The AI's capability to separate the scene from the object is highlighted, and the user's experience with the software is shared, including the time it took to process the 3D mesh and the quality of the resulting models. The script also mentions the potential for future improvements in the technology.
Keywords
💡3D model
💡Luma AI
💡Photogrammetry
💡Video to photogrammetry
💡DaVinci
💡GLTF model
💡Unreal Engine
💡Blender
💡Payphone
💡Reflective surfaces
💡Background purposes
Highlights
Luma AI's video-to-photogrammetry feature allows 3D capture from videos instead of photos.
The process needs to be completed before dark, indicating a time constraint.
The Luma AI technology was tested with a video of a payphone from different angles.
The video was captured quickly, in just a couple of minutes.
Luma AI's website was used to upload the video clips for processing.
The DSLR footage initially failed to upload, so it was re-encoded with H.265 and re-uploaded.
Each 3D mesh took about 20 to 30 minutes to be processed.
The AI automatically separates the scene from the object, resulting in a detailed 3D model.
A glTF model and an Unreal Engine plugin are available for the 3D outputs.
The 3D model was refined in Blender to remove sharp edges.
The video footage was compared with the 3D model output, showing the potential of the technology.
The iPhone and Sony DSLR results were compared, with the DSLR having sharper quality.
The car model, despite challenges like reflective paint and darkness, yielded impressive results.
The assets created will be used to create a short video to test their performance in 3D software.
The technology is expected to improve, offering better quality in the future.