100% WORKED!!! Step-by-Step Guide: Install ComfyUi, Controlnet & Models | Beginner to Expert

18 Jun 2023 · 51:05

TLDR: This tutorial provides a step-by-step guide to installing ComfyUI, ControlNet, and various models, ensuring a working setup from beginner to expert level. The script details downloading specific versions from GitHub, installing necessary packages, and configuring the system for optimal performance. It also explains the process of using different models like Realistic Vision, ReV Animated, and ControlNet processors for generating art and images, showcasing the power of these tools in creating unique visuals.


  • 😀 The video provides a step-by-step guide to install ComfyUI, ControlNet, and various models, targeting users from beginner to expert levels.
  • 📝 The specific version of ComfyUI confirmed to work is 0.4_06.2023, which should be downloaded from the ComfyUI GitHub page.
  • 🔍 Installation of ControlNet involves using a command prompt, cloning the repository from GitHub, and installing necessary files and dependencies.
  • 🔧 After installation, it's important to check for errors and ensure that all components are successfully installed, including the ControlNet and ComfyUI Manager.
  • 🖼️ The script mentions downloading and placing specific models in designated folders, such as 'checkpoint' for stable diffusion models and 'vae' for variational autoencoder models.
  • 🛠️ Different ControlNet models are explained, each serving specific functions like line drawing conversion, human pose detection, and semantic segmentation.
  • 🎨 The importance of choosing the right VAE and ControlNet models for desired artistic styles and effects is highlighted.
  • 🔄 The process of updating ComfyUI and its dependencies is described, including updating Python and other necessary packages.
  • 🔗 The video script details the connection process within ComfyUI for creating images using the installed models and ControlNets.
  • 🚀 The guide concludes with a demonstration of creating an image using ComfyUI with ControlNet, showcasing the power of the software in generating art from sketches and styles.
  • 💡 The video promises more content to come, indicating the creator's intention to produce further instructional videos on using ComfyUI and related tools.

Q & A

  • What is the title of the video guide about?

    -The title of the video guide is '100% WORKED!!! Step-by-Step Guide: Install ComfyUi, Controlnet & Models | Beginner to Expert', which suggests it is a comprehensive tutorial on installing and using ComfyUi, Controlnet, and various models, suitable for all skill levels.

  • Which version of ComfyUi is the speaker confident is working according to the transcript?

    -The speaker is confident that version 0.4_06.2023 of ComfyUi is working.

  • Where can the working version of ComfyUi be downloaded from?

    -The working version of ComfyUi can be downloaded from the 'Releases' section of the ComfyUi GitHub page.

  • What is the purpose of the 'install.py' script mentioned in the transcript?

    -The 'install.py' script is used to download and install all necessary packages and other dependencies for Controlnet.

  • What is the role of the 'update_config.bat' file in the process described?

    -The 'update_config.bat' file is used to update ComfyUi to the latest version and also to update Python and its dependencies.

  • What is the significance of the 'UI Manager' extension mentioned in the transcript?

    -The 'UI Manager' extension is very important for managing the nodes in ComfyUi and is installed in a process similar to that of Controlnet.

  • What are the recommended checkpoints for download and use with realistic vision and rev animated models?

    -The recommended checkpoints are 'Realistic Vision' and 'ReV Animated', which provide different styles of art and are known to work very well together.

  • What is the purpose of the VAE (Variational Autoencoder) in the context of the video guide?

    -The VAE is an encoder-decoder network that converts images into a latent code and back again, with different VAEs producing different styles of images.

  • What does the term 'Controlnet' refer to in the script?

    -Controlnet refers to a set of models that perform specific tasks such as converting lines to images, processing straight lines, and working with hand-drawn sketches, among others.

  • How important is it to match the correct YAML files with the .pth models for the Controlnet processors?

    -It is very important to ensure that each .pth model file for the Controlnet processors has a corresponding YAML file, as these files define the configuration and usage of the models.

  • What is the final step described in the transcript for creating an image using ComfyUi and Controlnet?

    -The final step involves adjusting the 'range' or 'strength' parameter, using the correct prompt, and running the process to generate the desired image, ensuring that the elements from the sketch and style images are correctly incorporated.
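One of the answers above stresses pairing every .pth model with its YAML config; that check is easy to script. A minimal sketch (the folder path is an assumption, point it at your ControlNet models folder):

```python
from pathlib import Path

def find_unpaired_models(controlnet_dir):
    """Return the .pth model files that lack a matching .yaml config."""
    controlnet_dir = Path(controlnet_dir)
    return sorted(
        p.name
        for p in controlnet_dir.glob("*.pth")
        if not p.with_suffix(".yaml").exists()
    )
```

Run it before starting ComfyUI; an empty list means every processor model has its configuration file in place.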



🛠️ Installation of Comfy UI and Control Net

The speaker is attempting to install the Comfy UI and Control Net for the third time, confident that version 0.4_06.2023 will work. They guide the audience through downloading the correct version from the GitHub page, extracting the files, and installing necessary components. Emphasis is placed on following the installation process carefully to avoid errors.
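Before launching, it can help to sanity-check the extracted folder so missing pieces surface as a clear list rather than a runtime error. A minimal sketch, assuming the portable-release layout (the path names below are assumptions; adjust them to your extract):

```python
from pathlib import Path

# Assumed layout of the extracted portable release.
EXPECTED = [
    "ComfyUI",                      # the application itself
    "ComfyUI/custom_nodes",         # where ControlNet and the Manager get cloned
    "ComfyUI/models/checkpoints",   # stable diffusion checkpoints go here
    "ComfyUI/models/vae",           # VAE files go here
]

def check_install(root):
    """Return the expected paths that are missing under the extracted folder."""
    root = Path(root)
    return [rel for rel in EXPECTED if not (root / rel).exists()]
```

If the returned list is non-empty, create the missing folders (or re-extract) before installing the custom nodes.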


🔄 Post-Installation Updates and Extensions

After successfully installing ControlNet, the speaker updates ComfyUI through an update batch file, which also updates Python and its dependencies. They then install an extension through the ComfyUI Manager, which is crucial for the preprocessors. The speaker confirms that both ControlNet and the ComfyUI Manager are installed on the system.


🎨 Downloading and Setting Up Checkpoints and VAEs

The speaker explains the importance of downloading checkpoints and VAEs for different styles of art from Civitai. They suggest using the 'Realistic Vision' and 'ReV Animated' checkpoints for processing images and describe the role of VAEs in converting images into latent code and back. The speaker also details the process of placing the downloaded models in the correct folders.
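Sorting the downloads into the right folders can be scripted. A minimal sketch, assuming ComfyUI's usual models layout (the name-based VAE heuristic is an assumption, so double-check each file it classifies):

```python
import shutil
from pathlib import Path

def destination(filename):
    """Map a downloaded file to its ComfyUI models subfolder (heuristic)."""
    suffix = Path(filename).suffix
    if suffix not in (".safetensors", ".ckpt", ".pt"):
        return None                      # not a model file
    if "vae" in filename.lower():
        return "models/vae"              # VAE files usually carry 'vae' in the name
    return "models/checkpoints"          # everything else is treated as a checkpoint

def sort_downloads(downloads_dir, comfy_root):
    """Move model files from the downloads folder into ComfyUI's folders."""
    moved = {}
    for f in Path(downloads_dir).iterdir():
        dest = destination(f.name)
        if dest:
            target = Path(comfy_root) / dest
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(target / f.name))
            moved[f.name] = dest
    return moved
```

Anything the heuristic cannot classify (readme files, previews) is left untouched in the downloads folder.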


📚 Understanding Different Control Net Processors

The speaker provides an overview of various ControlNet processors, such as Canny Edge, MLSD, and Scribble, each serving a specific function: Canny converts images to edge lines, MLSD processes straight lines, and Scribble works with hand-drawn sketches. They also mention the Human Pose model for body positioning and the segmentation model for semantic segmentation.
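The processor-per-task overview above can be condensed into a small lookup table. A sketch summarizing the video's descriptions (the task labels are illustrative, not official names):

```python
# Which ControlNet preprocessor to reach for, per task, per the video.
PROCESSOR_FOR_TASK = {
    "edge outlines": "Canny Edge",
    "straight lines / architecture": "MLSD",
    "hand-drawn sketches": "Scribble",
    "body positioning": "Human Pose",
    "labeled image regions": "Semantic Segmentation",
}

def pick_processor(task):
    """Suggest a preprocessor for a task; fall back to Canny for unknown tasks."""
    return PROCESSOR_FOR_TASK.get(task, "Canny Edge")
```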


🛑 Exploring Advanced Control Net Processors

The speaker delves into advanced ControlNet processors: Depth for extracting depth information, Normal Map for surface orientation, and Anime Line Drawing for creating clean, sharp lines. They discuss the importance of these models in applications such as 3D modeling and image processing.


🔄 Downloading and Configuring Control Net Models

The speaker instructs on downloading the latest Control Net models and emphasizes the importance of downloading both the model and its corresponding YAML file. They explain the process of placing the files in the correct folders and highlight the need to create YAML files manually for certain models.
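Creating the missing YAML files by hand, as the section describes, usually means copying an existing config next to the model. A minimal sketch (the template file name `cldm_v15.yaml` is an assumption; verify it matches your models' architecture before relying on it):

```python
import shutil
from pathlib import Path

def create_missing_yaml(controlnet_dir, template="cldm_v15.yaml"):
    """For each .pth without a .yaml, copy the template config next to it.

    Assumes every model shares the template's architecture; verify per model.
    """
    controlnet_dir = Path(controlnet_dir)
    template_path = controlnet_dir / template
    created = []
    for pth in controlnet_dir.glob("*.pth"):
        yaml_path = pth.with_suffix(".yaml")
        if not yaml_path.exists():
            shutil.copy(template_path, yaml_path)
            created.append(yaml_path.name)
    return sorted(created)
```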


🔧 Customizing and Testing Control Net Processors

The speaker demonstrates how to add and test Control Net processors within the Comfy UI. They use a sample image and explain the process of connecting various nodes, adjusting prompts, and using different models to achieve the desired outcome. The focus is on experimenting with different settings to find the optimal configuration.


🖌️ Combining Sketch and Style with Control Net

The speaker describes a process where they combine a sketch and a style image using ControlNet. They detail the steps of loading models, setting up the conditioning, and adjusting the strength values to balance the influence of the sketch and the style on the final output image.
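The balance between sketch and style is set by the two strength values. A toy illustration of their relative influence (this is intuition only, not ComfyUI's actual conditioning math):

```python
def blend_influence(sketch_strength, style_strength):
    """Normalize two strength values into relative influence percentages."""
    total = sketch_strength + style_strength
    if total == 0:
        raise ValueError("at least one strength must be positive")
    return {
        "sketch": round(100 * sketch_strength / total),
        "style": round(100 * style_strength / total),
    }
```

Raising one strength while holding the other fixed shifts the output toward that input, which is exactly the trade-off the speaker tunes by hand.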


🔍 Fine-Tuning the Image Processing Parameters

The speaker discusses the importance of fine-tuning parameters such as the strength, the style model, and the prompt to achieve the desired image. They demonstrate how adjusting these parameters can influence the visibility of elements like a motorcycle in the generated images.
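A systematic way to do this fine-tuning is to sweep the parameters and compare the results side by side. A minimal sketch (the run labels and parameter names are illustrative):

```python
from itertools import product

def build_sweep(strengths, prompts):
    """Enumerate (strength, prompt) combinations with readable run labels."""
    return [
        {"label": f"s{strength:.1f}_p{i}", "strength": strength, "prompt": prompt}
        for strength, (i, prompt) in product(strengths, enumerate(prompts))
    ]
```

Queue one generation per entry and keep the label in the output filename, so you can trace each image back to its settings.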


🎉 Conclusion and Future Content Tease

In conclusion, the speaker reflects on the successful installation and application of Comfy UI, Control Net, and related models. They express their enthusiasm for creating more content and videos, hoping that the audience will enjoy and benefit from their work.



💡Comfy UI

Comfy UI is a user interface for configuring and running models in the context of AI and machine learning. In the video, it is mentioned as a crucial tool for installing and managing models and extensions, such as ControlNet and various AI models. The script discusses the process of downloading and installing a specific version of Comfy UI from its GitHub page to ensure compatibility and functionality.


💡ControlNet

ControlNet refers to a system or set of tools that allow for the control and manipulation of AI models, enhancing their performance and accuracy. The script describes the installation of ControlNet through GitHub, emphasizing its importance in the process of setting up the AI environment and its integration with Comfy UI.


💡Models

In the context of this video, 'models' refers to AI models that are used for various tasks such as image generation, processing, and style transfer. The script mentions downloading and installing different models like 'Realistic Vision' and 'ReV Animated' from sources like Civitai, which are essential for the functionality of the AI system being set up.

💡VAE (Variational Autoencoder)

VAE, or Variational Autoencoder, is a type of neural network that learns to compress data and then reconstruct it. In the video, VAEs are used for image processing, where they convert images into a code and back to an image, allowing for the transformation of styles and features. The script specifies using certain VAEs for achieving specific visual results with the AI models.
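The compression a Stable Diffusion VAE performs is concrete: for SD 1.x models, the VAE maps each 8x8 pixel patch to a single 4-channel latent value. A quick sketch of the size arithmetic:

```python
def latent_shape(width, height, downscale=8, channels=4):
    """Latent tensor shape for an image size (SD 1.x VAE: /8 spatial, 4 channels)."""
    if width % downscale or height % downscale:
        raise ValueError("image sides should be multiples of the downscale factor")
    return (channels, height // downscale, width // downscale)
```

This is why ComfyUI workflows operate on much smaller latents than the final pixel image: a 512x512 image becomes a 4x64x64 latent before decoding.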


💡Checkpoint

A checkpoint in machine learning is a saved state of the model during training, which can be used to continue training or to perform inference. The script mentions placing stable diffusion checkpoint models downloaded from Civitai into a specific folder, indicating that these checkpoints represent different styles of art and are essential for the diversity of outputs generated by the AI.


💡Preprocessors

Preprocessors are components in AI systems that prepare data for processing by the main model. In the script, after installing ControlNet and Comfy UI Manager, the presence of pre-processors is verified in the UI, suggesting that they are necessary for the initial stages of image processing before the main model generates outputs.

💡Control Net Processors

Control Net Processors are specific models within the ControlNet system that perform particular tasks, such as converting sketches to images or processing lines into a specific style. The video script provides examples of different processors like 'Canny Edge' and 'Scribble', each serving a unique purpose in the image generation pipeline.

💡Semantic Segmentation

Semantic Segmentation is a process in AI where an image is divided into segments, each labeled with a description of the content. The script describes a segmentation model that translates each segment into a labeled object, which is useful for tasks like image editing and 3D modeling.

💡Normal Map

A normal map is a type of image that represents the surface details in a 3D scene, encoding the orientation of surfaces. The script explains that a normal map processor can understand and process the image dimension, translating the orientation of polygons into colors, which is vital for 3D modeling and rendering applications.
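The 'orientation translated into colors' idea is a fixed encoding: each component of a unit surface normal, in the range [-1, 1], maps linearly into a color channel in [0, 255]. A minimal sketch:

```python
def normal_to_rgb(nx, ny, nz):
    """Encode a unit surface normal as an RGB pixel (standard normal-map scheme)."""
    def enc(c):
        if not -1.0 <= c <= 1.0:
            raise ValueError("normal components must be in [-1, 1]")
        return round((c + 1.0) * 127.5)   # [-1, 1] -> [0, 255]
    return (enc(nx), enc(ny), enc(nz))
```

A flat surface facing the camera has normal (0, 0, 1), which encodes to the characteristic light-blue color that dominates most normal maps.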

💡Style Model

A style model in the context of AI is used to apply a specific artistic style to an image or generate an image in that style. The video script discusses using a style model to transfer the look of one reference image onto another, such as restyling a sketch of a motorcycle with a separate style image.

💡Q Prompt

'Q Prompt' most likely refers to ComfyUI's 'Queue Prompt' button, which submits the current workflow for execution. The script mentions queuing the prompt repeatedly while adjusting settings to find the right balance in the image generation process, making it a key part of the iterate-and-compare workflow.


A step-by-step guide to install ComfyUI, Controlnet, and models for beginners to experts.

Use the confirmed working version of ComfyUI (0.4_06.2023) for guaranteed functionality.

Download the specific version of ComfyUI from the GitHub release page.

Follow the installation instructions on the Controlnet GitHub page for proper setup.

Use the command prompt to navigate to the custom nodes directory for installation.

Install Controlnet and ComfyUI Manager through the custom node folder.

Run the update_config.bat file to update ComfyUI and its dependencies.

Install the required extension through the ComfyUI Manager for additional functionality.

Download and place the stable diffusion checkpoint models for different art styles.

Use realistic vision and rev animated checkpoints for high-quality image processing.

Understand the importance of VAEs in converting images to code and vice versa.

Download and place necessary VAEs in the designated folder for specific image styles.

Controlnet requires specific models and their config files to be manually downloaded and placed.

Explore different Controlnet processors for specific tasks such as Canny Edge, MLSD, and Scribble.

Use the human pose model to process and pose body parts in images.

Utilize semantic segmentation to convert images into color-coded objects.

Employ depth processors to estimate depth (the third dimension) for 3D modeling.

Use normal map processors to understand the orientation of polygons in 3D models.

Experiment with different Controlnet models and processors for various image processing tasks.

Create YAML files for Controlnet models to ensure compatibility with ComfyUI.

Place specific models in the style model folder instead of the Controlnet folder.

Use the ComfyUI interface to load models, set prompts, and process images.

Adjust the strength value and style model settings to fine-tune image processing results.

Experiment with different prompts and settings to achieve desired image outcomes.