The simplest way to self-host ControlNet SD 1.5 Normal. Launch a dedicated cloud GPU server running Lab Station OS to download and serve the model using any compatible app or framework.
Download model weights for local inference. Must be used with a compatible app, notebook, or codebase. May run slowly, or not work at all, depending on your system resources, particularly GPU(s) and available VRAM.
ControlNet SD 1.5 Normal enables surface geometry control in Stable Diffusion 1.5 using normal maps. It employs Bae's normal map estimation method, following ScanNet's color protocol (blue=front, red=left, green=top), offering improved handling of real normal maps from 3D rendering engines.
ControlNet SD 1.5 Normal is a specialized model within the ControlNet family, designed to add conditional control to Stable Diffusion 1.5 using normal maps as guidance. The model implements research from "Adding Conditional Control to Text-to-Image Diffusion Models", focusing specifically on using surface normals to influence image generation.
The model maintains the same architecture as ControlNet 1.0 but introduces significant improvements in its preprocessing and training methodology. A key advancement is the adoption of Bae's normal map estimation method, replacing the previous "normal-from-midas" approach. This new method provides more physically accurate results and enables the model to properly interpret real normal maps from rendering engines, provided they follow ScanNet's color protocol (blue for front, red for left, green for top).
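The color protocol is a direct mapping from unit normal vectors to RGB pixels. As an illustrative sketch (not the model's actual annotator code), each component can be mapped from [-1, 1] into [0, 255], so a camera-facing normal encodes as blue, a left-facing one as red, and an upward-facing one as green:

```python
def encode_normal(nx: float, ny: float, nz: float) -> tuple[int, int, int]:
    """Encode one unit normal as an 8-bit RGB pixel under a ScanNet-style
    protocol: x -> R (left), y -> G (top), z -> B (front).
    Illustrative sketch only -- not the model's actual annotator code."""
    def to_byte(c: float) -> int:
        return round((c + 1.0) * 0.5 * 255.0)  # map [-1, 1] -> [0, 255]
    return (to_byte(nx), to_byte(ny), to_byte(nz))

print(encode_normal(0.0, 0.0, 1.0))  # front-facing surface -> blue-dominant
print(encode_normal(1.0, 0.0, 0.0))  # left-facing surface  -> red-dominant
print(encode_normal(0.0, 1.0, 0.0))  # upward-facing surface -> green-dominant
```

Normal maps produced by rendering engines that follow this same convention can therefore be fed to the model directly, without re-estimation.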
The training data consists of normal maps generated using Bae's method, which addresses limitations of the previous approach. While specific training dataset details aren't publicly disclosed, the model demonstrates robust performance in handling both synthetic normal maps and those generated from real-world images. The developers have indicated that the neural network architecture will remain unchanged until at least ControlNet 1.5, ensuring stability and compatibility.
ControlNet SD 1.5 Normal excels at retaining fine surface detail and geometry during image generation. It works in conjunction with a preprocessor (annotator) that converts source images into detectmaps, which then guide the Stable Diffusion generation process. Because the improved preprocessor was trained on NYU-V2's normal-map visualization convention, the model is particularly effective at interpreting real normal maps from rendering engines.
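The real annotator is Bae's learned normal estimator, but its role in the pipeline can be illustrated with a much simpler stand-in: deriving per-pixel normals from a depth image via finite differences and encoding them as a detectmap. Everything below is a toy sketch under that assumption, not the actual preprocessor:

```python
import numpy as np

def depth_to_detectmap(depth: np.ndarray) -> np.ndarray:
    """Toy stand-in for a normal-map annotator: derive per-pixel surface
    normals from a depth image via finite differences, then encode them as
    an 8-bit RGB detectmap. Illustrative only -- the real ControlNet
    preprocessor is Bae's learned normal estimator, not a gradient filter.

    depth: (H, W) float array, larger values = farther from camera."""
    dz_dx = np.gradient(depth, axis=1)   # horizontal depth change
    dz_dy = np.gradient(depth, axis=0)   # vertical depth change
    # A surface normal is proportional to (-dz/dx, -dz/dy, 1); normalize it.
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Map each component from [-1, 1] into [0, 255] (ScanNet-style colors).
    return ((normals + 1.0) * 0.5 * 255.0).astype(np.uint8)

# A flat depth plane yields uniform camera-facing (blue-dominant) normals.
flat = np.full((4, 4), 5.0)
detectmap = depth_to_detectmap(flat)
```

In actual use, the detectmap image produced by the annotator is what gets passed to the ControlNet as the conditioning input alongside the text prompt.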
The model can be used alongside other ControlNet variants through the Automatic1111 WebUI ControlNet extension, allowing for complex control over image generation. Multiple ControlNet instances can be combined to achieve more nuanced results, with parameters including control weight, starting and ending control steps, control mode, and various resize options.
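The per-unit settings listed above can be sketched as a small configuration object. The field names and default values here are hypothetical illustrations of the parameters the extension exposes, not its exact identifiers:

```python
from dataclasses import dataclass

@dataclass
class ControlNetUnit:
    """Illustrative sketch of one ControlNet unit's settings, mirroring the
    parameters described above. Field names are hypothetical, not the
    extension's exact identifiers."""
    model: str                      # which ControlNet variant to apply
    weight: float = 1.0             # control weight (influence strength)
    guidance_start: float = 0.0     # fraction of steps before control begins
    guidance_end: float = 1.0       # fraction of steps after which it stops
    control_mode: str = "balanced"  # e.g. balanced vs. prompt/control priority
    resize_mode: str = "crop_and_resize"

# Multiple units can be stacked for layered control over one generation,
# e.g. normal-map geometry plus an edge-based variant at reduced weight.
units = [
    ControlNetUnit(model="control_v11p_sd15_normalbae", weight=0.8),
    ControlNetUnit(model="control_v11p_sd15_canny", weight=0.5,
                   guidance_end=0.6),
]
```

Restricting a unit's active range (for example, ending its guidance at 60% of the steps) lets the geometry constrain early composition while leaving fine detail to the prompt.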
A notable variant in the ControlNet family is the Stability AI Control-LoRA version, which offers a more compact implementation through Low-Rank Adaptation. While the original ControlNet models are approximately 4.7GB, the Control-LoRA versions are significantly smaller (738MB for rank 256, 377MB for rank 128), making them more accessible for users with limited computational resources.
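The size gap follows directly from low-rank factorization: a dense weight update of shape d_in x d_out stores d_in * d_out parameters, while a rank-r LoRA pair stores only r * (d_in + d_out). A back-of-the-envelope check on a single layer (the layer shape below is illustrative, not taken from the published Control-LoRA checkpoints):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameter count of a rank-r LoRA pair A (d_in x r) and B (r x d_out),
    which together approximate a dense d_in x d_out weight update."""
    return rank * (d_in + d_out)

# Hypothetical example layer: a 1280x1280 projection (a width that appears
# in SD 1.5's UNet); the dense update would hold 1280 * 1280 parameters.
dense = 1280 * 1280
r256 = lora_params(1280, 1280, 256)   # 40% of the dense count
r128 = lora_params(1280, 1280, 128)   # 20% of the dense count
```

Halving the rank halves the LoRA parameter count, which is consistent with the rank-128 file (377MB) being roughly half the size of the rank-256 file (738MB).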
The model can be installed in either `stable-diffusion-webui\extensions\sd-webui-controlnet\models` or `stable-diffusion-webui\models\ControlNet`. For systems with 8GB GPUs, setting `save_memory = True` in `config.py` is recommended. The model is primarily intended for research and academic purposes, though it has found widespread use in various creative applications.
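The low-VRAM toggle is a single flag in the repository's `config.py`; only the `save_memory` line itself comes from the text above, and the comment is an illustrative gloss:

```python
# config.py (fragment) -- only the save_memory flag is from the source;
# the explanatory comment below is illustrative.
save_memory = True  # trade speed for lower VRAM use on ~8GB GPUs
```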