ComfyUI-ZLUDA

Windows-only version of ComfyUI which uses ZLUDA to get better performance with AMD GPUs.

*** Check out my new project, comfyui-rocm. It uses AMD's official ROCm and PyTorch packages from their TheRock repo. At the moment RDNA2 (no iGPUs), RDNA3, and RDNA4 GPUs are supported; new RDNA1 builds seem to be appearing as well, and I will try to add them later. It differs from ComfyUI-Zluda in some points: it doesn't use ZLUDA or other third-party apps to launch ComfyUI, it needs fewer tricks to make ComfyUI run well on AMD GPUs, and I tried a different approach with Python as well: it is installed into a folder, so in a sense it is partially portable. It hopefully auto-detects your GPU (from the supported ones), installs the necessary ROCm and PyTorch packages, and then installs ComfyUI. Give it a try! ***

FOR THOSE THAT ARE GETTING TROJAN DETECTIONS IN NCCL.DLL IN ZLUDA FOLDER

In the developer's words: "nccl.dll is a dummy file; it does nothing. When one of its functions is called, it just returns a 'not supported' status. nccl.dll and cufftw.dll are dummy files introduced only for compatibility (to run applications that refuse to start without them but rarely or never use them).

zluda.exe hijacks Windows API and injects some DLLs. Its behavior can be considered malicious by some antiviruses, but it does not hurt the user.

The antiviruses, including Windows Defender on my computer, didn't detect them as malicious when I made the nightly build. But somehow the nightly build is now detected as a virus on my end too."

SOLUTION: IGNORE THE WARNING AND EXCLUDE THE ZLUDA (or better the whole comfyui-zluda) FOLDER FROM DEFENDER.

What's New? (19-03-2026) [:: ltxv2.3 i2v workflow added ::]

Recent Updates

Added an ltxv2.3 workflow to the workflows folder, designed to be fast and efficient while using the lowest amount of memory. It is for i2v and uses only the distilled version of the ltxv2.3 model: no upscaling, no second stage (based on a workflow by Kijai).

Added a simple "cfz-cudnn" node to the cfz-caching nodes. It can connect to ANY node AND can work without outputting to anything, so it can be used like this, for example (just to make VAE work):

image

In this example we disable cuDNN just for VAE decoding and enable it afterwards, without connecting the node to the image (we could, but this just shows how it can work).

  • Our friend sfinktah's impressive solution for automatically enabling and disabling cuDNN in ComfyUI, the "ovum-cudnn-wrapper" node, is now automatically installed when install-n is used. Be sure to go to the node's GitHub page and give him a star: https://github.com/sfinktah/ovum-cudnn-wrapper . Also added a new "wan 2.2 i2v workflow", based on Kijai's excellent wan wrapper node pack, to the cfz/workflows folder. Try it; you can also remove the image input and use it for t2v (with the correct LoRAs for that). Please read the nodes carefully.

  • Changed the node storage folder and added the CFZ-Condition-Caching node. This allows you to save and load conditionings (prompts, basically). It helps on two fronts: if you are using the same prompts over and over it skips the CLIP step, AND, more importantly, it skips loading the CLIP model altogether, giving you more memory to load other things, the main model being the most important. (It is based on this node pack: https://github.com/alastor-666-1933/caching_to_not_waste)

Screenshot 2025-09-02 182907
  • I also uploaded an example workflow showing how to use the nodes in your workflows. It is not fully working; it is there to give an idea of how to incorporate them into your workflows.
  • Added the "cfz-vae-loader" node to the CFZ folder - it enables changing VAE precision on the fly without using --fp16-vae etc. on the command line. This is important because while WAN works faster with an fp16 VAE, Flux produces black output if one is used. Start ComfyUI normally and add this node to your WAN workflow to change precision only for that model type.

  • Use update.bat if comfyui.bat or comfyui-n.bat can't update (this happens when they are themselves the files that need updating: delete them, then run update.bat). When you run comfyui(-n).bat afterwards, it copies the correct ZLUDA and uses that.

  • Updated the included ZLUDA version for the new install method to 3.9.5 nightly (the latest version available). You MUST use the latest AMD GPU drivers (>= 25.5.1) with this setup, otherwise there will be problems later.

New Nodes

  • Added the "CFZ Cudnn Toggle" node - for models that don't work with cuDNN (which is enabled by default with the new install method). To use it:

    • Connect it before the KSampler (on the latent_image input or any latent input) and disable cuDNN
    • After VAE decoding (where most problems occur), re-enable cuDNN: add another toggle after VAE decoding, select the appropriate output (e.g. audio_output connected to the save audio node), and enable cuDNN there
  • "CFZ Checkpoint Loader" was completely redone - the previous version was broken and could corrupt models if you loaded with it and quit halfway. The new version works outside checkpoint loading, doesn't touch the original file, and makes a copy first when it quantizes the model.

    • Please delete "cfz_checkpoint_loader.py" and use the newly added "cfz_patcher.py"
    • It has three separate nodes and is much safer and better

Note: Both nodes are inside the "cfz" folder. To use them, copy them into custom_nodes - they will appear the next time you open ComfyUI. To find them, search for "cfz".
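On a default install, that copy can be done from a command prompt opened in the ComfyUI-Zluda folder. A minimal sketch; cfz_patcher.py is named above, but the toggle node's exact file name is a guess, so check the cfz folder for the real one:

```shell
:: Run from the ComfyUI-Zluda root folder.
:: Copies the CFZ node files into custom_nodes; they load on the next ComfyUI start.
copy cfz\cfz_patcher.py custom_nodes\
:: hypothetical file name - check the cfz folder for the actual one:
copy cfz\cfz_cudnn_toggle.py custom_nodes\
```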

Model Fixes

  • Florence2 is now fixed (probably some other nodes too). Previously you needed to disable "do_sample" (change it from True to False); now it works without needing to edit the node.

Custom ZLUDA Versions

  • Added support for any ZLUDA version - to use with the HIP versions you want (such as 6.1 - 6.2). After installing:
    1. Close the app
    2. Run patchzluda2.bat
    3. It will ask for the URL of the ZLUDA build you want to use
    4. Choose one from lshqqytiger's ZLUDA Fork
    5. Paste the link via right-click (example of a correct link: https://github.com/lshqqytiger/ZLUDA/releases/download/rel.d60bddbc870827566b3d2d417e00e1d2d8acc026/ZLUDA-windows-rocm6-amd64.zip)
    6. Press enter and it will patch that ZLUDA into ComfyUI for you

Documentation

  • Added a "Small Flux Guide" - aims to use low VRAM and provides the basic necessary files needed to get Flux generation running. View Guide

Pre-Requisites

Installation (Windows-Only)

Important Note

DON'T INSTALL into your user directory or inside the Windows or Program Files directories. Don't install to a directory with non-English characters in its path. The best option is to install to the root directory of whichever drive you'd like.

For Old Ryzen APUs and RX 400/500 Series GPUs (HIP 5.7.1)

Note: You might need older drivers for SDK 5.7.1 and the old ZLUDA to work, so if you are getting errors with the latest drivers, please install an older version (below 25.5.1)

  1. Install HIP SDK 5.7.1 from "https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html", "Windows 10 & 11 5.7.1 HIP SDK"

  2. Make the following changes to your system environment variables (instructions here):

  • Add entries for HIP_PATH and HIP_PATH_57 to your System Variables (not user variables); both should have this value: C:\Program Files\AMD\ROCm\5.7\
  • Check the PATH system variable and ensure that C:\Program Files\AMD\ROCm\5.7\bin is in the list. If not, add it.
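Those variables can also be set from an elevated command prompt with setx. A sketch; setx /M writes System Variables, needs administrator rights, and a new terminal must be opened before the change is visible:

```shell
:: Set HIP_PATH and HIP_PATH_57 as System Variables (run cmd as administrator).
setx /M HIP_PATH "C:\Program Files\AMD\ROCm\5.7\"
setx /M HIP_PATH_57 "C:\Program Files\AMD\ROCm\5.7\"
:: Verify in a NEW cmd window:
echo %HIP_PATH%
echo %PATH% | find /I "ROCm\5.7\bin"
```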
  3. Get library files for your GPU from the Brknsoul Repository (for HIP 5.7.1): https://github.com/brknsoul/ROCmLibs or https://www.mediafire.com/file/boobrm5vjg7ev50/rocBLAS-HIP5.7.1-win%2528old_gpu%2529.rar/fil
  • Go to the folder C:\Program Files\AMD\ROCm\5.7\bin\rocblas, look for a "library" folder, and back up the files inside to somewhere else.

  • Open your downloaded optimized library archive and put its files inside the library folder (overwriting if necessary): C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library

  • There could be a rocblas.dll file in the archive as well; if it is present, copy it inside C:\Program Files\AMD\ROCm\5.7\bin
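The backup-and-replace steps above can be sketched as commands. Downloads\rocblas-libs is a hypothetical folder where the archive was extracted, and the prompt must be run as administrator since Program Files is write-protected:

```shell
:: Back up the stock rocBLAS library files, then drop in the optimized ones.
set "ROCBLAS=C:\Program Files\AMD\ROCm\5.7\bin\rocblas"
xcopy "%ROCBLAS%\library" "%USERPROFILE%\rocblas-library-backup\" /E /I
xcopy "%USERPROFILE%\Downloads\rocblas-libs\library" "%ROCBLAS%\library\" /E /Y
:: If the archive also contained a rocblas.dll, place it next to the other DLLs:
if exist "%USERPROFILE%\Downloads\rocblas-libs\rocblas.dll" copy /Y "%USERPROFILE%\Downloads\rocblas-libs\rocblas.dll" "C:\Program Files\AMD\ROCm\5.7\bin\"
```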

  4. Restart your system.

  5. Open a cmd prompt. Easiest way: in Windows Explorer, go to the folder or drive you want to install this app to, type "cmd" in the address bar, and press enter.

  6. Copy these commands one by one and press enter after each one:

git clone -b pre24patched https://github.com/patientx/ComfyUI-Zluda 
cd ComfyUI-Zluda
install-for-older-amd.bat
  • You can use comfyui.bat (or put a shortcut to it on your desktop) to run the app later. My recommendation is to make a copy of comfyui.bat under another name and modify that copy, so you won't get into trouble when updating.

For AMD GPU VEGA through 6700 (HIP 6.2.4)

IMPORTANT: With this install method you MUST make sure you have the latest GPU drivers (specifically you need drivers above 25.5.1)

The GPUs listed below should have HIP 6.4.2 drivers available (though they have not been tested, and some may not work with the newer triton-miopen stuff). If you are updating from 6.2.4 to 6.4.2, remember to uninstall 6.2.4 and then delete the ROCm directory from your Program Files folder otherwise there may be problems even after uninstalling.

| Card Type / Model Numbers | gfx code |
| --- | --- |
| AMD Radeon RX 5500XT | gfx1012 |
| AMD Radeon RX 5700XT | gfx1010 |
| AMD Radeon Pro 540 | gfx1011 |
| AMD Radeon RX 6500, 6500XT, 6500M, 6400, 6300, 6450, W6400 | gfx1034 |
| AMD Radeon RX 6600/6600XT | gfx1032 |
| AMD Radeon RX 6700/6700XT | gfx1031 |
| Mobile and Integrated | |
| AMD Radeon RX 780M / 740M / Ryzen Z1 | gfx1103 |
| AMD Radeon RX 660M / 680M | gfx1035 |
| AMD Radeon Graphics 128SP (all 7000 series IGP variants) | gfx1036 |
  • [There is a legacy install method still available via install-legacy.bat (the old "install.bat"), which doesn't include the miopen-triton stuff, but I strongly recommend a standard install, since we have solved most of the problems with these GPUs. If you want, you can still install HIP 5.7.1 or 6.2.4 and use the libraries for your GPU for that version, without installing the miopen stuff. Try install-n.bat first; if you have problems, then go back to install-legacy.bat.]

  1. Install HIP SDK 6.2.4 from AMD ROCm Hub - "Windows 10 & 11 6.2.4 HIP SDK"

  2. Make the following changes to your system environment variables (instructions here):
  • Add entries for HIP_PATH and HIP_PATH_62 to your System Variables (not user variables); both should have this value: C:\Program Files\AMD\ROCm\6.2\

  • Check the PATH system variable and ensure that C:\Program Files\AMD\ROCm\6.2\bin is in the list. If not, add it.

  3. Download this addon package from Google Drive (or an alternative source)
  • Extract the addon package into C:\Program Files\AMD\ROCm\6.2 overwriting files if asked

  • Get library files for your GPU from likelovewant Repository (for HIP 6.2.4)

  • Go to the folder C:\Program Files\AMD\ROCm\6.2\bin\rocblas, where there should be a "library" folder. Back up the files inside to somewhere else.

  • Open your downloaded optimized library archive and put them inside the library folder (overwriting if necessary): C:\Program Files\AMD\ROCm\6.2\bin\rocblas\library

  • If there's a rocblas.dll file in the archive, copy it inside C:\Program Files\AMD\ROCm\6.2\bin

  4. Restart your system

  5. Open a command prompt. Easiest way: in Windows Explorer, go to the folder or drive where you want to install this app, type "cmd" in the address bar, and press enter

  • Copy these commands one by one and press enter after each:
git clone https://github.com/patientx/ComfyUI-Zluda
cd ComfyUI-Zluda
install-n.bat
For AMD GPU 6800 and above (Including 7000 and 9000 series)

IMPORTANT: With this install method you MUST make sure you have the latest GPU drivers (specifically above 25.5.1)
  1. Install HIP SDK 6.4.2 from AMD ROCm Hub - "Windows 10 & 11 6.4.2 HIP SDK"

  2. Make the following changes to your system environment variables (instructions here):

  • Add entries for HIP_PATH and HIP_PATH_64 to your System Variables (not user variables), both should have this value: C:\Program Files\AMD\ROCm\6.4\

  • Check the PATH system variable and ensure that C:\Program Files\AMD\ROCm\6.4\bin is in the list. If not, add it.

  3. Restart your system

  4. Open a command prompt. Easiest way: in Windows Explorer, go to the folder or drive where you want to install this app, type "cmd" in the address bar, and press enter

  • Copy these commands one by one and press enter after each:
git clone https://github.com/patientx/ComfyUI-Zluda
cd ComfyUI-Zluda
install-n.bat

IF YOUR GPU IS NOT LISTED AS SUPPORTED BY HIP 6.4.2:

  • Get library files for your GPU from "6.4.2 Libraries for unsupported GPUs" (for HIP 6.4.2)
  • Go to the folder C:\Program Files\AMD\ROCm\6.4\bin\rocblas, where there should be a "library" folder. Back up the files inside to somewhere else.
  • Open your downloaded optimized library archive and put them inside the library folder (overwriting if necessary): C:\Program Files\AMD\ROCm\6.4\bin\rocblas\library
  • If there's a rocblas.dll file in the archive, copy it inside C:\Program Files\AMD\ROCm\6.4\bin

First-Time Launch

  • If you have done every previous step correctly, it will install without errors and start ComfyUI-ZLUDA for the first time. If you already have checkpoints, copy them into your models/checkpoints folder so you can use them with ComfyUI's default workflows. You can use ComfyUI's Extra Model Paths YAML file to specify custom folders.

  • The first generation will take longer than usual: ZLUDA is compiling for your GPU, and it does this once for every new model type. This is necessary and unavoidable.

  • To run in the future, run comfyui-n.bat (unless you are on an older GPU, in which case run comfyui.bat).

  • You can add custom settings to comfyui-user.bat which will not get overwritten during software updates.

Updating ComfyUI-ZLUDA

  • Every time comfyui.bat is run, it automatically updates to the latest ZLUDA-compatible version. Using ComfyUI's Software Update may break your installation. Always either depend on the launcher batch file or do a fresh git pull.
  • Only use ComfyUI-Manager to update the extensions (Manager -> Custom Nodes Manager -> Set Filter To Installed -> Click Check Update On The Bottom Of The Window)

Troubleshooting

Incompatibilities

  • DO NOT use non-English characters in the names of the folders you put comfyui-zluda under.
  • xformers isn't usable with ZLUDA, so any nodes / packages that require it don't work.
  • Make sure you do not have any residual NVIDIA graphics drivers installed on your system.

Common Error Messages

  • module 'torch.compiler' has no attribute 'is_compiling' error: add --disable-async-offload to the launcher batch file. (This is now added by default to both bat files; you can try without it, and if that works for you, all is good.) (thanks https://github.com/nota-rudveld)
  • caffe2_nvrtc.dll-related errors: if you are sure you properly installed HIP and can see it on PATH, please DON'T use Python from the Windows Store; use the link provided or 3.11 from the official website. After uninstalling the Windows Store Python and installing the one from the website, be sure to delete the venv folder and run install.bat once again.
  • RuntimeError: GET was unable to find an engine to execute this computation or RuntimeError: FIND was unable to find an engine to execute this computation in the VAE decoding stage: use the CFZ CUDNN Toggle node between the KSampler latent and VAE decoding, and leave the enable_cudnn setting "False". The setting persists for the rest of that run until you close ComfyUI from the command line, so you won't see this error again.
Screenshot 2025-08-25 123335

That node can actually be used between conditioning or image loading etc., so it is not only usable between latent and VAE decoding. You can also use it in a simple workflow that just disables the setting, then use any other workflow for the rest of the instance without worry. (Try running the 1step-cudnn-disabler-workflow in the cfz folder once after you start ComfyUI-Zluda; it can use any SD1.5 or SDXL model and just generates a 1-step image then previews it, so no saving.) After that workflow runs once, switch to any workflow or start a new one.

Triton-related Errors

  • Remove Visual Studio 2022 (if you have already installed it and are getting errors), install "https://aka.ms/vs/17/release/vs_BuildTools.exe", and then use the "Developer Command Prompt" to run ComfyUI. Most users shouldn't need this, but try it nevertheless.
  • RuntimeError: CUDNN_BACKEND_OPERATIONGRAPH_DESCRIPTOR: cudnnFinalize FailedmiopenStatusInternalError cudnn_status: miopenStatusUnknownError: if this is encountered at the end, during VAE decoding, use tiled VAE decoding, either from the official Comfy nodes or from Tiled-Diffusion (my preference). VAE decoding is overall better with tiled decoding anyway.
  • Note for 7900XT users: If running comfyui-n or comfyui-user terminates in the middle of the triton kernel test, follow the instructions in this bug: #384 (comment)

Resetting An Installation and Clearing Caches

  • If you can't solve your problem with these steps and want to start from zero, delete the "venv" folder and re-run the whole setup step by step.
  • Wipe your pip cache at "C:\Users\USERNAME\AppData\Local\pip\cache". You can also do this while the venv is active with: pip cache purge
  • Run cache-clean.bat from the Comfyui-ZLUDA folder to clear caches from the following directories (note that this will require recompiling models again but may fix errors):
    1. C:\Users\yourusername\AppData\Local\ZLUDA\ComputeCache
    2. C:\Users\yourusername\.miopen
    3. C:\Users\yourusername\.triton
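If you prefer to clear those caches by hand, the equivalent commands are roughly the following (a sketch of what cache-clean.bat does; deletion is permanent, and models will recompile on the next run):

```shell
:: Clear the ZLUDA, MIOpen and Triton caches; forces recompilation on next launch.
rmdir /S /Q "%LOCALAPPDATA%\ZLUDA\ComputeCache"
rmdir /S /Q "%USERPROFILE%\.miopen"
rmdir /S /Q "%USERPROFILE%\.triton"
```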
  • If you can't git pull to the latest version, run these commands: git fetch --all and then git reset --hard origin/m

If ComfyUI is Rendering Using Your Integrated Graphics

  • If you have an integrated AMD GPU (e.g. AMD Radeon(TM) Graphics), you need to add HIP_VISIBLE_DEVICES=1 to your environment variables. Other possible variables to use: ROCR_VISIBLE_DEVICES=1, HCC_AMDGPU_TARGET=1. This tells it to use GPU 1, i.e. the discrete GPU when the iGPU is device 0 (the number could differ if you have multiple GPUs). Otherwise it will default to using your iGPU, which will most likely not work. This behavior is caused by a bug in the ROCm driver.
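The variable can be set from a command prompt as well (a sketch: setx stores it for future sessions, while set applies only to the current terminal):

```shell
:: Hide the iGPU (device 0) so ComfyUI runs on the discrete GPU (device 1).
setx HIP_VISIBLE_DEVICES 1
:: For the current terminal session only:
set HIP_VISIBLE_DEVICES=1
```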

Credits
