This is the image I created with ComfyUI, using DreamShaperXL. Alternatively, you can use SD.Next and set the diffusers backend to sequential CPU offloading: it loads only the part of the model it is currently using while it generates the image, so you end up using only around 1-2 GB of VRAM.

Part 4 (this post) — we will install custom nodes and build out workflows with img2img, ControlNets, and LoRAs, producing SDXL 0.9 images consistent with the official approach (to the best of our knowledge), plus Ultimate SD Upscaling.

Now that you have been lured into the trap by the synthography on the cover, welcome to my alchemy workshop! A test with the sample prompt shows a really great result. Download this workflow's JSON file and load it into ComfyUI to begin your journey of generating images with the SDXL model. A technical report on SDXL is now available here. Place LoRAs in the folder ComfyUI/models/loras.

The refiner is only good at refining away the noise still left in an image from the base pass — roughly the last 35% — and it will give you a blurry result if you try to use it on its own. With the refiner involved, 35-40 total steps work well. Also note that you can't pass a latent from SD 1.5 to SDXL, because the latent spaces are different; this is the key difference from basic SD1.x and SD2.x workflows, which use a single model end to end.

There are several options for how you can use the SDXL model: ComfyUI, SD.Next, or A1111, where base + refiner are now supported (see Olivio Sarikas's video "How To Use Stable Diffusion XL 1.0", or AP Workflow 6 for ComfyUI). 25:01 How to install and use ComfyUI on a free… Eventually the web UI will add the refiner hand-off as a built-in feature, and many people will return to it because they don't want to micromanage every detail of the workflow.

On hardware: I have an RTX 3060 with 12 GB of VRAM and my PC has 12 GB of RAM, and it runs fine. Does that mean 8 GB of VRAM is too little in A1111? Is anybody able to run SDXL on an 8 GB VRAM GPU in A1111? I don't know what you are doing wrong to wait 90 seconds. Wire up everything required to a single workflow.
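The base-to-refiner hand-off above is usually expressed as a fraction of the total steps. A minimal sketch of the bookkeeping — the 0.8 default is illustrative, not a value fixed by any of the tools:

```python
def split_steps(total_steps: int, handoff: float = 0.8):
    """Split a sampling run between the SDXL base and refiner models.

    `handoff` is the fraction of steps the base model runs; the refiner
    finishes the rest (e.g. 0.8 leaves the last 20% of the timesteps to
    the refiner, roughly the remaining-noise regime it is trained for).
    """
    if not 0.0 < handoff < 1.0:
        raise ValueError("handoff must be strictly between 0 and 1")
    base_end = round(total_steps * handoff)
    # Base runs steps [0, base_end); refiner runs [base_end, total_steps).
    return base_end, total_steps - base_end

base_steps, refiner_steps = split_steps(25, 0.8)  # (20, 5)
```

With 25 total steps and a 0.8 hand-off, the base does 20 steps and the refiner the last 5, matching the "20 base steps + 5 refiner steps" settings quoted later in this post.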
This repo (sdxl-0.9-usage) is a tutorial intended to help beginners use the newly released model, stable-diffusion-xl-0.9. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. SDXL consists of a 3.5B-parameter base model and a 6.6B-parameter model ensemble pipeline. Originally posted to Hugging Face and shared here with permission from Stability AI.

ComfyUI fully supports SD1.x and offers many optimizations, such as re-executing only the parts of the workflow that change between executions. For some workflow examples and to see what ComfyUI can do, check out the ComfyUI Examples page. Drag the image onto the ComfyUI workspace, or drag & drop the .json file, and you will see the full workflow.

The readme file of the tutorial has been updated for SDXL 1.0. With Vlad hopefully releasing tomorrow, I'll just wait on SD.Next. With SDXL as the base model, the sky's the limit.

The workflow should generate images first with the base and then pass them to the refiner for further refinement. With ComfyUI it took 12 s and 1 min 30 s respectively without any optimization, at around 1.5 s/it. During renders in the official ComfyUI workflow for SDXL 0.9, the hand-off happens with roughly 35% of the noise left.

To refine a batch in A1111: go to img2img, choose batch, pick the refiner from the dropdown, and use the folder from step 1 as input and the folder from step 2 as output. A couple of the images have also been upscaled; when I run them through the 4x_NMKD-Siax_200k upscaler, for example, the… It is highly recommended to use a 2x upscaler in the refiner stage, as 4x will slow the refiner to a crawl on most systems, for no significant benefit (in my opinion).

SDXL CLIP encode nodes matter more if you intend to do the whole process using SDXL specifically. You are probably using ComfyUI, but in Automatic1111 there is hires fix. There is also the BNK_CLIPTextEncodeSDXLAdvanced node.
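The 2x-vs-4x recommendation comes down to pixel (and tile) counts growing with the square of the scale factor. A rough sketch of the cost, assuming a tiled upscaler such as Ultimate SD Upscale with a fixed tile size (the 1024-px tile here is an illustrative assumption, not the node's default):

```python
import math

def upscale_cost(width: int, height: int, scale: float, tile: int = 1024):
    """Output size and the number of tiles a tiled upscaler must diffuse."""
    out_w, out_h = int(width * scale), int(height * scale)
    tiles = math.ceil(out_w / tile) * math.ceil(out_h / tile)
    return (out_w, out_h), tiles

# 4x processes 4 times the pixels of 2x, hence roughly 4x the tiles/time.
size2, tiles2 = upscale_cost(1024, 1024, 2)  # ((2048, 2048), 4)
size4, tiles4 = upscale_cost(1024, 1024, 4)  # ((4096, 4096), 16)
```

Each tile is a full diffusion pass, which is why 4x "slows the refiner to a crawl" relative to 2x.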
Stability AI has announced SDXL 1.0, so here is how to use the model on Google Colab. (Update 2023/09/27: the instructions for other models have been switched to a Fooocus-based setup — BreakDomainXL v05g, blue pencil-XL-v0.x.)

Inpainting: when trying to execute, it complains about the missing file "sd_xl_refiner_0.9.safetensors". v4.1: support for fine-tuned SDXL models that don't require the refiner.

Even at a noise value of 0.2 the refiner changed quite a bit of the face. You can't convert the latent directly; instead you have to VAE-decode it to an image, then VAE-encode it back to a latent with the VAE from SDXL, and then upscale. Keep the refiner in the same folder as the base model, although with the refiner I can't go higher than 1024x1024 in img2img.

Best settings for Stable Diffusion XL 0.9 (ComfyUI): grab the SDXL 1.0 base checkpoint, use natural-language prompts, and have lots of fun with the base model. ComfyUI is a powerful and modular Stable Diffusion GUI with a graph/nodes interface. For samplers, try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive.

SDXL-OneClick-ComfyUI: in ComfyUI the base-to-refiner hand-off can be accomplished with the output of one KSampler node (using the SDXL base) leading directly into the input of another KSampler node (using the refiner). The idea is that you are using each model at the resolution it was trained for. The node is located just above the "SDXL Refiner" section. Testing was done with that 1/5 of the total steps being used in the upscaling.

A number of official and semi-official "workflows" for ComfyUI were released during the SDXL 0.9 period. I want a ComfyUI workflow that's compatible with SDXL with base model, refiner model, hi-res fix, and one LoRA, all in one go. ComfyUI may take some getting used to, mainly as it is a node-based platform requiring a certain level of familiarity with diffusion models. If the refiner doesn't know the LoRA concept, any changes it makes might just degrade the results. This compares 0.9 and Stable Diffusion 1.5. It generates thumbnails by decoding them using the SD1.x… I just uploaded the new version of my workflow.
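The two-KSampler chain described above can also be written out as an API-format prompt graph. A hand-built sketch of just the relevant fragment — node class names and the `/prompt` endpoint follow ComfyUI's API prompt format as I understand it, the checkpoint filenames and 20/25 step split are placeholders, and the CLIP encode, latent source, and VAE nodes are omitted for brevity:

```python
import json

# Minimal two-sampler fragment of a ComfyUI API prompt graph: a dict keyed
# by node id, where inputs written as [node_id, output_index] are links.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_refiner_1.0.safetensors"}},
    "3": {"class_type": "KSamplerAdvanced",  # base pass: steps 0-20 of 25
          "inputs": {"model": ["1", 0], "steps": 25, "cfg": 7.0,
                     "start_at_step": 0, "end_at_step": 20,
                     "add_noise": "enable",
                     "return_with_leftover_noise": "enable"}},
    "4": {"class_type": "KSamplerAdvanced",  # refiner finishes steps 20-25
          "inputs": {"model": ["2", 0], "steps": 25, "cfg": 7.0,
                     "start_at_step": 20, "end_at_step": 25,
                     "add_noise": "disable",  # second sampler must not add noise
                     "latent_image": ["3", 0]}},
}

# The body you would POST to ComfyUI's /prompt endpoint.
payload = json.dumps({"prompt": graph})
```

The refiner sampler consumes the base sampler's leftover-noise latent, which is exactly the "one KSampler leading into another" wiring done in the node editor.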
(These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail is preserved.) Thanks for this — a good comparison of the SD 1.5 base model vs. later iterations. SD 1.5 + SDXL Refiner Workflow : StableDiffusion. How SDXL 1.0 performs. Comfyroll. Input sources: …the second KSampler must not add noise.

SDXL 1.0 is finally released for download; I'm sharing how to deploy it locally right away, and at the end I made some comparisons with 1.5. 1-Click Auto Installer Script For ComfyUI (latest) & Manager On RunPod. The disadvantage is that ComfyUI looks much more complicated than its alternatives. SD 1.5 + SDXL Base+Refiner is for experimentation only. sdxl_v1.0_comfyui_colab (1024x1024 model), please use with… Drag and drop the *.json, or run python launch.py --xformers.

Especially on faces. It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something. 🧨 Diffusers examples. The following images can be loaded in ComfyUI to get the full workflow. Hires fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img. We name the file "canny-sdxl-1.0…". An all-in-one workflow. Look at the leaf on the bottom of the flower pic in both the refiner and non-refiner versions.

Step 1: Install ComfyUI. One of the most powerful features of ComfyUI is that within seconds you can load an appropriate workflow for the task at hand. Run SDXL 1.0 in both Automatic1111 and ComfyUI for free. Detailed install instructions can be found here: [link]. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

20:57 How to use LoRAs with SDXL. It's down to the devs of AUTO1111 to implement it. 1024 — single image, 25 base steps, no refiner. 1024 — single image, 20 base steps + 5 refiner steps: everything is better except the lapels. Image metadata is saved, but I'm running Vlad's SDNext. SDXL Base 1.0.
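That "metadata" lives inside the image itself, as text chunks in the PNG file. A minimal sketch of pulling text chunks out of a PNG by hand — the `workflow` keyword is what current ComfyUI builds appear to use, but treat that as an assumption, and note that CRCs and the compressed zTXt/iTXt variants are not handled here:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    chunks, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword and text separated by a NUL byte
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk with its length header and CRC trailer."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a tiny stand-in PNG with an embedded workflow, then read it back.
workflow = json.dumps({"nodes": []})
png = (PNG_SIG
       + make_chunk(b"tEXt", b"workflow\x00" + workflow.encode())
       + make_chunk(b"IEND", b""))
embedded = read_text_chunks(png)["workflow"]
```

This is why drag-and-drop works: the full node graph travels with every render, so the Load button only has to parse it back out.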
It would be neat to extend the SDXL DreamBooth LoRA script with an example of how to train the refiner. custom_nodes/ComfyUI-Impact-Pack/impact_subpack/impact… Download the SDXL 1.0 base checkpoint. But if SDXL wants an 11-fingered hand, the refiner gives up. Why so slow? In ComfyUI the speed was approximately 2-3 it/s for a 1024x1024 image.

Installation: FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), and Edit DetailerPipe (SDXL) are pipe functions used in Detailer for utilizing the refiner model of SDXL. SDXL ComfyUI ULTIMATE Workflow — everything you need to generate amazing images, packed full of useful features that you can enable and disable on the fly (sdxl, sdxl lora, sdxl inpainting, comfyui). This produces the image at the bottom right.

ComfyUI for Stable Diffusion Tutorial (Basics, SDXL & Refiner Workflows), Control+Alt+AI — a comprehensive tutorial on understanding the basics. Settings: Width: 896; Height: 1152; CFG Scale: 7; Steps: 30; Sampler: DPM++ 2M Karras; Prompt: as above. Install SDXL (directory: models/checkpoints), install a custom SD 1.5 model if you like, then install or update the following custom nodes.

Stability is proud to announce the release of SDXL 1.0: sd_xl_base_1.0.safetensors and sd_xl_refiner_1.0.safetensors. A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial VN; all the art is made with ComfyUI. You can add "pixel art" to the prompt if your outputs aren't pixel art. This ^^ — for a LoRA it does an amazing job.

After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I've ended up with this basic (no upscaling) 2-stage (base + refiner) workflow. It works pretty well for me — I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. I'm sure as time passes there will be additional releases of the 0.9 base and refiner models.
These are what these ports map to in the template we're using: [Port 3000] AUTOMATIC1111's Stable Diffusion Web UI (for generating images) [Port 3010] Kohya SS (for training) [Port 3010] ComfyUI (optional, for generating images).

A detailed look at a stable SDXL ComfyUI workflow — the internal AI-art tool I use at Stability: next, we need to load our SDXL base model (recolor the node if you like). Once our base model is loaded, we also need to load a refiner, but we'll deal with that later — no rush.

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9. Running the 1.0 refiner alone on the base picture doesn't yield good results. SDXL 1.0 is the highly anticipated model in the image-generation series! After you all have been tinkering away with randomized sets of models on our Discord bot since early May, we've finally reached our winning crowned candidate together for the release of SDXL 1.0.

The latent output from step 1 is also fed into img2img using the same prompt, but now using the refiner. For a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or other resolutions with the same amount of pixels but a different aspect ratio. This uses more steps, has less coherence, and also skips several important factors in between; I recommend you do not use the same text encoders as 1.x.

How To Use Stable Diffusion XL 1.0 + LoRA + Refiner with ComfyUI + Google Colab for FREE — exciting news! Introducing Stable Diffusion XL 1.0. Settings: VAE: 0.9 VAE; image size: 1344x768 px; sampler: DPM++ 2S Ancestral; scheduler: Karras; steps: 70; CFG scale: 10; aesthetic score: 6. A config file for ComfyUI to test SDXL 0.9. Install it, restart ComfyUI, click "Manager" then "Install missing custom nodes", restart again, and it should work. ComfyUI also has faster startup and is better at handling VRAM, so you can generate at higher resolutions. 17:38 How to use inpainting with SDXL with ComfyUI. In the ComfyUI SDXL workflow example, the refiner is an integral part of the generation process.
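The "same amount of pixels, different aspect ratio" rule can be computed directly. A small sketch that picks a width/height near the 1024x1024 pixel budget — snapping dimensions to multiples of 64 is a common convention for SDXL resolutions, so treat that as an assumption:

```python
def sdxl_resolution(aspect: float, pixel_budget: int = 1024 * 1024, snap: int = 64):
    """Pick a (width, height) near `pixel_budget` total pixels.

    `aspect` is width / height; dimensions are snapped to multiples of `snap`.
    """
    height = (pixel_budget / aspect) ** 0.5
    width = height * aspect
    snap_to = lambda v: max(snap, round(v / snap) * snap)
    return snap_to(width), snap_to(height)

square = sdxl_resolution(1.0)     # (1024, 1024), the trained resolution
wide = sdxl_resolution(16 / 9)    # (1344, 768), the 1344x768 size quoted above
```

Keeping the pixel count constant while varying the aspect ratio is what keeps the model inside the composition range it was trained on.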
I will use the refiner_v1.x workflow JSON published on the site below. The prompts aren't optimized or very sleek. As soon as you go outside the one-megapixel range, the model is unable to understand the composition. But these improvements do come at a cost: SDXL 1.0 demands noticeably more resources than SD1.5 models.

Best settings for Stable Diffusion XL 0.9: observe the following workflow (which you can download from comfyanonymous and implement by simply dragging the image into your ComfyUI workspace). There are two ways to use the refiner: use the base and refiner models together to produce a refined image, or… Got playing with SDXL and wow! It's as good as they say. For reference, I'm appending all available styles to this question. Part 3 (this post) — we…

Right now, I generate an image with the SDXL base + refiner models with the following settings on macOS 13.x. You will need a powerful Nvidia GPU or Google Colab to generate pictures with ComfyUI. AnimateDiff in ComfyUI Tutorial. Update ComfyUI. You must have the SDXL base and SDXL refiner checkpoints. See "Refinement Stage" in section 2 of the report.

If the non-refiner pass works fine, the refiner checkpoint is most likely the corrupted one. Hires… +Use the SDXL refiner as img2img and feed it your pictures. Double-click an empty space to search nodes and type "sdxl"; the CLIP nodes for the base and refiner should appear — use both accordingly. On the ComfyUI GitHub, find the SDXL examples and download the image(s). With the 0.9 base+refiner, my system would freeze, and render times would extend up to 5 minutes for a single render.
This workflow is meticulously fine-tuned to accommodate LoRA and ControlNet inputs, and demonstrates interactions with embeddings as well. ComfyUI Manager: a plugin for ComfyUI that helps detect and install missing plugins. SDXL 1.0 ComfyUI workflows from beginner to advanced. These are examples demonstrating how to do img2img. It will crash eventually — possibly RAM, though it doesn't take the VM with it — but as a comparison, that one "works". Img2Img.

…x for ComfyUI; Table of Contents; Version 4.x. AI Art with ComfyUI and Stable Diffusion SDXL — Day Zero Basics For an Automatic1111 User. SEGS manipulation nodes. FWIW, the latest ComfyUI does launch and renders some images with SDXL on my EC2 instance.

After an entire weekend reviewing the material, I think (I hope!) I got the implementation right: as the title says, I included the ControlNet XL OpenPose and FaceDefiner models. But actually I didn't hear anything about the training of the refiner.

SDXL Workflow for ComfyBox — the power of SDXL in ComfyUI with a better UI that hides the node graph. Resource | Update: I recently discovered ComfyBox, a UI frontend for ComfyUI. Explore SDXL 1.0 with ComfyUI's Ultimate SD Upscale custom node in this illuminating tutorial. You can disable this in the notebook settings.

Yesterday I came across a very interesting workflow that uses the SDXL base model with any SD 1.5 model: SDXL base → SDXL refiner → hires fix/img2img (using Juggernaut as the model, 0.x denoise). In this episode we're opening a new topic — another way of working with SD, the node-based ComfyUI. Regular viewers of the channel know I've always used the web UI for demos and explanations. Updating ControlNet. After inputting your text prompt and choosing the image settings (e.g. resolution, CFG scale)… It's fast.

I discovered it through an X post (aka Twitter) shared by makeitrad and was keen to explore what was available. I mean, it's also possible to use it like that, but the proper, intended way to use the refiner is a two-step text-to-image pass: SDXL includes a refiner model specialized in denoising low-noise-stage images to generate higher-quality images from the base model. It's a LoRA for noise offset, not quite contrast. Step 1: Download SDXL v1.0.
20:43 How to use the SDXL refiner as the base model. This checkpoint recommends a VAE; download it and place it in the VAE folder. Step 6: Using the SDXL refiner. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Basic setup for SDXL 1.0 Base and Refiner 1.0. I'm probably messing something up — I'm still new to this — but you connect the model and CLIP output nodes of the checkpoint loader to the… json: sdxl_v0.x. SDXL 1.0 ComfyUI. I upscaled it to a resolution of 10240x6144 px for us to examine the results.

Today let's go over the more advanced node-flow logic of SDXL in ComfyUI: first, style control; second, how to connect the base and refiner models; third, regional prompt control; and fourth, regional control of multi-pass sampling. With ComfyUI node flows, understand one and you understand them all — as long as the logic is correct, you can wire things however you like. So this video doesn't go into every detail; I only cover the logic and the key points of building the graph, since anything finer would be excessive.

thibaud_xl_openpose also works. This is the best balance I could find between image size (1024x720), models, steps (10 base + 5 refiner), and samplers/schedulers, so we can use SDXL on our laptops without those expensive, bulky desktop GPUs. I have updated the workflow submitted last week, cleaning up the layout a bit and adding many functions I wanted to learn better. At least 8 GB of VRAM is recommended. With a 1080x720 resolution and specific samplers/schedulers, I managed to get a good balance and good image quality; the first image is with the base model alone. In this ComfyUI tutorial we will quickly cover… seed: 640271075062843.

To make full use of SDXL, you'll need to load in both models, run the base model starting from an empty latent image, and then run the refiner on the base model's output to improve detail. Nevertheless, its default settings are comparable. And to run the refiner model (in blue), I copy the… #ComfyUI is a node-based, powerful, and modular Stable Diffusion GUI and backend. ComfyUI: an open-source workflow engine specialized in operating state-of-the-art AI models for a number of use cases, like text-to-image or image-to-image transformations.
SEGSDetailer — performs detailed work on SEGS without pasting it back onto the original image. 🚀 The LCM update brings SDXL and SSD-1B to the game 🎮. Prompt: "photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, medieval armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail".

My 2-stage (base + refiner) workflows for SDXL 1.0. RTX 3060 with 12 GB VRAM and 32 GB of system RAM here. Workflow 1 — "Complejo" — for base+refiner and upscaling. SD 1.5 + SDXL Refiner Workflow — the beauty of this approach is that these models can be combined in any sequence! You could generate an image with SD 1.5… (just search YouTube for "sdxl 0.9"). Use the "Load" button on the menu. Here's the guide to running SDXL with ComfyUI. Voldy still has to implement that properly, last I checked. SDXL 1.0 links; 0.9 VAE; LoRAs.

It was already better than what 0.9 was yielding. SDXL 0.9: the base model was trained on a variety of aspect ratios, on images with a resolution of 1024². Hypernetworks. Up to 70% speedup. The test was done in ComfyUI with a fairly simple workflow so as not to overcomplicate things. My advice: have a go and try it out with ComfyUI — it's unsupported, but it's likely to be the first UI that works with SDXL when it fully drops on the 18th. AnimateDiff-SDXL support, with the corresponding model. Commit date: 2023-08-11.

I was having very poor performance running SDXL locally in ComfyUI, to the point where it was basically unusable. Then I started to get curious and followed guides using ComfyUI and SDXL 0.9.
To use the Refiner, you must enable it in the "Functions" section, and you must set the "refiner_start" parameter to a value between 0 and 1. ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x) — ComfyUI is hard. Fine-tuned SDXL (or just the SDXL base): all of these images are generated with just the SDXL base model, or with a fine-tuned SDXL model that requires no refiner, with the 0.9 safetensors installed. Click "Manager" in ComfyUI, then "Install missing custom nodes". A 0.9 safetensors + LoRA workflow + refiner.

A switch to choose between the SDXL Base+Refiner models and the ReVision model; a switch to activate or bypass the Detailer, the Upscaler, or both; and a (simple) visual prompt builder. To configure it, start from the orange section called Control Panel. June 22, 2023. These files are placed in the folder ComfyUI/models/checkpoints, as requested. It will output this resolution to the bus. Google Colab has been updated as well for ComfyUI and SDXL 1.0.

I'm creating some cool images with some SD1.5 models. I had experienced this too — I didn't know the checkpoint was corrupted, but it actually was; perhaps download directly into the checkpoint folder. Do you have ComfyUI Manager? Andy Lau's face doesn't need any fix (did he??).

Searge-SDXL: EVOLVED v4.x — will load images in two ways: (1) direct load from HDD, (2) load from a folder (picks the next image when one is generated). Prediffusion — … Workflow 2 — "Simple" — easy to use, with 4K upscaling. About 5 GB of VRAM with the refiner swapped in too; use the --medvram-sdxl flag when starting. SDXL Prompt Styler Advanced: a new node for more elaborate workflows, with linguistic and supportive terms. The refiner is trained specifically to do the last 20% of the timesteps, so the idea was to not waste time running it for longer.
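Prompt-styler nodes like the one above boil down to template substitution: the style preset carries a `{prompt}` placeholder that the user's text is dropped into. A minimal sketch — the template contents here are illustrative, not the node's shipped presets:

```python
# Each style is a template with a {prompt} placeholder (a common convention
# for styler nodes; the actual presets ship as JSON with the node).
STYLES = {
    "base": "{prompt}",
    "cinematic": "cinematic still of {prompt}, shallow depth of field, film grain",
    "pixel-art": "pixel art of {prompt}, low-res, 16-bit palette",
}

def apply_style(style: str, prompt: str) -> str:
    """Substitute the user's prompt into the chosen style template."""
    return STYLES[style].replace("{prompt}", prompt)

styled = apply_style("pixel-art", "a male warrior in medieval armor")
```

The same substitution would be applied to the negative prompt, which is where a split-behavior selector like the one mentioned above comes in.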
But if I run the base model (creating some images with it) without activating that extension, or simply forget to select the refiner model and activate it LATER, it very likely goes OOM (out of memory) when generating images. Overall, all I can see is downsides to their OpenCLIP model being included at all; the issue with the refiner is simply Stability's OpenCLIP model. A selector to change the split behavior of the negative prompt.

Step 3: Download the SDXL ControlNet models. This is great — now all we need is an equivalent for when one wants to switch to another model with no refiner. Sometimes I will update the workflow; all changes will be on the same link. To use the refiner, you must enable it in the "Functions" section, and you must set the "End at Step / Start at Step" switch to 2 in the "Parameters" section. …and send the latent to SDXL Base. It has the SDXL base and refiner sampling nodes, along with image upscaling. There are settings and scenarios that take masses of manual clicking in other UIs. All images were created using ComfyUI + SDXL 0.9. Those are two different models.

Download the SDXL models. Before you can use this workflow, you need to have ComfyUI installed. An overview of SDXL 1.0: (1) SDXL 1.0… A CheckpointLoaderSimple node to load the SDXL refiner. json: sdxl_v1.x. u/Entrypointjip: the two-model setup that SDXL uses means the base model is good at generating original images from 100% noise, and the refiner is good at adding detail at low noise levels. Hand-FaceRefiner. markemicek/ComfyUI-SDXL-Workflow on GitHub. It will destroy the likeness, because the LoRA isn't interfering with the latent space anymore. Part 4 — we intend to add ControlNets, upscaling, LoRAs, and other custom additions. SDXL 1.0, now available via GitHub. In the case you want to generate an image in 30 steps… This was the base for my workflow. You can disable this in the notebook settings.
Save the .json and add it to the ComfyUI/web folder. In this post, I will describe the base installation and all the optional assets I use. Using ComfyUI plugins. AP Workflow v3 includes the following functions: SDXL Base+Refiner, based on Sytan's SDXL 1.0 workflow. Like, which denoise strength to use when switching to the refiner in img2img, etc. — can you, and should you, use it with the refiner?

Using SDXL 1.0 (save the latent to avoid this): do the opposite — disable the nodes for the base model and enable the refiner-model nodes. This is more of an experimentation workflow than one that will produce amazing, ultra-realistic images. Hires fix will act as a refiner that will still use the LoRA. Installing ComfyUI and SDXL 0.9 on Google Colab. Part 3 (link) — we added the refiner for the full SDXL process.

Traditionally, working with SDXL required the use of two separate KSamplers — one for the base model and another for the refiner model. Searge-SDXL: EVOLVED v4.x. SDXL 1.0 model files. ComfyUI and SDXL. An EmptyLatentImage node specifying the image size, consistent with the previous CLIP nodes. Continuing with the car analogy, ComfyUI vs. Auto1111 is like driving manual shift vs. automatic (no pun intended). The difference is subtle, but noticeable. The result is a hybrid SDXL+SD1.5 image. You will need ComfyUI and some custom nodes from here and here. Using the SDXL refiner in AUTOMATIC1111.

We also need to do some processing on the CLIP output from SDXL. Stability AI has released Stable… SDXL places very heavy emphasis on the beginning of the prompt, so put your main keywords first.
It didn't work out. Is there an explanation for how to use the refiner in ComfyUI? You can just use someone else's 0.9 workflow. This creates a very basic image from a simple prompt and sends it on as a source. Just wait till SDXL-retrained models start arriving.

Advanced Stable Diffusion tutorial, part 3: a deep dive into ComfyUI and a detailed photo-to-manga workflow, plus the essential (Chinese-language) plugins — a systematic ComfyUI tutorial, shipped as a Simplified-Chinese all-in-one package with upgraded cloud deployment and lots of preinstalled module groups for one-click startup.

I am very interested in shifting from Automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on CivitAI — can anyone recommend the best source for ComfyUI templates? Is there a good set for doing the standard tasks from Automatic1111? For me it's just very inconsistent.

ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. I also used the refiner model for all the tests, even though some SDXL models don't require a refiner. If the noise reduction is set higher, it tends to distort or ruin the original image.