ComfyUI KSampler Advanced: notes and tips collected from Reddit.

The KSampler is the core of any workflow and can be used to perform text-to-image and image-to-image generation tasks. The KSampler Advanced node aims to offer more sophisticated options for generating samples from a model, improving upon the basic KSampler functionality.

The model and the denoise strength on the KSampler make a lot of difference. The lower the steps, the closer to the original image your output will be.

Node-RED (an event-driven, node-based programming language) has this kind of functionality, so it could definitely work in a node-based environment such as ComfyUI. Having a computer science background, I feel that the potential for ComfyUI is huge if some basic branching and looping components are added, to unleash the creativity of developers.

On the Fooocus-like workflow: perhaps I can give you a link to the repo I forked today of "Advanced Ksampler" from the Reddit post about the Fooocus-like workflow from nine months ago, along with the basic fixes I made in the few hours I had to tinker with it. The repo hasn't been updated in a while, and the forks don't seem to work either. Today the Fooocus node is still available through ComfyUI Manager, but after installing it the node is marked as "unloaded". I knew that was because of a core change in Comfy, but thought a new Fooocus node update might come soon. It didn't happen.

I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow; any tips would be appreciated. (AMD or Nvidia?) One suggestion: maybe the latent count is high (a lot of frames, i.e. a long video). To lower that number, use "select_every_nth" on the "Load Video" node from ComfyUI-VideoHelperSuite, and play with the "force_rate" setting to keep duplicated frames to a minimum; then use "FILM VFI" to bring the frame rate back up.

I was able to get a far more decent image, without all the blue funk, by increasing the steps to 40 or 50 and completely deleting the boilerplate negative prompt. The workflow goes through a KSampler (Advanced); it is moderately affected by the last KSampler's settings, but I think I'm moving in the right direction.

A common failure mode: when third-party node suites change and you update them, the existing nodes stop working because the suites don't preserve backward compatibility. Fetch Updates in the ComfyUI Manager to be sure that you have the latest version.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

To move nodes between workflows, just open one workflow, Ctrl+A, Ctrl+C, then open your destination workflow and Ctrl+V. You can copy and paste across ComfyUI sessions: if you copy nodes from one workflow, they stay in memory to paste into a new one. You can also Ctrl+drag select and right-click "save as template", and if you Ctrl+C and then Ctrl+Shift+V, the pasted node(s) will also copy their inputs.

Good SD1.5 samplers: dpm_2_ancestral, dpmpp_sde_gpu, dpmpp_2m, dpmpp_2m_sde (or dpmpp_3m_sde). The ddim_uniform scheduler is really special with img2img Turbo; I have my best results with it.

One thing to note is that ComfyUI separates the sampler (e.g., Euler a) from the scheduler (e.g., Karras), where other UIs combine the two into a single name. Here is a table of samplers and schedulers with their internal name and corresponding "nice name".
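A partial version of that mapping, written from memory as a Python dict; verify the names against the dropdowns in your own install:

    # Partial mapping from A1111-style "nice names" to ComfyUI sampler_name ids.
    NICE_NAME_TO_COMFY = {
        "Euler": "euler",
        "Euler a": "euler_ancestral",
        "Heun": "heun",
        "DDIM": "ddim",
        "DPM2": "dpm_2",
        "DPM2 a": "dpm_2_ancestral",
        "DPM++ 2S a": "dpmpp_2s_ancestral",
        "DPM++ 2M": "dpmpp_2m",
        "DPM++ SDE": "dpmpp_sde",
        "DPM++ 2M SDE": "dpmpp_2m_sde",
        "DPM++ 3M SDE": "dpmpp_3m_sde",
        "UniPC": "uni_pc",
    }

    # ComfyUI lists schedulers separately; A1111 folds them into the sampler
    # name instead (e.g. "DPM++ 2M Karras" = dpmpp_2m + karras).
    COMFY_SCHEDULERS = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform"]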
If you want consistent clothes from image to image, it really helps to set up a Reference Only latent input to your main KSampler instead of a blank latent image. In the Reference Only load image node, you put one of your existing images.

From the original post: I use the advanced KSampler with cfg 8.0, Karras scheduler, 24 steps, returning with leftover noise for further refinement.

You can try the ModelMergeSimple node: it allows you to put in two models and then feed the merged model into a single KSampler.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

I have a setup from Comfy Workflows that I'm learning from, and I'm missing the Power KSampler Advanced (PPF Noise) node.

The ComfyUI workflow uses the latent upscaler (nearest-exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9, but it looks like I need to switch my upscaling method.

The pipeline for Stable Diffusion with a refiner is to do some steps in the base model and finish with a few steps using the refiner, so you need to say at which step the base will stop and the refiner will start. For example: if you set 30 total steps, you need to tell the base's KSampler to start at 0, stop at 25, and return with leftover noise, while the refiner's KSampler picks up from step 25. Refiners should have at most half the steps that the generation has, and you can shave ten seconds off by reducing the number of refinement steps without much loss of quality. Alternatively, upscale the refiner result, or don't use the refiner at all. A sketch of this bookkeeping follows.
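A minimal Python sketch of that split; the function name and the one-sixth refiner fraction are mine, and the dict keys simply mirror the KSampler (Advanced) widgets described in this post:

    def split_steps(total_steps: int, refiner_fraction: float = 1 / 6):
        """Split one step budget between base and refiner KSampler (Advanced) nodes."""
        handoff = total_steps - round(total_steps * refiner_fraction)
        base = {
            "add_noise": "enable",                    # base starts from fresh noise
            "steps": total_steps,
            "start_at_step": 0,
            "end_at_step": handoff,
            "return_with_leftover_noise": "enable",   # hand a partially denoised latent on
        }
        refiner = {
            "add_noise": "disable",                   # noise was already added by the base
            "steps": total_steps,
            "start_at_step": handoff,
            "end_at_step": total_steps,
            "return_with_leftover_noise": "disable",  # fully denoise for the VAE decode
        }
        return base, refiner

    base, refiner = split_steps(30)  # base covers steps 0-25, refiner steps 25-30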
Automating multiple character images. Some people were asking how steps work and how to use two KSamplers together. The plain KSampler does not have a start-at-step option; you need multiple KSampler (Advanced) nodes, which have start_at_step and end_at_step parameters. This allows for the separation of a single sampling process into multiple nodes: you essentially break the KSampler up into several nodes and chain the samplers, and you can hang a VAE Decode and Save Image node off each stage (assuming you are using the default nodes; extension nodes will work slightly differently).

While the KSampler node always adds noise to the latent and then completely denoises it, the KSampler Advanced node provides extra settings to control this behavior; interestingly, advanced samplers can be set not to apply any noise to the incoming latent. A typical two-sampler chain: first one, add noise, steps X, start step Y, end step Z, return leftover noise; second one, no added noise, steps X, start step Z, end step >= X, no leftover noise. Keep the step count constant across both nodes, run a few experiments using the same sampler, have both KSampler nodes be "Advanced", and be explicit about where each starts and stops over how many steps. Play around with it; it basically controls denoise by the fraction at which you split the run. One posted example: Advanced Sampler 1, start at step 1, end at 20, 30 steps total; Advanced Sampler 2, start at 21, end at 25, 30 steps total.

If you want to inspect an intermediate result as an actual image, you can add one more KSampler (Advanced) with the same steps value, start_at_step equal to the corresponding sampler's end_at_step, and end_at_step just one higher (like 20/21 or 10/11) so it does only a single step, with return_with_leftover_noise and add_noise both disabled.

For plain KSamplers you can just pass the latent output of one into another; just make sure to set the denoise lower in the second one. I've found very good results doing 15-20 steps with SDXL, which produces a somewhat rough image, then 20 more steps at a reduced denoise to refine it. Each KSampler can refine using whatever checkpoint you choose, too.

The same trick covers applying two LoRAs at different steps of generation. Multiple-LoRA references for Comfy are simply non-existent, not even on YouTube, where a thousand hours of video are uploaded every second. I've trained two objects as LoRAs and I'd like to make a hybrid of the two; loading them both into one KSampler gives no control over their order of appearance. This was incredibly easy to set up in Auto1111 with the Composable LoRA and Latent Couple extensions, but it seems an impossible mission in Comfy. Like they said, do two advanced KSampler nodes and feed a different prompt into each one: for example, have a CarLora generate 50% of the steps and then swap to a TankLora for the rest. Pass the latent over, and just use a model noodle without the LoRA on the second sampler when you want it to drop out. Afterwards you can reuse the same latent and tweak start and end to manipulate the result. See the sketch below.
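That handoff written out as widget values for the two KSampler (Advanced) nodes; the dicts are a descriptive sketch, not an executable ComfyUI API, and the halfway split point is an arbitrary 50% choice:

    TOTAL = 30

    stage_one = {                                  # model path: checkpoint + CarLora
        "add_noise": "enable",
        "steps": TOTAL,
        "start_at_step": 0,
        "end_at_step": TOTAL // 2,                 # 50% of the steps
        "return_with_leftover_noise": "enable",    # pass the half-finished latent on
    }

    stage_two = {                                  # model path: checkpoint + TankLora
        "add_noise": "disable",                    # noise was already added upstream
        "steps": TOTAL,
        "start_at_step": TOTAL // 2,
        "end_at_step": TOTAL,
        "return_with_leftover_noise": "disable",   # finish cleanly for the VAE decode
    }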
For normal img2img, the choice of scheduler and sampler makes a huge difference, and it is quite counter-intuitive. Mixing samplers across a split can also be a problem: setting the first sampler to do 12 steps of Euler and the second to run steps 12 to 20 of ddim with Karras might be an issue. With SDXL I often have the most accurate results with ancestral samplers; try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive.

The KSampler node is the same node for both txt2img and img2img. For txt2img you send it an empty latent image, which is what the EmptyLatentImage node generates. Well, I got into img2img last week, which made me switch back to the regular KSampler for simplified denoising, and then I got into Turbo just to see how fast it was.

You can also use the Unsampler node that comes with ComfyUI_Noise, together with a KSampler Advanced node, to rewind the image some number of steps: you upload an image -> unsample -> KSampler Advanced -> the same recreation of the original image.

Prompt weights also differ between UIs: (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111, though there are some custom nodes/extensions to make generation between the two interfaces compatible.

Need your help with something: the image created is flat, devoid of details and nuances, as if it were cut out or vector-based. First day with ComfyUI, and I am getting some pretty nice results, similar to what I was creating in A1111 (my UI, with renamed panels).

Upscaler roundup and comparison: I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz. Here are some timings on my 4090: DeepShrink, direct to 2048x2048 with LCM at 16 steps, takes 25 seconds; the Iterative Mixing KSampler (a 4x upscale), going from 512x512 to 2048x2048 across three phases, takes 47 seconds.

Video workflows: Txt/Img2Vid + Upscale/Interpolation is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. Vid2QR2Vid: you can see another powerful and creative use of ControlNet by Fictiverse here. Updated TikTok dance workflow (link in the comments): a basic workflow used to create AI TikTok dance videos, with the action of the AI avatar driven by an actual dance video; you'll need a checkpoint model (your favourite SD1.5 checkpoint), a LoRA model, a ControlNet model, and input image frames.

Performance: ComfyUI is sometimes not using GPU1 (RTX 3080 Ti Laptop); every now and then it uses GPU0 (Intel Iris Xe) and the CPU instead. It's very slow on my setup (12700k + 4070), but maybe it's my fault; the KSampler seems to run on the CPU. Edit: adding the following arguments to the startup .json made it switch to GPU again: "--lowvram --preview-method auto --use-split-cross-attention". Working great :) Another report: all later generations run fast, but with the slightest change in the prompt the run begins at least 10x slower; this usually happens at the first generation with a new prompt even though the model (SDXL with refiner) is already loaded, and it doesn't happen every time. When I'm switching checkpoints, generation also takes much longer, and sometimes if I queue different models one after another the second model takes longer too. I was wondering if anyone else has faced this.

Finally, note that the noise itself is generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU by default, which makes the same seed give different results in the two UIs.
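You can see that device effect directly in PyTorch; this standalone snippet is mine, not ComfyUI's code, but the underlying fact holds: the CPU and CUDA generators are different algorithms, so the same seed yields different noise.

    import torch

    # Same seed on two devices produces two different noise tensors.
    g_cpu = torch.Generator(device="cpu").manual_seed(42)
    noise_cpu = torch.randn(1, 4, 64, 64, generator=g_cpu)

    if torch.cuda.is_available():
        g_gpu = torch.Generator(device="cuda").manual_seed(42)
        noise_gpu = torch.randn(1, 4, 64, 64, generator=g_gpu, device="cuda")
        print(torch.allclose(noise_cpu, noise_gpu.cpu()))  # False: different RNG streams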
My current workflow runs an image generation pass, then three refinement passes (with latent or pixel upscaling in between). So I want to place the latent hires-fix upscale before the refiner, but the Advanced KSamplers do not have a denoise option and they require start and end steps.

Get caught up on the tutorial series: Part 1: Stable Diffusion SDXL 1.0 with ComfyUI. Part 2: SDXL with Offset Example LoRA in ComfyUI for Windows. Part 3: CLIPSeg with SDXL in ComfyUI. Part 4: Two Text Prompts (Text Encoders) in SDXL 1.0. Part 5: Scale and Composite Latents with SDXL. Part 6: SDXL 1.0 with SDXL-ControlNet: Canny. Heya, part 5 of my series of step-by-step tutorials is out; it covers improving your advanced KSampler setup and the use of prediffusion with an uncooperative prompt to get more out of your workflow. It's a little rambling; I like to go in depth with things, and I like to explain why. It inspired me to try and make this video.

Setup: install the ComfyUI dependencies (if you have another Stable Diffusion UI you might be able to reuse them) and launch ComfyUI by running python main.py. Note: remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation; checkpoints go in the models/checkpoints folder. Download the first image, then drag and drop it on your ComfyUI web interface; it'll load a basic SDXL workflow that includes a bunch of notes explaining things. If you have the SDXL 1.0 Base and Refiner models downloaded and saved in the right place, it should work out of the box. Step 3: update ComfyUI. Step 4: launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: drag and drop a sample image into ComfyUI. Step 6: the fun begins; if the queue didn't start automatically, press Queue Prompt.

Install ComfyUI Manager; now you can manage custom nodes within the app. ComfyUI also has a standalone beta build which runs on Python 3.11. Next, install RGThree's custom node pack from the Manager. This pack includes a node called Power Prompt, which replaces your positive and negative prompts in a Comfy workflow and allows you to put LoRAs and embeddings directly in the prompt. These more advanced features can easily be added to THE LAB, but you need to download the relevant custom nodes and models first, of course; there are lots of pieces to combine with other workflows.

UI tips: if you double click and start typing "seed", you'll find a couple of seed generation nodes to use. If you drag a noodle off an input, it will offer node options that have that variable type as an output. You can right click on a node and change many of its widgets to inputs. I no longer use Automatic unless I want to play around with Temporal Kit; the images look better than most 1.5-based models, with greater detail, in SDXL 0.9.

On GUIs in general: NiceGUI is an open-source Python library to write graphical user interfaces which run in the browser. It has a very gentle learning curve while still offering the option for advanced customizations, and it follows a backend-first philosophy: it handles all the web development details so you can focus on writing Python code. The reply: you have confused the GUI toolkit with the basic requirements for text-image; the RunwayML template for text-to-image inference has no mention of a denoise value requirement, and while I could add any flavor of ComfyUI KSampler node, for vanilla text-to-image denoise is void.

Were the two KSamplers needed? I feel that I could have used a bunch of ConditioningCombine nodes so everything leads to one node that goes to the KSampler. Is that the right way of doing it? Yes.

I've been experimenting with the KSampler and KSampler Advanced nodes and I can't grasp the relationship between denoise (KSampler) and start_at_step (KSampler Advanced); if I increase the start_at_step, the output doesn't stay close to the original. In Automatic1111 we could control how much to change the source image by setting the denoising strength; the denoise parameter in KSampler simplifies the same calculation, and with the right values you can use KSampler Advanced to replicate the behavior of KSampler. So given steps=20 and denoise=0.75, KSampler Advanced should produce the same result with steps=20, start_at_step=5, end_at_step=20, because (20 - 5) / 20 = 0.75.
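That equivalence as two small helper functions; the names are mine, but the arithmetic is exactly the (20 - 5) / 20 example above:

    def denoise_to_start_step(steps: int, denoise: float) -> int:
        """Express a KSampler denoise value as a KSampler (Advanced) start_at_step."""
        return round(steps * (1.0 - denoise))

    def start_step_to_denoise(steps: int, start_at_step: int) -> float:
        return (steps - start_at_step) / steps

    assert denoise_to_start_step(20, 0.75) == 5
    assert start_step_to_denoise(20, 5) == 0.75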
One error some users hit: "RuntimeError: CUDA error: operation not supported. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions." It is caused by the latest update.

ComfyUI and ControlNet issues: I have been trying to make the transition to ComfyUI but have had an issue getting ControlNet working; I have the right files but I cannot get them to show up in ComfyUI. For the VAE I'm using ForMeichiDark_ClearVAE_V2.

Suddenly ComfyUI won't run my workflow. [Solution] comfyanonymous over on the ComfyUI GitHub page said to remove the FreeU_Advanced node; remove the node from the workflow and re-add it. This worked for me, though I'm not sure what the issue was, because FreeU_Advanced was last updated last month (Jan 2024).

ControlNet wiring: with the new custom node, I've combined ClipTextEncode (positive) -> ControlNetApply -> Use Everywhere. Or, if you use ControlNetApplyAdvanced, which has inputs and outputs for both positive and negative conditioning, feed both the +ve and -ve ClipTextEncode nodes into the matching inputs of ControlNetApplyAdvanced. You can add or remove ControlNets, or change their strength.

Tiled upscaling: I feed my image back into another KSampler with a ControlNet (using control_v11f1e_sd15_tile.pth) at a moderate strength. The advanced sampler adds new noise and essentially tries to fully denoise the latent, which can result in many 768-pixel tiles with only a passing similarity to the base image; try swapping the advanced sampler for a standard KSampler and set the denoise down to something more conservative. You can also use Tile Resample/Kohya-Blur to regenerate a 1.5/SDXL image without IP-Adapter; more than about 0.6 denoise, or too many steps, and it becomes a more fully SD1.5 version, losing most of the XL elements, so just end it early, reduce the weight, or increase the blurring to control the amount of detail it can add. Otherwise it changes the image too much and often adds mutations. Below 0.45 denoise it fails to actually refine; 0.25 is the minimum, but people do see better results going higher. A simple upscale chain: Load Upscale Model node > KSampler > VAE Decode > output. The Tiled KSampler forces the generation to produce a seamless tile, but it changes the aesthetics considerably: in general the look is very simple and far from what the chosen model would give with another KSampler.

IP-Adapter: wire an IP adapter into the face detailer's KSampler, and use the image of the face you generated in the IP adapter's Load Image box. The Model output from your final Apply IPAdapter should connect onward to the KSampler. PS: I've tried to pass the IPAdapter into the model for the LoRA and then plug it into the KSampler. Note that my ComfyUI install did not have pytorch_model.bin in the clip_vision folder, which is referenced as 'IP-Adapter_sd15_pytorch_model.bin' by IPAdapter_Canny. Most everything in the model path should be chained (FreeU, LoRAs, IPAdapter, RescaleCFG etc.), and the order doesn't seem to matter that much.

SD Turbo img2img: I'm trying to do img2img with the new SD Turbo, but as it uses SamplerCustom, I didn't find how to control the strength of the source image in the final result. There's an SD1.5 and an SDXL version; settings that work well are the ddim_uniform scheduler, 3-4 steps, and cfg 1, using 0.5 denoise with SD1.5.

Settings that give OK results in the workflow mentioned above: first KSampler, steps 14, cfg 8.0, dpmpp_sde_gpu, karras, denoise 1.0; hires-fix KSampler, steps 13, cfg 8.0, dpmpp_2m_sde_gpu, karras, denoise well below 1.

Noise tricks: you can also use the "split sigmas" node. The custom versions of the KSampler and KSampler Advanced (and the pipe samplers) use modified noise functions; if you look in noise.py you'll see it. I'm not the right person to answer, but the Hijack and UnHijack versions replace the noise function and un-replace it again, so you can wrap these around any KSampler(s). The Hijack and Unhijack nodes are found inside a "noise" menu, while "KSampler with Variations" and "KSampler Advanced with Variations" are found in the "sampling" menu. For initial testing, I put a Hijack node at the front of the SDXL 1.0 KSampler chain (Base + Refiner) and an Unhijack at the end, before the VAE Decode.
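The hijack/unhijack idea, reduced to a plain-Python sketch of the general pattern; this is illustrative only, not ComfyUI_Noise's actual code:

    import types

    def hijack(module, name, replacement):
        """Swap a function on a module; return a callable that restores the original."""
        original = getattr(module, name)
        setattr(module, name, replacement)
        def unhijack():
            setattr(module, name, original)
        return unhijack

    # Stand-in "noise module" to demonstrate the wrap/unwrap cycle.
    noise_mod = types.SimpleNamespace(make_noise=lambda seed: f"default noise {seed}")
    restore = hijack(noise_mod, "make_noise", lambda seed: f"variation noise {seed}")
    print(noise_mod.make_noise(1))  # variation noise 1
    restore()
    print(noise_mod.make_noise(1))  # default noise 1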
From the documentation (Jun 2, 2024; class name: KSamplerAdvanced, category: sampling, output node: False): the KSampler Advanced node is the more advanced version of the KSampler node. While the KSampler node always adds noise to the latent and then completely denoises it, the KSampler Advanced node provides extra settings to control this behavior: it can be told not to add noise into the latent with the add_noise setting, and it can be made to return partially denoised images via the return_with_leftover_noise setting. Unlike the KSampler node, it does not have a denoise setting; this process is instead controlled by the start_at_step and end_at_step settings. In other words, KSampler Advanced controls the application of denoise through the steps at which denoise is applied: in an advanced KSampler, the denoising value is set by the starting step of the first sampler. But of course, the point of working in ComfyUI is the ability to modify the workflow, so if you want more advanced stuff you can easily add it.

My generations often have horizontal magenta/green moiré striping in places. I haven't seen this particular issue, but you can try a lower CFG like 5 and see if that removes the moiré. I also added "watermark, text, writing, letters, signature" to the negative, to no avail; that checkpoint is cancer.

Do the denoise widget and the start/end-step widgets accomplish the same thing, or are they unrelated? See the input list below for what the advanced node actually exposes.
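For reference, the node's inputs written from memory as a Python dict; check nodes.py in your ComfyUI install for the authoritative INPUT_TYPES definition:

    KSAMPLER_ADVANCED_INPUTS = {
        "add_noise": ["enable", "disable"],
        "noise_seed": "INT",
        "steps": "INT",
        "cfg": "FLOAT",
        "sampler_name": "one of the built-in samplers",
        "scheduler": "one of the built-in schedulers",
        "positive": "CONDITIONING",
        "negative": "CONDITIONING",
        "latent_image": "LATENT",
        "start_at_step": "INT",
        "end_at_step": "INT",
        "return_with_leftover_noise": ["enable", "disable"],
    }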
I have a ComfyUI inpaint workflow set up based on SDXL, but it seems to go for maximum deviation from the source image. The KSamplerAdvanced node is designed to enhance the sampling process by providing advanced configurations and techniques, but it took experimenting to tame it.

In ComfyUI, I tried KSampler -> VAE Decode -> Upscale Image (using model) -> Upscale Image (1280x1280) -> VAE Encode -> KSampler Advanced -> VAE Decode. You can see the KSampler & Eff. Loader SDXL in the screenshot. I also added a second part where I just use random noise in a Latent Blend. Adding in the Iterative Mixing KSampler, from the early work on DemoFusion, produces far more spatially consistent results, as shown in the second image.

Hi all! Fair warning: I am very new to AI image generation and have only played with ComfyUI for a few days, but I have a few weeks of experience with Automatic1111. That, plus how complicated the advanced KSampler is, made latent work too frustrating at first. Ah, I understand now; hope this helps.

Finally, on tuning steps against CFG (from a "KSamplers explained" post): steps matter, and you need more than 20. Feel free to increase the CFG past what you normally would for SD. Once you've found your top CFG for the typical 35 steps, start dropping the steps by 5, and you'll probably need to lower the CFG a little each time as you go; in most cases you can find a happy medium at 20-25 steps and a CFG around 6.
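One way to turn that advice into concrete candidate settings to try; the linear ramp between the two endpoints is my assumption, not something from the original post:

    def cfg_schedule(top_cfg: float, top_steps: int = 35,
                     floor_steps: int = 20, floor_cfg: float = 6.0, stride: int = 5):
        """Candidate (steps, cfg) pairs, easing cfg down as the step count drops."""
        pairs = []
        steps = top_steps
        while steps >= floor_steps:
            t = (top_steps - steps) / (top_steps - floor_steps)  # 0 at top, 1 at floor
            pairs.append((steps, round(top_cfg + t * (floor_cfg - top_cfg), 2)))
            steps -= stride
        return pairs

    print(cfg_schedule(8.0))  # [(35, 8.0), (30, 7.33), (25, 6.67), (20, 6.0)]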