ComfyUI img2img upscale workflow (Reddit roundup)

If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler. Put something like "highly detailed" in the prompt box.

Trying to move away from Auto1111. Trying out img2img on ComfyUI and I like it much better than A1111.

My workflow that works for me the most: I'm using a workflow that takes the characters I created in Daz3D and Cinema 4D and puts them into SD for comic transformation.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and so on.

I have a similar workflow, but with latent hires fix I find 0.5 denoise too high: it starts to cause a high percentage of images to fall apart, and at 0.6 most generations are unusable, though I believe it depends a lot on the model.

The current tile upscale looks like a small factory in a Factorio game, and that's just for 4 tiles; you can only imagine how it's going to look with more tiles. Possible, but it makes no sense.

I've been going to SD 1.5 models like epicRealism or Juggernaut, but I know once more models come out with the SDXL base, we'll see incredible results. When rendering human creations, I still find significantly better results with 1.5 models (it seems pointless to go larger). The main difference is I've got an img2img/upscale style saved that I add that's meant to be general purpose. In any case, the workflow I have here runs for people using <6GB cards.

I'm something of a novice, but I think the effects you're getting are more related to your upscaler model, your noise, your prompt, and your CFG.

Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so many workflows published to Civitai and other sites, and I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows; I'm hoping someone can help by pointing me toward a resource for some of the better-developed Comfy workflows.

Can you maybe help me with my workflow? AnimateDiff to get the starting file (this is 512x512), then EbSynth Utility stage 1; then I use the images from AnimateDiff as my keyframes.
Based on Sytan's SDXL 1.0 ComfyUI workflow with a few changes; here's the sample JSON file for the workflow I was using to generate these images: sdxl_4k_workflow.json · cmcjas/SDXL_ComfyUI_workflows at main (huggingface.co). Here is an alternative variant using the full SDXL and the established dual setup.

I have to second the comments here that this workflow is great.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand.

For the A1111 SD-upscale-with-tile recipe: in the WebUI settings, open the ControlNet options and set 'Multi ControlNet: Max models amount' to 2 or more. For ControlNet Unit 1, set Model to "tile" and the parameters to Weight 1.0, Starting 0.0, Ending 1.0. Go to the "img2img" tab at the top and select the "SD upscale" button. UPDATE: in the most recent version (9/22), this button is gone; instead, you need to go down to "Scripts" at the bottom and select the "SD Upscale" script. Change the sampler to the Euler or DPM series (the DDIM series is not recommended for this setup). Configure as in Step 1.

My nonscientific answer is that A1111 can do it in around 60 seconds at 30 steps using a 1.5-based model, and 30 seconds using 30 steps with SD 2.1. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set. ComfyUI can do a batch of 4 and stay within the 12 GB. Both are of similar speed.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

One can get a hell of a lot of mileage from combining DreamBooth + img2img models. img2img and depth2img are still some of the most underutilized techniques out there.

This ComfyUI upscale workflow utilizes SUPIR (Scaling-UP Image Restoration), a state-of-the-art open-source model designed for advanced image and video enhancement.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted. Apparently you are making an image with the base and doing img2img with the refiner; that isn't the recommended workflow. You should use the base and leave some noise for a few steps of the refiner, then think about img2img, or not use the refiner at all.

Increasing the mask blur lost details, but increasing the tile padding to 64 helped.

Kosinkadink, developer of ComfyUI-AnimateDiff-Evolved, has updated the custom node with new functionality in the AnimateDiff Loader Advanced node that can reach a higher number of frames. AnimateDiff Evolved in ComfyUI can now break the limit of 16 frames, and it can also save the animations in formats other than GIF.

Mine's a dual-model workflow, but it's highly adaptive to whatever project I'm working on. It includes img2img and prediffusion, plus multi-step upscaling: face fixes at multiple steps, tiled upscaling, and all that.

I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem. As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, and felt a bit overwhelmed.

These are examples demonstrating how to do img2img. Img2Img works by loading an image (like the example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The denoise controls the amount of noise added to the image: the lower the denoise, the less the image will change.
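That denoise mechanic is not ComfyUI-specific, so here is a minimal sketch of the same idea using the diffusers library rather than a node graph. The checkpoint name and parameter values are illustrative assumptions, not taken from any of the comments above.

```python
# Sketch of img2img: encode an image to latents, add partial noise,
# then sample it back with the prompt steering the result.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("input.png").convert("RGB").resize((512, 768))

# `strength` is the "denoise" knob: near 0 keeps the input almost
# untouched, 1.0 ignores it entirely. 0.35 keeps the composition
# while redrawing fine detail.
result = pipe(
    prompt="highly detailed photograph, sharp focus",
    image=init,
    strength=0.35,
    guidance_scale=7.0,
).images[0]
result.save("img2img_out.png")
```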
Still experimenting and learning the basics of Comfy, and I want to begin experimenting with img2img.

I've been running an almost identical workflow lately, except I've been doing an extra 768x1152 img2img step. Not claiming it as best or anything; just that it serves me really well.

My workflow is:
- get an interesting base image
- Win+Shift+S screengrab a portion of interest in a roughly square crop
- upscale with Gigapixel
- resize down to what you want
- sharpen (radius 1, small sigma)
- save, load back into img2img, and generate with the same or a slightly tweaked prompt relative to what the crop is looking at or what I want to add (caution: this can cause chaos if your prompt is off by too much from what you use)
- get the result and use Photoshop to blend

So I was using this super simple way of enhancing 3D models in Auto1111, but I cannot get it right in ComfyUI. Is there any easy way to reproduce this in ComfyUI? I tried Scott's img2img workflow, but it just doesn't work; the people come out deformed.

This looks great: no IPAdapter, no ControlNet, no addons of any sort. With a higher config it seems to have decent results.

Thank you for your help! I switched to the Ultimate SD Upscale (with Upscale), but the results appear less real to me, and it seems like it is making my machine work harder.

Is there a way to make ComfyUI loop back on itself so that it repeats and can be automated? Essentially I want to make a workflow that takes the output and feeds it back in on itself, similar to what Deforum does, for x amount of images.

Can you please explain your process for the upscale? As my test bed, I'll download the thumbnail from, say, my Facebook profile picture, which is fairly small (206x206), then upscale it in Photopea to 512x512 just to give me a base image that matches the SD 1.5 base resolution.

Today we will use ComfyUI to upscale Stable Diffusion images to any resolution we want, and even add details along the way using an iterative workflow!

The idea is simple; it's exactly the same principle as txt2imghd, but done manually: upscale the image with other software (ESRGAN, Gigapixel AI, etc.), slice it into tiles of a size that Stable Diffusion can handle, pass each slice through img2img, and blend all the tiles back together.
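A rough sketch of that manual tiling step, assuming the `pipe` from the earlier diffusers example stands in for the img2img call. Real tools like Ultimate SD Upscale also feather the overlapping seams; this minimal version just pastes the tiles back.

```python
# Minimal txt2imghd-style tiling: crop overlapping tiles, run each
# through an img2img `process` callable, paste the results back.
# For simplicity this assumes image dimensions are multiples of 8
# (what SD needs) and does no seam blending.
from PIL import Image

TILE, OVERLAP = 512, 64

def tiled_img2img(image: Image.Image, process) -> Image.Image:
    out = image.copy()
    step = TILE - OVERLAP
    for top in range(0, image.height, step):
        for left in range(0, image.width, step):
            box = (left, top,
                   min(left + TILE, image.width),
                   min(top + TILE, image.height))
            out.paste(process(image.crop(box)), (left, top))
    return out

# usage, reusing `pipe` from the earlier sketch:
# big = tiled_img2img(upscaled_image,
#                     lambda t: pipe(prompt="highly detailed", image=t,
#                                    strength=0.3, guidance_scale=7.0).images[0])
```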
My current workflow sometimes changes some details a bit; it makes the image blurry or makes the image too sharp.

You can load these images in ComfyUI to get the full workflow.

Exactly this: don't try to learn ComfyUI by building a workflow from scratch. Download one of the dozens of finished workflows from Sytan, Searge, or the official ComfyUI examples. Civitai has a few workflows as well.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows (the items mentioned in this thread, with lots of pieces to combine with other workflows):
- SDXL Default ComfyUI workflow
- Img2Img ComfyUI workflow
- Upscaling ComfyUI workflow
- ControlNet Depth ComfyUI workflow
- ControlNet workflow
- Merging 2 images together
- Create animations with AnimateDiff
- Motion LoRAs w/ Latent Upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action
- Txt/Img2Vid + Upscale/Interpolation: a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc.
- Infinite Zoom

I'm looking for a good img2img full-body workflow that can take the pose and put an existing face over the AI-generated one. I'm leaning towards using the new face models in IPAdapter Plus.

Forget face swap; use IPAdapter for the face, pass it through a face detailer, and finally upscale. GFPGAN is another option for face restoration.

ComfyUI workflow for img2img tiled (for the purposes of TemporalKit): TiledVAE is very slow in Automatic, but I do like Temporal Kit, so I've switched to ComfyUI for the image-to-image step. ComfyUI has been far faster so far using my own tiled image-to-image workflow (even at 8000x8000), but the individual frames in my image are bleeding into each other.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

Ah, you mean the GO BIG method I added to Easy Diffusion from ProgRockDiffusion.

For a 2x upscale, Automatic1111 is about 4 times quicker than ComfyUI on my 3090; I'm not sure why.

Using ControlNet with tile_resample allows me to push the hires upscale to 2x with 0.5 denoise, but it slows the workflow down a bit.

Generate at 512x768, then a 1.25x upscale to 640x960 with 0.35 denoise, ControlNet tile, the 4x-UltraSharp upscaler, and Ultimate SD Upscale.

I personally use both of the upscale methods, but it depends on the situation.

For general upscaling of photos, go Remacri 4x upscale.
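The screengrab workflow earlier and this tip share the same cleanup step: after a 4x model upscale (Remacri, Gigapixel, UltraSharp, ...), resize the output down to working size and sharpen lightly before the img2img pass. A plain Pillow sketch; the filter parameters are illustrative assumptions, not from the original comments.

```python
# Resize a 4x-upscaled image back down and apply a light unsharp mask
# so the img2img pass has crisp edges to latch onto.
from PIL import Image, ImageFilter

img = Image.open("remacri_4x_output.png")
img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
img = img.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
img.save("ready_for_img2img.png")
```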
A workflow management system like a node system is only useful if the work process requires it. Although it has been a while since I last used ComfyUI, I haven't yet found much use for a node system in Stable Diffusion. Auto1111 has linear workflow management, although it is not as well organized.

If the term "workflow" has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes". If the term has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass whatever image I like into the node. I'm trying to upscale at this stage, but I can't get it to work.

Looking for a ComfyUI workflow that transforms IRL images: I recently started to learn ComfyUI and found this workflow from Olivio, and I'm looking for something that does a similar thing but can instead start with an SD or real image as the input.

SDXL most definitely doesn't work with the old ControlNet. If people want to use the new method, that's supported once you download those checkpoints.

Forgot to mention: you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder: diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co).

Please help: I want to use img2img with the reference-only processor in Comfy; if anyone knows of or has a workflow, please share it. That's my favorite way to make my images more beautiful: using reference_only with my generated image. In A1111 img2img, I put in my image with its generation data, lower the denoise, and start experimenting with other images in reference.

Honestly, you can probably just swap out the model and put in the turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and to be honest it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick and choose from.

I resize with 4x-UltraSharp set to 2x, and in ComfyUI this workflow uses a nearest/exact latent upscale.

I don't see any difference between the 'Upscaler' by Magnific or Krita and performing a simple img2img upscale at a low CFG scale.
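For concreteness, here is what that "simple img2img upscale" amounts to: a pixel-space upscale followed by a low-denoise img2img pass. This reuses the `pipe` from the earlier diffusers sketch; the values are illustrative assumptions.

```python
# Pixel-space 2x upscale, then a low-denoise refinement pass that keeps
# the composition and only sharpens or re-invents fine detail.
from PIL import Image

img = Image.open("base_render.png").convert("RGB")
big = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

refined = pipe(
    prompt="highly detailed, sharp focus",
    image=big,
    strength=0.25,        # low denoise: preserve the original image
    guidance_scale=7.0,
).images[0]
refined.save("upscaled_2x_refined.png")
```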
So a latent upscale is inherently lossy. Latent upscales require the second sampler to be set at over 0.4 denoise, due to the fact that upscaling the latent basically grows a bunch of dark space between each pixel, unlike an image upscale, which adds more pixels. It's just not intended as an upscale from the resolution used in the base-model stage.

Two options here: you either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. Otherwise the process is the same. Latent quality is better, but the final image deviates.

1 - Get your 512px or smaller empty latent and plug it into a KSampler set to some ridiculously low value, like 10 steps at 1.0 denoise. 2 - Then plug the output from this into a 'Latent Upscale By' node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2x+ here, because latent upscale introduces artifacts).

The trick is to skip a few steps on the initial image, and it acts like choosing your denoiser settings: the more steps skipped, the more of the original image passes through.

Adding some updates to this, since people are still likely coming here from a Google search and a lot has changed over the past several months.

In researching inpainting using SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: Base Model with Latent Noise Mask; Base Model using InPaint VAE Encode; and using the UNet "diffusion_pytorch" inpaint-specific model from Hugging Face.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about one-tenth of what Stable Diffusion did; that's a cost of about $30,000 for a full base-model train. Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.

In this stage I add a detail-adder LoRA.

Here is the image I wanted to upscale: a 768x512px image. This is the fastest way to test images against an image I have a higher-res sample of.

Latent upscale method: the latent upscaling consists of two simple steps, upscaling the samples in latent space and performing the second sampler pass.
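A few lines of tensor code show why the second pass needs a high denoise. Interpolating the 4-channel latent just stretches the existing values over a larger grid instead of adding real detail; the shapes below are a stand-in for a KSampler output, not any particular workflow.

```python
# Latent "upscale by" is plain interpolation on the (batch, 4, h/8, w/8)
# latent tensor; the stretched-out values carry no new detail, which is
# why the follow-up sampler pass needs denoise above roughly 0.4.
import torch
import torch.nn.functional as F

latents = torch.randn(1, 4, 64, 64)   # stand-in latent for a 512x512 image
upscaled = F.interpolate(latents, scale_factor=1.5, mode="nearest")
print(upscaled.shape)                 # torch.Size([1, 4, 96, 96]) -> 768x768 image
# ...then run the second sampler pass on `upscaled` with high denoise.
```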
In SDXL, you can improve details by sharpening the source image with an image editor prior to upscaling.

What's new in v4.1? This update contains bug fixes that address issues found after v4.0 was released. Support for FreeU has been added and is included in v4.1 of the workflow; to use FreeU, load the new workflow from the .json file in the workflow folder. Note: the images in the example folder are still embedding the v4.0 workflow.

My tutorials go from creating a very basic SDXL workflow from the ground up and slowly improving it with each tutorial, until we end with a multipurpose advanced SDXL workflow that you will understand completely and be able to adapt to many purposes.

img2img with low denoise: this is the simplest solution, but unfortunately it doesn't work, because significant subject and background detail is lost in the encode/decode process. Outline mask: I was very excited about this one and made a recent post about it; unfortunately, it doesn't work well, because apparently you can't just inpaint a mask.

Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools. Here is a workflow that I use currently with Ultimate SD Upscale.

This new upscale workflow also runs very efficiently: it can do a 1.5x upscale on 8GB-VRAM NVIDIA GPUs without any major VRAM issues, and can go as high as 2.5x on 10GB NVIDIA GPUs.

There are a lot of options in this regard, such as iterative upscale; in my experience, all of them are too intensive for bad GPUs, or they are too inconsistent. Enter this workflow to the rescue.

Basically, two nodes are doing the heavy lifting: 'FreeU_V2' for better contrast and detail, and 'PatchModelAddDownscale' so you can generate at a higher resolution. These values can be changed via the "Downsample" value, which has its own documentation in the workflow itself on values for sizes. Try bypassing both nodes and see how bad the image is by comparison.

Bro, it's great that you made this and that it works for you, but these quick replies of "just use the workflow" and "load up some diffuser models, try this quick fix" ain't helping anybody. I don't understand how these will help those people.

How do I go about it? I've got to apply a simple style transfer to a picture; what nodes are supposed to be in my workflow? I understand the general idea: load the picture with an image loader node, throw in a ControlNet node that analyses the picture's depth, maybe another node that does the same with the shape of the subjects in the picture. But how exactly do I structure it all?

Anyone have a decent tutorial or workflow for batch img2img in ComfyUI? I'm looking at doing more video-render-type work, but ComfyUI tutorials are all about SDXL. I haven't been able to find any batch-image videos yet (I could just be missing them).

For my first successful test image, I pulled out my personally drawn artwork again, and I'm seeing a great deal of improvement.

Just wanted to say that there are a few ways you can perform a 'hires fix' now with ComfyUI.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt), otherwise the picture gets baked instantly. You also can't go higher than 512 to 768 resolution (which is quite a bit lower than 1024 plus upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

I've been using TurboXL as a base for img2img since day of release. The main difference is I've been going to SD 1.5 after the initial Turbo pass. With the LCM sampler on the SD 1.5 side and latent upscale, I can produce some pretty high-quality and detailed photoreal results at 1024px with total combined steps of 4 to 6, with CFG at 2.
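A hedged sketch of that kind of low-step, low-CFG img2img pass, using the public LCM LoRA for SD 1.5 rather than the commenter's exact setup; model names and values are illustrative assumptions.

```python
# Few-step img2img with an LCM scheduler and LoRA: at strength 0.5 with
# 8 nominal steps, only about 4 denoising steps actually run, matching
# the "4 to 6 steps, CFG 2" setup described above.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, LCMScheduler
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

init = Image.open("turbo_base.png").convert("RGB")
out = pipe(
    prompt="photoreal portrait, highly detailed",
    image=init,
    strength=0.5,
    num_inference_steps=8,
    guidance_scale=2.0,
).images[0]
out.save("lcm_refined.png")
```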
Hi, I am also trying to solve Roop quality issues; I have a few fixes, though right now I see three issues with Roop: (1) the face upscaler takes 4x the time of the face swap on video frames; (2) if there is a lot of motion in the video, the face gets warped with upscale; (3) for processing large numbers of videos or photos, standalone Roop is better and scales to higher-quality images, but misses out on the img2img ControlNet.

In the end, it was 30 steps using Heun and Karras that got the best results, though.

If it's not a close-up portrait, I'll also inpaint the face after the 1024x1536 step, before the final upscale. Then you can cut out the face and redo it with IPAdapter.

If I understand correctly how Ultimate SD Upscale + controlnet_tile works, they make an upscale, divide the upscaled image into tiles, and then img2img through all the tiles. After that, they fix the seams and combine everything together. However, if we can add an IP-Adapter for every tile, we would be able to generate a more consistent description.

Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring). A workaround: 1. decode the latent, 2. upscale the image, then encode it and do a tiny refining step to sharpen up the edges.

The main issue with this method is denoising strength: low denoising strength can result in artifacts, and high strength results in unnecessary details or a drastic change in the image. Obviously there are a number of solutions, like upscaling incrementally and keeping the added noise low, but our primary focus is how to get the job done with as few complications as possible.

I generate an image that I like, then mute the first KSampler, unmute Ultimate SD Upscale, and upscale from that. It uses CN tile with Ultimate SD Upscale. Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

In img2img, upscale 2x with 0.2 denoise and the 4x-UltraSharp upscaler.

They depend on complex pipelines and/or Mixture of Experts (MoE) setups that enrich the prompt in many different ways. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above without human intervention. AP Workflow 5.0 is the first step in that direction.

ComfyUI SUPIR for image restoration: in this workflow, you will experience how SUPIR restores and upscales images to achieve photo-realistic results.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

I came across ComfyUI purely by chance, and despite the fact that there is something of a learning curve compared to a few others I have tried, it's well worth the effort, since even on a low-end machine image generation seems to be much quicker (at least when using the default workflow).

The ComfyUI workflow is here: if anyone sees any flaws in my workflow, please let me know.

Does anyone have a workflow for resizing an image to a particular size while keeping the aspect ratio the same? I'm doing a lot of img2img, but sometimes the images I'm using are too large; I would like them to go down to 768px on each side while not distorting their aspect ratios.
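Outside a node graph, this is a two-liner in Pillow; the file names are placeholders.

```python
# Fit the image inside a 768x768 box without distorting its aspect
# ratio: the longest side becomes 768, the other side scales to match.
from PIL import Image, ImageOps

img = Image.open("too_large.png")
img = ImageOps.contain(img, (768, 768))
img.save("resized_for_img2img.png")
```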