ComfyUI batch upscale images (Reddit discussion).

Don't listen to the haters; reading some comments, they criticize the changes to the image, when Magnific changes it too.

Apparently you are making an image with the base model and doing img2img with the refiner, which isn't the recommended workflow.

I was unable to find anything close to batch processing — is that possible in ComfyUI? I love the tool, but without batch processing it becomes useless for my personal workflow. Hopefully someone can help.

So you will upscale just one selected image. Anyone have a solution to stack 1-n images together without struggling with missing images? "Make Image List" and "Make Image Batch" from the Impact pack …

This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, then encode the image back.

Hello, my ComfyUI workflow takes 2 images as input and generates an output image combining those two.

This looks sexy, thanks. It will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler.

Batch-processing images by folder in ComfyUI: the only option to use that node with sequences is to activate auto-queue mode, but that's not a solution for me, as it only loads one image at a time and I need a batch containing all the images at once.

So if you have a lower-end GPU, it might be better for you to work in batches of 1.

If you pre-upscale it with a GAN before denoising with low strength, it should take even less time. Also, both have a denoise value that drastically changes the result.

Here it is upscaling pixel art to something realistic using IP-Adapters. Like the Leonardo AI upscaler.

Set the tiles to 1024x1024 (or your SDXL resolution) and set the tile padding to 128.

Two ways: the easy way, and the mess-around-for-hours-getting-it-perfect way.
Top reroute goes to Preview Image; bottom reroute goes to the upscaling part of the workflow.

This is not the case. From your resulting image, it looks like the model (and maybe the prompt) had a big effect. Ultimate SD Upscale works fine with SDXL, but you should probably tweak the settings a little bit.

This ability emerged during the training phase of the AI, and was not programmed by people.

As far as ComfyUI goes, this could be an awesome feature. Using the Load Image Batch node from the WAS Suite repository, I can sequentially load all the images from a folder, but for upscaling I also need the prompt with which each image was created.

This is what I do, but not in ComfyUI directly.

It crashed with: File "D:\stable-diffusion-webui\venv\lib\site-packages\PIL\JpegImagePlugin.py", in _save.

🚀 Release — working on AUTOMATIC1111/ComfyUI out of the box, improved coherence. It will come with version 6.

To move multiple nodes at once, select them and hold down SHIFT before moving.

I'm trying to increment a float value (say, a LoRA strength) by 0.…

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.

Sometimes I drop it to 0.5 — ~x2 needs no model; it can be a cheap latent upscale.

Likewise, I would pitch in! Could be a great way to check on these quick last-second refiner passes.

Those images have metadata, meaning you can drag and drop them into Comfy.

0.5 denoise to fix the distortion (although obviously it's going to change your image).

Like the batch feature in A1111 img2img or ControlNet. Nobody needs all that, LOL. All the ones I've found online seem to require a …

Welcome to the unofficial ComfyUI subreddit. And above all, BE NICE.

Nothing was wrong with the webui until I ran the img2img batch.

There's a new Hands Refiner function.
The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

Then I combine it with a combination of Depth, Canny and OpenPose ControlNets.

My current workflow sometimes changes some details a bit; it makes the image blurry, or makes the image too sharp.

Just found out it's possible to use "Batch process" or "Batch from Directory" in the Extras tab, so it's possible to upscale multiple images. Documentation for the SD Upscale plugin is NULL.

They usually tend to be better than waifu2x.

From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

The new "Only pause if batch" and "Pass through" modes are brilliant.

Hello all, I had a question around how some of you handle this: how do you run an upscaling flow on all the images from a directory? The title says it all — after launching a few batches of low-res images, I'd like to upscale all of them.

Add a "Load Image" node and right-click it: "Convert image to input".

….pth, or 4x_foolhardy_Remacri.

I'm aware that the option is in the Empty Latent Image node, but it's not in the Load Image node.

These comparisons are done using ComfyUI with default node settings and fixed seeds.

Heya, part 5 of my series of step-by-step tutorials is out; it covers improving your advanced KSampler setup and usage of prediffusion with an unco-operative prompt to get more out of your workflow.

Thank you, that did it. I have 2 images.

If you increase this above 1, you'll get more images from your batch, up to the max number in your original batch.

Images are now saved with metadata readable in A1111 WebUI and Vladmandic SD.Next.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.
Thank you.

Then you can cut out the face and redo it with IP-Adapter.

Overall: image upscale is less detailed, but more faithful to the image you upscale.

Even with 4 regions and a global condition, they just combine them all 2 at a time until it becomes a single positive condition to plug into the sampler.

I know of programs that can do it outside of ComfyUI, but I'm looking for something that can be part of a workflow.

Thanks to them, I can now move the Image Chooser node to a more prominent and always-on position in the AP Workflow.

Now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler.

The really important part is to be sure that each slice overlaps the other ones by a substantial margin.

Nodes! Because you can move these around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

I assume most everything is 512 and higher, based on SD1.5.

But also, you can connect a whole bunch of sampler setups one after the other.

New to ComfyUI, so not an expert. I liked the ability in MJ to choose an image from the batch and upscale just that image.

Does anyone have any idea? PS: I mean the upscale in txt2img. PS2: The one I used is R-ESRGAN 4x+Anime6B.

The main features are: works with SDXL and SDXL Turbo, as well as earlier versions like SD1.5.

Here is a suggested workflow using nodes that are typically available in advanced Stable Diffusion setups.

You set a folder, set it to increment_image, and then set the number of batches.

To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed upscale.

Upscaling is done with iterative latent scaling and a pass with 4x-UltraSharp. I share many results, and many ask how.
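The downscale-factor rule quoted above (factor = desired total upscale / fixed upscale) can be sketched as a tiny helper. This is just an illustration of the arithmetic; the function name and the 4x default are my own assumptions, not part of any node pack:

```python
def downscale_factor(desired_total: float, model_scale: float = 4.0) -> float:
    """After a fixed-scale upscale model (e.g. a 4x GAN), return the extra
    resize factor needed to land on the desired total upscale."""
    return desired_total / model_scale

# A 2x total upscale through a 4x model needs a 0.5x resize afterwards.
print(downscale_factor(2.0))  # 0.5
```

Plug the result into an "Upscale Image By" style node after the model-based upscale.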
The Image Blend node can be used to blend two images together.

Starting with a 512 x 512 image, if you do two 4x upscales with a …

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111.

I'm also aware you can change the batch count in the extra options of the main menu.

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create.

I want ONE part of an image — say, a hand or a …

In researching inpainting using SDXL 1.0 …

I want to create a character with Animate Anyone and the background with SVD.

I find that setting my width and height to 1/2 makes a 2x2 grid per frame, which with LCM can be quick and adds a good amount of detail.

Double-click the new "image" input that appeared on the left side of the node.

Best (simple) SDXL inpaint workflow.

Ultimate SD Upscale 2x and Ultimate Upscale 3x.

Upscale with a model (e.g. 4x-UltraSharp), then downscale.

Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.

AUTOMATIC1111 has an option to upscale images with different upscale/restoration models.

Extract the zip …

This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0.

Researchers discover that Stable Diffusion v1 uses internal representations of 3D geometry when generating an image.

Initial setup for upscaling in ComfyUI.

Take the output batch of images from SVD and run them through Ultimate SD Upscale nodes.

I'm something of a novice, but I think the effects you're getting are more related to your upscaler model, your noise, your prompt and your CFG.

A new Prompt Enricher function, able to improve your prompt with the help of GPT-4 or GPT-3.5-Turbo.
…0.4, but use a ControlNet relevant to your image so you don't lose too much of your original image; combine that with the iterative upscaler, and concat a secondary positive prompt telling the model to add or improve detail.

It's not solely for upscaling.

Also, try ESRGAN (there are also a lot of custom models on upscale.wiki) or USRNet if you want to do stuff that is drawn.

I use a CFG of about 1.

Can anyone suggest a good stand-alone batch upscaling solution for Windows/AMD? Currently I'm using Zyro and uploading each image separately, but I'm looking for a way to automate this.

A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, …

For example, if you start with a 512x512 empty latent, then apply a 4x model and apply "upscale by" 0.5, you end up at 1024x1024.

I was also getting weird generations, and then I just switched to using someone else's workflow and they came out perfectly, even when I changed all my workflow settings to match theirs for testing, so that could be a bug.

Img2img upscale — upscale a real photo? Trying to expand my knowledge, and one of the things I am curious about is upscaling a photo. Let's say I have a backup image, but it's not the best quality.

You should use the base model, leave some noise for a few steps of the refiner, and then think about img2img — or not use the refiner at all.

I think it works well for most cases. My postprocess includes a detailer sample stage and another big upscale.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image.

Then plug the output from this into a 'Latent Upscale By' node set to whatever you want your end image to be.
For example, I can load an image and select a model (4x-UltraSharp …).

I mean the image looks very grainy. On my Nvidia 3060 with 12 GB VRAM, ComfyUI will run the newer SDXL checkpoints quite well, whereas A1111 is marginal at best for SDXL (at least as of v1.6).

Custom models/LoRAs: tried a lot from CivitAI — epicrealism, cyberrealistic, absolutereality, realistic …

This works best with Stable Cascade images; it might still work with SDXL or SD1.5.

I implemented the experimental Free Lunch optimization node.

And bump the mask blur to 20 to help with seams.

Check out this workflow; it was built to upscale beyond 12k, but you can disable/pause when you reach your preferred size.

Select the "SD upscale" button at the top. Fill in your prompts.

0.35, 10 steps or less.

Previously, I upscaled using a landscape image, and the results were quite satisfactory.

TiledVAE is very slow in Automatic, but I do like Temporal Kit, so I've switched to ComfyUI for the image-to-image step.

Here is my current 1.5 workflow.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

You can also do a regular upscale using bicubic or lanczos.

↑ Node setup 1: generates an image and then upscales it with USDU (save the portrait to your PC, drag and drop it into your ComfyUI interface, replace the prompt with yours, and press "Queue Prompt"). ↑ Node setup 2: upscales any custom image.

So if you want 2x: factor = 2.0 / 4.0 = 0.5 with a 4x model.

An AI splat, where I do the head (6 keyframes), the hands (25 keys), the clothes (4 keys) and the environment (4 keys) separately, and then mask them all together.
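The factor arithmetic above generalizes to any chain of model upscales and resizes. A minimal sketch (the function name is mine, purely illustrative):

```python
def chain_resolution(edge: int, factors) -> int:
    """Edge length in pixels after applying a sequence of
    upscale/downscale factors, truncating at each step."""
    for f in factors:
        edge = int(edge * f)
    return edge

# 512px -> 4x model -> 0.5x resize gives 1024px, i.e. a 2x total upscale.
print(chain_resolution(512, [4, 0.5]))  # 1024
```

This makes it easy to sanity-check a multi-stage upscale plan before queueing it.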
I am building a workflow.

…your SD1.5 model of choice, in a reasonable amount of time, on a 2080 Super (8 GB).

To drag-select multiple nodes, hold down CTRL and drag.

You can try my method in Automatic1111. This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

Model: Swizz8-V2-FP16. VAE: baked. Upscaler: 4x_foolhardy_Remacri. Tile size: 512x448. Seed: 686465493884716. Steps: 50. CFG: 4. Denoise: 0.…

It's a bit annoying to do there.

What has worked best for me has been …

I'm not at home, so I can't share a workflow.

Sharpen (radius 1, sigma 0.6).

Emad denies that this was authorized, and announced an internal investigation.

Also added a second part where I just use random noise in a latent blend.

There's a new optional node developed by u/Old_System7203 to select the best image of a batch before executing the rest of the workflow.

Open the automatic1111 webui.

Great idea! I'd pitch in.

I've struggled with Hires.fix and other upscaling methods, like the Loopback Scaler script and SD Upscale.

I haven't really shared much and want to use others' ideas, as I can't see it working with randomness.

Does anyone have any suggestions — would it be better to do an iterative upscale, or …?

All of these timings use the same settings, with different random seeds and different batch sizes: batch size / time in seconds per iteration / max GPU memory in GB / total time in seconds.

Thanks for the tips on Comfy! I'm enjoying it a lot so far. You can take any picture generated with Comfy, drop it into Comfy, and it loads everything.

Use IP-Adapter for the face. Thought IP-Adapter could be a good way to control tiled upscaling.
Note: this has nothing to do with my nodes; you can check ComfyUI's default workflow and see it yourself.

This will run the workflow once, on a single seed, and generate three images, all with the same seed.

Text-to-image using a selection from the initial batch.

If the dimensions of the second image do not match those of the first, it is rescaled and center-cropped to maintain its aspect ratio.

I'm using AnimateDiff a lot, but when I want to make an animation from a single image, I need to make an image sequence of the same image — duplicating the image 100 times for a 100-frame AnimateDiff run.

Different waifu2x models will give different results.

He published on HF: SD XL 1.…

…without fundamentally altering the image's content, and then rescale to 1024 x 1024.

While ComfyUI is better than default A1111, TensorRT is supported on A1111, uses much less VRAM, and image generation is 2-3x faster.

You can try the image upscaling models on Tiyaro.

Detailing the upscaling process in ComfyUI.

Toggle whether the seed should be included in the file name or not.

If you don't want the distortion: decode the latent, use "upscale image by", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

Then upscale to 2048 x 2048.

0.1 even, when SD generates 'too much added detail'.

SargeZT has published the first batch of ControlNet and T2I models for XL.

It will swap images each run, going through the list of images found in the folder.

Once the image is set for enlargement, specific tweaks are made to refine the result: adjust the image size to a width of 768 and a height of 1024 pixels, optimizing the aspect ratio for a portrait view.

That's a cost of about $30,000 for a full base-model train.
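The "swap images each run" behaviour described above amounts to walking a sorted file list with a wrapping index. A stand-alone sketch of the idea (this is not the WAS node's actual code, just an illustration):

```python
import glob
import os

def load_image_path(folder: str, index: int, pattern: str = "*.png") -> str:
    """Return the index-th image path from a folder, sorted by name,
    wrapping around like an 'increment_image' loader does each run."""
    paths = sorted(glob.glob(os.path.join(folder, pattern)))
    if not paths:
        raise FileNotFoundError(f"no files matching {pattern} in {folder}")
    return paths[index % len(paths)]
```

With auto-queue enabled, incrementing the index once per run walks the whole folder.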
However, this allows us to stay faithful to the base image as much as possible.

I am switching from Automatic to Comfy and am currently trying to upscale.

Can anyone put me in the right direction or show me an example of how to do batch ControlNet poses inside ComfyUI? I've been at it all day and can't figure out what node is used for this.

There are many ways to upscale an image.

Cheers, appreciate any pointers! Somebody else on Reddit mentioned this application to drop and read.

I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data — prompts, steps, sampler, etc.

Pass it to a conditional with a) a standard ESRGAN upscale using 4x-UltraSharp v10, or b) your node (LDSR) at 25 steps, none/none settings.

The inset shows the initial image at its corresponding scale.

If you want to upscale to a specific size …

I am currently using the webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with the webui, so I would like to know.

2 — Cut the image into tiles.

He continues to train; others will be launched soon!

Feed the 1.5x upscale back to the source image and upscale again to 2x.

So, how do you make it?

My current workflow, which is pretty decent: render at a low base resolution (something close to 512px), use highres fix to upscale 2x, and then use SD Upscale in img2img to upscale 2x again. This works better for me, since it renders at the highest image size my card can handle (which isn't a lot) and helps minimize artifact generation.
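The "cut the image into tiles" step, with each slice overlapping its neighbours, is just coordinate math. A sketch with illustrative sizes (the 512x768 tile and 64px overlap are examples, not settings from any specific node):

```python
def tile_boxes(width, height, tile_w=512, tile_h=768, overlap=64):
    """Return (left, top, right, bottom) crop boxes covering an image,
    with neighbouring tiles sharing `overlap` pixels so seams can be blended."""
    boxes = []
    step_x = tile_w - overlap
    step_y = tile_h - overlap
    for top in range(0, max(height - overlap, 1), step_y):
        for left in range(0, max(width - overlap, 1), step_x):
            right = min(left + tile_w, width)
            bottom = min(top + tile_h, height)
            boxes.append((left, top, right, bottom))
    return boxes
```

Each box can then be cropped, denoised at low strength, and pasted back with a blended seam.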
ComfyUI has been far faster so far, using my own tiled image-to-image workflow (even at 8000x8000), but the individual frames in my image are bleeding into each other and coming out inconsistent — and I'm not sure why.

Hires.fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing detail).

I upscaled it to a resolution of 10240x6144 px.

I recently switched to ComfyUI from AUTOMATIC1111, and I'm having trouble finding a way to change the batch size within an img2img workflow. Hope that helps.

Increase the factor to four times, utilizing the capabilities of the 4x UltraSharp model.

My sample pipeline has three sample steps, with options to persist ControlNet and mask, regional prompting, and upscaling.

Repeat until you have an image you like that you want to upscale.

Pos prompt: nebula in the cosmos, astrophotography, giant sprawling nebula that never ends, colorful.

Hi, I am upscaling a long sequence (batch + batch count) of images, one by one, from 640x360 to 4K.

If you pull the latest from rgthree-comfy and restart, you should see it as "Image Comparer (rgthree)". Let me know what you think.

Doing that manually is a pain to the point of not doing it.

1 - LDSR upscaler or a more powerful upscaler: saw a different grade of detail upscaling in Automatic1111 vs. ComfyUI.

To disable/mute a node (or group of nodes), select them and press CTRL + M.

Hello everyone! Not sure how to approach this. It seems that Upscayl only uses an upscaling model.

(1) Upscale the generated image using 2x as the SD upscale factor.

…SD1.5, but I have some really old images I'd like to add detail to.

For a 2x upscale, Automatic1111 is about 4 times quicker than ComfyUI on my 3090; I'm not sure why.

Workflow: generating a 12-step Juggernaut, CFG 7, 7:4 AR, no LoRA, no nothing.
Curious about my best option/operation/workflow and upscale model.

Inpainting workflow for ComfyUI.

You upload an image -> unsample -> KSampler Advanced -> the same recreation of the original image.

Ultimate SD Upscale creates additional unnecessary persons.

A bit at a loss for how to do something which I thought would be super simple.

In Comfy all the more so; the image simply looks unnatural after the upscaling.

A video compare would be amazing too!

The Infinite Image Browser extension can be used stand-alone.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

I don't know why these example workflows are laid out so compressed together.

It appears to work poorly with external (e.g. natural or MJ) images.

Latent upscale looks much more detailed, but gets rid of the detail of the original image.

ComfyUI is better for actually grokking what SD is doing, which in turn helps you go beyond the basics.

If you want to upscale images using Ultimate SD Upscale, check this video on the topic.

The resolution is okay, but if possible I would like to get something better.

A while back, I shared a program I cooked up for automatically upscaling textures from games that dump them.

Depending on the noise and strength, it ends up treating each square as an individual image.

If someone can explain the meaning of the highlighted settings here, I would create a PR to update its README.

GFPGAN.

While the normal text encoders are not "bad", you can get better results using the special encoders.

I use it for SDXL and v1.5.
The img2img pipeline has an image preprocess group that can add noise and gradient, and cut out a subject for various types of inpainting.

In my experience, you need to change it to fixed on the generation before the one you want to keep; it does random one more time before going to fixed.

A new Face Swapper function.

I'm also looking for an upscaler suggestion.

Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

Generally, a workflow like this gives good results: generate the initial image at 512x768.

I did a simple comparison, 8x upscaling from 256x384 to 2048x3072.

If there are images with different prompts in the upscale folder, I don't want to do the repetitive work of copying the prompt from the JSON file (node …).

Heads up: Batch Prompt Schedule does not work with the Python API templates provided on the ComfyUI GitHub.

You can use the UpscaleImageBy node to scale up and down, by using a scale factor < 1.

Because the upscale amount is determined by the upscale model itself.

If anyone has been able to … the quick fix is to put your following KSampler above 0.5 denoise.

Although it is not yet perfect (his own words), you can use it and have fun.

Each of my slices was 512x768px, but it can be 512x512 or any size that SD can handle on your configuration.

Also, you can make a batch and set a node to select an index number from the batch (latent or image).

0.15 denoise strength adds plenty of minor details, smooth lines, etc.

If I really want ComfyUI …

Upscale to 2x and 4x in multiple steps, both with and without a sampler (all images are saved).

Sorry if this is 'too basic', but I'm probably overthinking this. The best part about it, though >.<

I'm new to Comfy, and I …

Upscale image using a model to a certain size.

This uses more steps, has less coherence, and also skips several important factors in between.
Install and configure custom nodes for advanced upscaling.

A portion of the control panel. What's new in 5.0?

We are Reddit's primary hub for all things modding, from troubleshooting for beginners to creation of mods by experts.

The really cool thing is how it saves the whole workflow into the picture.

Give it a shot! And how is the quality for upscaling with respect to face details, skin detail, textiles and background? I don't know if LCM is ideal for upscaling.

Did "Update All" in ComfyUI.

You can just run the upscale through img2img with the Ultimate Upscale extension; it really doesn't do anything special. I can do that to upscale a given image, yeah.

Batch process from ControlNet images.

The image size will upscale from 512 x 512 to 2048 x 2048. Just load your image and prompt, and go.

I'm doing this: I use ChatGPT Plus to generate the scripts that change the input image via the ComfyUI API.

This is a good starting point.

This is amazing! Please, please, please give us a workflow? Thank you very much. If you want to always be updated on my workflows, follow me on other social networks; I will be happy to help you, and I will always update you with new workflows and models 🙌😊🙌

It's a little rambling; I like to go in depth with things, and I like to explain why.

Batch overlay images on top of one another.

Batching large numbers of images for upscale / high-res fix. Please keep posted images SFW.
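Scripting batch runs against the ComfyUI API, as described above, boils down to cloning a workflow exported in API format, swapping the Load Image node's filename, and POSTing each clone to the server's /prompt endpoint. A hedged sketch — the node id, folder layout, and server address are assumptions, and the images must already sit in ComfyUI's input directory:

```python
import copy
import glob
import json
import os
from urllib import request

def build_prompts(workflow: dict, load_node_id: str, folder: str):
    """Clone an API-format workflow once per image in `folder`,
    pointing the Load Image node at each file in turn."""
    prompts = []
    for path in sorted(glob.glob(os.path.join(folder, "*.png"))):
        wf = copy.deepcopy(workflow)
        wf[load_node_id]["inputs"]["image"] = os.path.basename(path)
        prompts.append(wf)
    return prompts

def queue_prompt(prompt: dict, server: str = "http://127.0.0.1:8188"):
    """Queue one job on a running ComfyUI instance via POST /prompt."""
    body = json.dumps({"prompt": prompt}).encode()
    req = request.Request(server + "/prompt", data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)
```

Export the workflow with "Save (API Format)", load it as JSON, then loop `queue_prompt` over `build_prompts(...)`.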
Ran the .bat, and the upload image option is back for me.

If one could point "Load Image" at a folder instead of at an image, and cycle through the images as a sequence …

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

I was running some tests last night with SD1.5.

If you're using ComfyUI and have a lot of RAM: once you hit your max VRAM, ComfyUI will offload some of that workload to RAM, which is much slower than VRAM.

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's …

Image / latent batch number selector node: I am searching for a node that does the following — I want to generate batches of images (like 4 or 8) and then select only specific latents/images of the batch (one or more) to be used in the rest of the workflow for further processing, like upscaling/FaceDetailer.

Thank you, sd-webui-controlnet team!

I want a slider for how many images I want in a batch.

You can use ControlNet Tile + LCM to be efficient.

Today we will use ComfyUI to upscale Stable Diffusion images to any resolution we want.

There are a lot of options in regards to this, such as iterative upscale.

I am trying … This is the image I created using ComfyUI, utilizing DreamShaper XL 1.0.

"Now I understand what you meant: since you load the batch image you prefer in a new ComfyUI session, the prompt associated with that image is loaded and processed by the Efficient Loader node, so the Inpainter function in 7.0 should have the correct Positive and Negative CLIP values."
1 - Get your 512x or smaller empty latent and plug it into a KSampler set to some ridiculously low value, like 10 steps.

He's got a channel specifically for ComfyUI, and Comfy himself posts there daily.

Thank you!

I made a tutorial on YouTube.

For general upscaling of photos, go: Remacri 4x upscale.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

I'm in the same situation, except I'm running on a laptop GPU.

I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.

The workflow currently removes the background from the Animate Anyone video with the rembg node, and then I want to layer it frame by frame onto the SVD frames.

For A1111 users: I am the author, but you will unfortunately have to wait.

Batch size in img2img (ComfyUI)? Hey there, I recently switched to ComfyUI and I'm having trouble finding a way of changing the batch size within an img2img workflow.

Create two masks via "Pad Image for Outpainting": one without feather (use it for fill, VAE encode, etc.) and one with feather (use only for merging the generated image with the original via alpha blend at the end). First grow the outpaint mask by N/2, then feather by N.

So instead of one girl in an image, you get 10 tiny girls stitched into one giant image.

Enhance visual quality and details of your images for printing, web design, and art.

Not sure if Comfy has his own Discord or anything, but that would also be a good resource.

Same as Swin4R, which adds a lot of detail to the image.

Customizing and preparing the image for upscaling.
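The "grow the outpaint mask by N/2, then feather by N" advice starts with a plain binary dilation of the mask. A stdlib-only sketch of just the grow step (the feathering blur is left out; real workflows would use a mask-grow node or an image library):

```python
def grow_mask(mask, n):
    """Binary-dilate a 2D 0/1 mask by n pixels (Chebyshev distance),
    like growing an outpaint mask before feathering the seam."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for yy in range(max(0, y - n), min(h, y + n + 1)):
                    for xx in range(max(0, x - n), min(w, x + n + 1)):
                        out[yy][xx] = 1
    return out
```

Growing first guarantees the feather's gradient lands on generated pixels rather than eating into the original image.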
We can share this when we are done, if that would be helpful.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

Greetings, community! As a newcomer to ComfyUI (though a seasoned A1111 user), I've been captivated by the potential of Comfy and have witnessed a significant …

Does anyone know if there is a way to load a batch of images from my drive into Comfy for an image-to-image upscale? I have scoured the net but haven't found …

The Ultimate AI Upscaler (ComfyUI workflow): for a dozen days, I've been working on a simple but efficient workflow for upscaling.

That's because latent upscale turns the base image into noise (blur).

I'm trying to use Ultimate SD Upscale for upscaling images.

Here's an example: in Krea, you can see the useful ROTATE/DIMENSION tool on the dog image I pasted.

(…0.5 if you want to divide by 2) after upscaling by a model.

The workflow starts like this: I have a "switch" between a batch-directory mode and a single-image mode, going to a face detection and improvement step (first use of the prompt), and then to an upscaling step to add detail and increase image size (second use of the prompt).

Go to the "img2img" tab at the top.

This website uses the original Linux waifu2x, btw.

Useful to create and control image batches.

The workflow also has TXT2IMG, IMG2IMG, up to 3x IP-Adapter, 2x Revision, predefined (and editable) styles, optional upscaling, ControlNet Canny, ControlNet Depth, LoRA, a selection of recommended SDXL resolutions, adjusting input images to the closest SDXL resolution, etc.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image.
…0 should have the correct Positive and Negative CLIP values.

Combined Searge and some of the other custom nodes.

Upscale in smaller jumps; take two steps to reach double the resolution.

Outcome: pluses: LDSR PULVERISES the "highres fix", and I mean that.

How to superimpose just ONE part of an image into another and let the KSampler continue its process from there (mainly upscaling with a bit of noise)?

I tried 'primitive', but it won't let me increment by a predefined value.

TheXenoth: This allows you to work on a smaller part of the…

Simple ComfyUI img2img upscale workflow.

…mode] The above exception was the direct cause of the following exception: File "D:\stable-diffusion-webui…

Third, you have batch size set to 4, which will generate 4 images at a time but will use more VRAM.

Encoding it and doing a tiny refining step to sharpen up the edges.

I am curious both which nodes are best for this, and which models.

But I don't know how it works, so I have no idea how to recreate the workflow.

Hook a "Preview Image" node to the first output, then add a reroute node that you can easily hook/unhook for the upscale.

UPDATE: In the most recent version (9/22), this button is gone.

I have a custom image resizer that ensures the input image matches the output dimensions.

Single image works by just selecting the index of the image.

Perhaps I can make a load-images node like the one I have now, where you can load all images in a directory that is compatible with that node.

I want to create a character with Animate Anyone and a background with 1.5…
When I do the same in Automatic1111, I get completely different people and different compositions for every image.

Queue the prompt again; this will now run the upscaler and second pass.

…1.5 and 2.…? Nothing stops you from using both. So in this workflow each of them will run on your input image.

Ultimate SD Upscaler uses a diffusion process that depends on the SD model and the prompt, plus an upscaling model (RealESRGAN); it can also be combined with ControlNet.

The key observation here is that by using the EfficientNet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C if you were to create it with stage C; so if you only want upscaling, you…

Using the first case, if you were trying to generate three images you would set the batch_size to 3 in the `Empty Latent Image` node.

Image Blend.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model, such as 4x_NMKD-Siax_200k.

What happens: generate…

Amblyopius: I believe I've seen a few of those upscale pipelines in Olivio's Discord.

Do I have to upload frame by frame to upscale them? Btw, I don't know how to code.

Put something like "highly detailed" in the prompt box.

Went to Updates as suggested below and ran the ComfyUI and Python Dependencies batch files, but that didn't work for me.

Like, yeah, you can drag a workflow into the window and sure it's fast, but even though I'm sure it's "flexible", it feels like pulling teeth to work with.

I use them in my workflow regularly.

I was running the SDXL 1.0…

ComfyUI is amazing.

SDXL most definitely doesn't work with the old ControlNet.

I uploaded the workflow in GH.

Using the control panel, it'll make as many as you select, but it…

This means that your prompt (a.…
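For the batch_size tip above: the Empty Latent Image node allocates a zero-filled tensor whose first dimension is the batch size, so generating three images means one latent of batch 3, not three separate queues. A sketch of just the shape arithmetic (SD latents have 4 channels and are 8x smaller than pixel space); the helper name is illustrative, not ComfyUI's API.

```python
def empty_latent_shape(width, height, batch_size=1):
    """Shape of the zero latent an Empty Latent Image node produces:
    [batch, 4 channels, height/8, width/8]."""
    return [batch_size, 4, height // 8, width // 8]
```

So a 512x512 request with batch_size 3 yields a [3, 4, 64, 64] latent, and every downstream sampler step processes all three images together.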
Listed below are in "find name / display name" format (God knows what ComfyUI node developers are smoking when they decide it's an awesome idea to make these two things different).

Add a "Load Image" node, right-click it: "Convert image to input".

Turns out ComfyUI can generate 7680x1440 images on 10 GB VRAM.

The title explains it: I am repeating the same action over and over on a number of input images, and instead of having to manually load each image and then press "Queue Prompt", I would like to be able to select a folder and have Comfy process all input images in that folder.

ComfyUI handles the limitations of mid-level GPUs better than most alternatives.

You should insert an ImageScale node.

All of the batched items will process until they are all done.

SDXL has 2 text encoders on its base, and a specialty text encoder on its refiner.

Would prefer to do this in Comfy because of speed and workflow, but I currently have to…

Personally, when I'm upscaling an image it's usually because I like it the way it already looks, and upscaling at 0.…

30 seconds.

I've so far achieved this with the Ultimate SD image upscale and using the 4x-Ultramix_restore upscale model.

…Depth Vidit, Depth Faid Vidit, Depth, Zeed, Seg, Segmentation, Scribble.

SD upscale enlarges the loaded reference image size by…

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. Absolutely you can.
You just have to use the "Upscale By" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

You can upscale the latent space, upscale the image directly, use ControlNets, and different upscaling models and methods.

It's a bit cumbersome to get the regular IPAdapter Plus node conditioned differently for each image in a batch, so I created a custom node that applies IPAdapter from scratch for each image.

I would like to create high-resolution videos using Stable Diffusion, but my PC can't generate high-resolution images.

…0 in ComfyUI I've come across three different methods that seem to be commonly used: Base Model with Latent Noise Mask, Base Model using InPaint VAE Encode, and using the UNET "diffusion_pytorch" InPaint-specific model from Hugging Face.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both.

And also bypass the AnimateDiff Loader model to the Original Model Loader in the To Basic Pipe node, else it will give you noise on the face (the AnimateDiff loader doesn't work on a single image, you need 4 at least, maybe, and FaceDetailer can handle only 1). The only drawback is…

The workflow is kept very simple for this test: Load Image, Upscale, Save Image.

I want to replicate the "upscale" feature inside "Extras" in A1111, where you can select a model and the final size of the image.

This means that in the upscaling process new details can be added to the image, depending on the denoising strength.

But that's not what I want to do; I want to create a NEW image with high denoise and then automatically "hires fix" that.

Put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine.

…0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Upscale with Ultimate SD Upscale.
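The arithmetic in the comment above (4x model upscale, then a 0.5 "upscale by") is worth pinning down, since mixing model factors and fractional rescales is a common source of unexpected resolutions. A minimal sketch; the helper name is hypothetical, not a ComfyUI node.

```python
def net_upscale_size(width, height, model_factor=4.0, rescale=0.5):
    """Final resolution after a model upscale followed by a fractional
    bicubic "upscale by" pass, e.g. a 4x model then 0.5 gives a net 2x."""
    return (round(width * model_factor * rescale),
            round(height * model_factor * rescale))
```

With the defaults this reproduces the 512 * 4 * 0.5 = 1024 example from the comment.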
…) These images are zoomed-in views that I created to examine the details of the upscaling process, showing how much detail…

Ultimate SD Upscale for ComfyUI.

(I didn't use AI, latent noise, and a prompt to generate it.) What nodes/workflow would you guys use to get the best results? As my test bed, I'll…

Yes, with Ultimate SD Upscale.

Over multiple batches, the memory allocated to Python creeps up until it has entirely consumed all available RAM.

I don't get where the problem is. I have checked the ComfyUI examples and used one of their hires fixes, but when I upscale the latent image I get a glitchy image (only the non-masked part of the original I2I image) after the second pass; if I upscale the image out of the latent space and then into latent again for the second pass, the result is OK.

Drag & drop to the frame of img2img.

Workflow: Google Drive link.

For a personal project I need to create 100…

How to batch load images? I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, and then upload the next…

Well, it will work, but you can't get the exact same images if you 'pick' a different number than all the images in the initial batch.

…3 and denoise at 0.…

To create a new image from scratch you input an Empty Latent Image node, and to do img2img you use a Load Image node and a VAE Encode to load the image.

OK, ran my first test. Images are now saved with metadata readable in A1111 WebUI, Vladmandic SD.Next, and SD Prompt Reader.

…0 base, no LoRAs, 20 steps, 9 cfg, dpmpp_2m, karras, using ComfyUI. Thanks!

EricRollei: Hi Chris, thanks again for these nodes.
The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512, with the pieces overlapping each other…

There is making a batch using the Empty Latent Image node's batch_size widget, and there is making a batch in the control panel.

THE SCIENTIST - 4096x2160.

To duplicate parts of a workflow from one…

Can't really recommend Caffe myself.

You should use this if you want to refine the base image without straying from it.

The Upscale Image node can be used to resize pixel images.

Batch image generation in ComfyUI. Question | Help: Is there a way to import prompts/settings from a text/CSV file for batch image generation in ComfyUI?

Almost identical.

I am cleanly able to make 2048 images with my 1.5…

At a minimum, you need just 4 nodes. That will upscale with no latent invention/injection of creative bits, but still intelligently adds pixels.

I've put a few labels in the flow for clarity.

With CFG 1 it used to work. I've got this problem where I can't reproduce a similar result in ComfyUI.

Now, I'm attempting to upscale a larger-sized image of a person while maintaining clear facial details. The 4X upscalers I've tried aren't great with it; I suspect the starting detail is too low.

Batch Generate Images.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

Great update, Chris.

I recommend you do not use the same text encoders as 1.5.

thatgentlemanisaggro: Your best…

Example 1 - Multi-Upscale: has 5 parameters which allow you to easily change the prompt and experiment.

Then on the new node: control after generate: increment.

My team is working on building a pipeline for processing images.
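The tiling idea described above, cutting the upscaled image into SD-sized overlapping pieces, reduces to box arithmetic. This is a simplified sketch under assumed defaults; the real Ultimate SD Upscale also handles seam blending, padding modes, and per-tile diffusion, none of which is shown here.

```python
def axis_starts(length, tile, overlap):
    """Tile origins along one axis; the last tile is shifted back so it
    ends exactly at the image border instead of leaving a sliver."""
    step = tile - overlap
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile, step))
    if starts[-1] != length - tile:
        starts.append(length - tile)
    return starts

def tile_boxes(width, height, tile=512, overlap=64):
    """Overlapping (left, top, right, bottom) crop boxes covering the image."""
    return [
        (l, t, min(l + tile, width), min(t + tile, height))
        for t in axis_starts(height, tile, overlap)
        for l in axis_starts(width, tile, overlap)
    ]
```

The overlap is what lets the tile seams be blended away after each tile has been re-diffused.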
Neg prompt: text, watermark, bright, oversaturated, dark.

🧩 Comfyroll/🔍 Upscale.

Apprehensive_Sky892: ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

Upscale x1.… Yet it's not happening.

You can't do batches with that website.

I send the output of AnimateDiff to UltimateSDUpscale with 2x ControlNet Tile and 4xUltraSharp.

Delete duplicates in image batch node? I'm looking for a node that takes an input of images in a batch, searches through them, and automatically deletes images that look identical.

Correct me if I'm wrong.

Check out the example workflows on the GitHub.

Also the exact same position of the body. Enjoy.

Pixel Art XL v1.…

Workflow for detailing and changing faces? I am looking for a workflow example on using FaceDetailer.

Generated image at 2752x512 at 20 steps.

You can use the "control filter list" to filter for the images you want.

Is there a custom node or a way to replicate the A1111 Ultimate Upscale extension in ComfyUI?

I'm using mm_sd_v15_v2.ckpt motion with Kosinkadink's Evolved.

Look up the latent upscale method as well; this performs a staggered upscale to your desired resolution in one workflow queue.

CUI can do a batch of 4 and stay within the 12 GB.

I found out that A1111's ESRGAN upscalers would split the image into tiles for upscaling.

…1.5, and I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

The only approach I've seen so far is using the Hires Fix node, where its latent input comes from AI upscale > downscale image nodes.
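For the "delete duplicates in image batch" question above, here is a minimal sketch that drops byte-identical frames while keeping the first occurrence. This is an assumption-level illustration: catching frames that merely *look* identical would need a perceptual hash, not the exact SHA-256 used here.

```python
import hashlib

def dedupe_exact(frames):
    """Keep only the first occurrence of each byte-identical frame.
    `frames` are bytes-like buffers (e.g. raw decoded image data)."""
    seen, kept = set(), []
    for frame in frames:
        digest = hashlib.sha256(frame).digest()
        if digest not in seen:
            seen.add(digest)
            kept.append(frame)
    return kept
```

Exact hashing is cheap and order-preserving, which is usually what you want before handing a cleaned batch to a video-combine step.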
…48 (ControlNet depth map and softedge were used to get rid of unwanted artifacts during upscaling).

I had some great feedback, and now I'll share the results! AutoCrispy now supports 6 different backends, plus ESRGAN (and its entire library of models!). These backends now also include RealSR, SRMD, Waifu2x, and Anime4K.

Repeat single image, or single image to video.

…2x, upscale using a 4x model (e.g.…

…Alpha + SDXL Refiner 1.… Please help me fix this issue.

Go into the mask editor for each of the two and paint in where you want your subjects.

I'm trying to find a way of upscaling the SD video up from its 1024x576.

The Empty Latent Image will run however many you enter through each step of the workflow.

…0 denoise. In Automatic it is quite easy and the picture at the end is also clean; color gradients are smooth, and details on the body like the veins are not so strongly emphasized.

The tiling is using my custom SimpleTiles node, which is much simpler than…

The batch index should match the picture index minus 1. I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it.

That's the question.

Queue the flow and you should get a yellow image from the Image Blank.

To this effect, I have a Tensor Batch to Image node in my WAS Node Suite, where you can select an image to work on, and can use multiple to cover the whole batch.

Stability AI accused by Midjourney of causing a server outage by attempting to scrape MJ image + prompt pairs.

Copy that (clipspace) and paste it (clipspace) into the Load Image node directly above (assuming you want two subjects).

This is faster than trying to do it all at once, and keeps the high res.

Had the same issue.

To upscale images using AI, see the Upscale…

These ComfyUI nodes can be used to restore faces in images, similar to the face restore option in the AUTOMATIC1111 webui.

Please feel free to criticize and tell me what I may be doing silly.
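The "batch index should match the picture index minus 1" convention above is an easy off-by-one to trip over. A small sketch of the selection logic with a hypothetical helper name; real nodes operate on tensors, but the slicing idea is the same.

```python
def select_from_batch(batch, picture_number):
    """Pick one image from a batch; picture numbers are 1-based, so
    batch_index = picture_number - 1. Returns a length-1 slice so
    downstream nodes still see a batch dimension."""
    index = picture_number - 1
    if not 0 <= index < len(batch):
        raise IndexError(f"picture {picture_number} outside batch of {len(batch)}")
    return batch[index:index + 1]
```

Returning a slice rather than the bare element is the detail that keeps "the length should be 1" true for whatever consumes the output.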
Please keep posted.

Upscale Image - ComfyUI Community Manual.

Video tutorial on how to use ComfyUI, a powerful and modular Stable Diffusion GUI and backend, is here.

What I want: generate with model A at 512x512 -> upscale -> regenerate with model A at higher res; generate with model B at 512x512 -> upscale -> regenerate with model B at higher res; and so on.

Nothing special, but easy to build off of.

Resize down to what you want.

Running it through an image upscale on bilinear and 3.…

See comments made yesterday about this: #54 (comment). I did want it to be totally different, but ComfyUI is pretty limited when it comes to the Python nodes without customizing ComfyUI itself.

The output doesn't seem overly blurry to my eyes.

(I am unable to upload the full-sized image.)

The last step is an "Image Save" with prefix and path.

It's simple and straight to the point.

I tried math operations, but they are incompatible with primitives, as I'm sure you know.

ComfyUI - SDXL basic-to-advanced workflow tutorial - part 5.

But if you look at the ComfyUI examples for Area Composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.

Image-to-image was taking < 10s.

The nodes required are listed above each node; hopefully you have ComfyUI Manager and I didn't miss any custom nodes. Let me know if you find this useful :D I may do more.

Usually I use two…

I kinda need help. I use this workflow, and idk why, but the output of it is always blurry. I tried adjusting the sampler and steps, but the images are still blurry.

So, I just made this workflow in ComfyUI.
The length should be 1 in this case.

But if it helps, try different upscalers than RealESRGAN 4x, maybe the 4x UltraSharp upscaler, and see if it's more…

The first is a tool that automatically removes the background of loaded images (we can do this with WAS), BUT it also allows for dynamic repositioning, the way you would do it in Krita.

…ckpt motion with Kosinkadink's Evolved.
I want a checkbox that says "upscale", or whatever, that I can turn on and off.

Almost exaggerated.

I use SD mostly for upscaling real portrait photography, so facial fidelity (accuracy to source) is my priority.

ComfyUI Node: 🔍 CR Upscale Image.

It's mainly focused on generating artificial images, but the upscaling option is really handy.

I am pretty sure it is possible; just point me in the right direction :P

Best is to copy the seed before you do…

You could try to pp your denoise at the start of an iterative upscale at, say, 0.5 denoise to fix the distortion (although obviously it's going to change your image).

I've uploaded the workflow link and the generated pictures from after and before Ultimate SD Upscale for reference.

Got sick of all the crazy workflows.

Image Upscale does not give true high-resolution results; the quality of the upscale is between the base resolution and the targeted one.

…= 1024). Instead, you need to go down to "Scripts" at the bottom and select the "SD Upscale" script.

Paper: "Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model".

This will get to the low-resolution stage and stop.

(* I think it's better to avoid 4x upscale generation.) (2) Repeat step 1 multiple times to increase the size to x2, x4, x8, and so on.

CUI is also faster.

…5, don't need that many steps.
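The "repeat step 1 multiple times to reach x2, x4, x8" advice above amounts to planning intermediate resolutions instead of one big leap. A small sketch with an illustrative helper name; the step factor of 2 is the assumption the comments themselves recommend.

```python
def staged_sizes(width, height, target_factor, step=2.0):
    """Intermediate resolutions for upscaling in smaller jumps,
    e.g. 512 -> 1024 -> 2048 instead of one direct 4x pass."""
    sizes, factor = [], 1.0
    while factor < target_factor:
        factor = min(factor * step, target_factor)
        sizes.append((round(width * factor), round(height * factor)))
    return sizes
```

Each intermediate size is a separate upscale-plus-low-denoise pass, which is what keeps detail coherent compared to a single large jump.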