
Request for additional weights #21

Closed
influgenai opened this issue Feb 17, 2024 · 4 comments

@influgenai

Hello,
Thanks a lot for creating this project. I really appreciate it, good job! :)

Could you please add some extra model weights?

Upscaler:

  • ESRGAN/4x-UltraMix_Smooth.pth

Adetailer models:

  • hand_yolov8s.pt
  • face_yolov8m.pt
  • person_yolov8m-seg.pt

Segmentation:

  • sam_vit_b_01ec64.pth
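For reference, these weights would normally live in the standard ComfyUI model folders (the paths below are the usual defaults used by the Ultralytics and SAM loaders in a local install, not something defined by this repo — an assumption, shown only as a sketch):

```shell
# Typical ComfyUI model folders for the requested weights (assumed defaults):
mkdir -p ComfyUI/models/upscale_models/ESRGAN   # 4x-UltraMix_Smooth.pth
mkdir -p ComfyUI/models/ultralytics/bbox        # face_yolov8m.pt, hand_yolov8s.pt
mkdir -p ComfyUI/models/ultralytics/segm        # person_yolov8m-seg.pt
mkdir -p ComfyUI/models/sams                    # sam_vit_b_01ec64.pth
```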

I'm trying to run this workflow:

```json
{ "73": { "inputs": { "ckpt_name": "epicrealism_naturalSinRC1VAE.safetensors", "vae_name": "Baked VAE", "clip_skip": -1, "lora_name": "None", "lora_model_strength": 0.03, "lora_clip_strength": 2, "positive": "cat", "negative": "dog", "token_normalization": "none", "weight_interpretation": "comfy", "empty_latent_width": 832, "empty_latent_height": 960, "batch_size": 1 }, "class_type": "Efficient Loader", "_meta": { "title": "Efficient Loader" } },
  "74": { "inputs": { "seed": 584918518232653, "steps": 30, "cfg": 7, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 1, "preview_method": "auto", "vae_decode": "true", "model": [ "73", 0 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "latent_image": [ "73", 3 ], "optional_vae": [ "73", 4 ] }, "class_type": "KSampler (Efficient)", "_meta": { "title": "KSampler (Efficient)" } },
  "75": { "inputs": { "samples": [ "74", 3 ], "vae": [ "74", 4 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } },
  "81": { "inputs": { "upscale_by": 2, "seed": 838508927827370, "steps": 40, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.2, "mode_type": "Linear", "tile_width": 512, "tile_height": 512, "mask_blur": 8, "tile_padding": 32, "seam_fix_mode": "None", "seam_fix_denoise": 1, "seam_fix_width": 64, "seam_fix_mask_blur": 8, "seam_fix_padding": 16, "force_uniform_tiles": true, "tiled_decode": false, "image": [ "143", 0 ], "model": [ "74", 0 ], "positive": [ "74", 1 ], "negative": [ "74", 2 ], "vae": [ "74", 4 ], "upscale_model": [ "82", 0 ] }, "class_type": "UltimateSDUpscale", "_meta": { "title": "Ultimate SD Upscale" } },
  "82": { "inputs": { "model_name": "ESRGAN/4x-UltraMix_Smooth.pth" }, "class_type": "UpscaleModelLoader", "_meta": { "title": "Load Upscale Model" } },
  "83": { "inputs": { "filename_prefix": "ComfyUI", "images": [ "81", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } },
  "127": { "inputs": { "wildcard": "", "Select to add LoRA": "Select the LoRA to add to the text", "Select to add Wildcard": "Select the Wildcard to add to the text", "model": [ "74", 0 ], "clip": [ "73", 5 ], "vae": [ "73", 4 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "bbox_detector": [ "128", 0 ], "sam_model_opt": [ "129", 0 ], "segm_detector_opt": [ "130", 1 ] }, "class_type": "ToDetailerPipe", "_meta": { "title": "ToDetailerPipe" } },
  "128": { "inputs": { "model_name": "bbox/face_yolov8m.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } },
  "129": { "inputs": { "model_name": "sam_vit_b_01ec64.pth", "device_mode": "AUTO" }, "class_type": "SAMLoader", "_meta": { "title": "SAMLoader (Impact)" } },
  "130": { "inputs": { "model_name": "segm/person_yolov8m-seg.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } },
  "131": { "inputs": { "guide_size": 768, "guide_size_for": true, "max_size": 1024, "seed": 838508927827370, "steps": 30, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.5, "feather": 5, "noise_mask": true, "force_inpaint": false, "bbox_threshold": 0.5, "bbox_dilation": 10, "bbox_crop_factor": 3, "sam_detection_hint": "center-1", "sam_dilation": 0, "sam_threshold": 0.93, "sam_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "sam_mask_hint_use_negative": "False", "drop_size": 10, "refiner_ratio": 0.2, "cycle": 1, "inpaint_model": false, "noise_mask_feather": 10, "image": [ "75", 0 ], "detailer_pipe": [ "127", 0 ] }, "class_type": "FaceDetailerPipe", "_meta": { "title": "FaceDetailer (pipe)" } },
  "133": { "inputs": { "images": [ "131", 1 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } },
  "137": { "inputs": { "masks": [ "131", 3 ] }, "class_type": "Convert Masks to Images", "_meta": { "title": "Convert Masks to Images" } },
  "138": { "inputs": { "images": [ "137", 0 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } },
  "139": { "inputs": { "wildcard": "perfect hands ", "Select to add LoRA": "Select the LoRA to add to the text", "Select to add Wildcard": "Select the Wildcard to add to the text", "model": [ "74", 0 ], "clip": [ "73", 5 ], "vae": [ "73", 4 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "bbox_detector": [ "140", 0 ], "sam_model_opt": [ "141", 0 ], "segm_detector_opt": [ "142", 1 ] }, "class_type": "ToDetailerPipe", "_meta": { "title": "ToDetailerPipe" } },
  "140": { "inputs": { "model_name": "bbox/hand_yolov8s.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } },
  "141": { "inputs": { "model_name": "sam_vit_b_01ec64.pth", "device_mode": "AUTO" }, "class_type": "SAMLoader", "_meta": { "title": "SAMLoader (Impact)" } },
  "142": { "inputs": { "model_name": "bbox/hand_yolov8s.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } },
  "143": { "inputs": { "guide_size": 768, "guide_size_for": true, "max_size": 1024, "seed": 730958200346548, "steps": 30, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.5, "feather": 5, "noise_mask": true, "force_inpaint": false, "bbox_threshold": 0.5, "bbox_dilation": 10, "bbox_crop_factor": 3, "sam_detection_hint": "center-1", "sam_dilation": 0, "sam_threshold": 0.93, "sam_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "sam_mask_hint_use_negative": "False", "drop_size": 10, "refiner_ratio": 0.2, "cycle": 1, "inpaint_model": true, "noise_mask_feather": 10, "image": [ "131", 0 ], "detailer_pipe": [ "139", 0 ] }, "class_type": "FaceDetailerPipe", "_meta": { "title": "FaceDetailer (pipe)" } },
  "144": { "inputs": { "images": [ "143", 1 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } },
  "149": { "inputs": { "masks": [ "143", 3 ] }, "class_type": "Convert Masks to Images", "_meta": { "title": "Convert Masks to Images" } },
  "150": { "inputs": { "images": [ "149", 0 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } } }
```
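As an aside, one way to sanity-check which weight files a workflow like this needs before running it is to scan the API-format JSON for string-valued model inputs. This is just an illustrative stdlib sketch, not part of this repo; the set of key names is an assumption based on the nodes in the workflow above:

```python
import json

def referenced_models(workflow: dict) -> set:
    """Collect model-file names referenced by a ComfyUI API-format workflow.

    Keys checked are assumptions based on the loader nodes used above.
    """
    model_keys = {"ckpt_name", "vae_name", "lora_name", "model_name"}
    found = set()
    for node in workflow.values():
        for key, value in node.get("inputs", {}).items():
            # Only plain strings count; list values like ["82", 0] are
            # links to other nodes, not filenames.
            if key in model_keys and isinstance(value, str):
                found.add(value)
    return found

# Small excerpt of the workflow above, for illustration:
excerpt = json.loads("""
{
  "82": {"inputs": {"model_name": "ESRGAN/4x-UltraMix_Smooth.pth"},
         "class_type": "UpscaleModelLoader"},
  "129": {"inputs": {"model_name": "sam_vit_b_01ec64.pth", "device_mode": "AUTO"},
          "class_type": "SAMLoader"}
}
""")
print(sorted(referenced_models(excerpt)))
# → ['ESRGAN/4x-UltraMix_Smooth.pth', 'sam_vit_b_01ec64.pth']
```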

Thanks a lot,
W

@fofr
Owner

fofr commented Feb 18, 2024

I think this PR will help:
#22

It adds:

  • sam_vit_b_01ec64.pth
  • sam_vit_h_4b8939.pth
  • sam_vit_l_0b3195.pth
  • face_yolov8m.pt
  • hand_yolov8s.pt
  • person_yolov8m-seg.pt

@fofr
Owner

fofr commented Feb 18, 2024

This workflow also uses Convert Masks to Images, which depends on the WAS node suite. Adding that is a work in progress; it's a big package, so it's taking a while.

@fofr
Owner

fofr commented Feb 18, 2024

I've also added 4x-UltraMix_Smooth.pth to that PR.

@influgenai
Author

Thank you, that's amazing.

@fofr closed this as completed Feb 20, 2024