
Add Prompt2Prompt pipeline #4563

Merged · 36 commits · Sep 14, 2023
Conversation

@UmerHA (Contributor) commented Aug 10, 2023

What does this PR do?

Adds the Prompt-to-Prompt pipeline, as discussed in #2121

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

Pipelines: @patrickvonplaten and @sayakpaul

@sayakpaul requested review from @DN6 and @yiyixuxu on August 10, 2023
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@UmerHA (Contributor, Author) commented Aug 17, 2023

Hi @DN6 @yiyixuxu, pinging you as the PR has been stale for a week. :)

@DN6 (Collaborator) commented Aug 17, 2023

Hi @UmerHA. I should be able to review this by tomorrow.

@patrickvonplaten (Contributor)

I'm not sure how relevant this pipeline still is, to be honest. @apolinario, do you think there is still big demand for this pipeline?

@Zeqiang-Lai

As a user of this pipeline, I hope it can be merged, or at least made a community pipeline.

@patrickvonplaten (Contributor)

@UmerHA,

Do you think we could add this pipeline as a community pipeline instead? Our test-suite and maintenance costs are exploding at the moment, and we need to be more selective about which pipelines are added to "main".

@UmerHA (Contributor, Author) commented Sep 5, 2023

@patrickvonplaten Absolutely, will do!

- Moved prompt2prompt pipeline from main to community
- Deleted tests
- Moved documentation to community and shortened it

@UmerHA (Contributor, Author) commented Sep 8, 2023

@UmerHA,

Do you think we could add this pipeline as a community pipeline instead? Our test-suite and maintenance costs are exploding at the moment, and we need to be more selective about which pipelines are added to "main".

Have now:

  • moved the pipeline to community
  • deleted the tests
  • condensed the docs and moved them to community

I think I've done everything, but I'd appreciate it if you could quickly check. Thanks!

@patrickvonplaten (Contributor) left a review comment

Just some final clean-ups :-)

@keturn (Contributor) commented Sep 13, 2023

As this has been an InvokeAI feature since before Invoke adopted diffusers, it's been on the list of things I've hoped to see supported here.

Partly to ease Invoke's maintenance for a feature that has required some messy kludges at times (factoring out the AttentionProcessor helped, but I lost track of where that whole thing ended up), but also because Invoke's implementation only does the replace operator and I've been looking forward to being able to use reweight.
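
(For reference, a reweight edit should look roughly like the sketch below once this lands as a community pipeline. The kwarg names follow the community pipeline's documented interface; treat the exact names and values as assumptions.)

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", custom_pipeline="pipeline_prompt2prompt"
).to("cuda")

# "reweight" keeps the prompt fixed and rescales the cross-attention of chosen
# tokens, so both prompts are identical here.
prompts = ["a smiling rabbit doll", "a smiling rabbit doll"]
cross_attention_kwargs = {
    "edit_type": "reweight",
    "n_self_replace": 0.2,
    "n_cross_replace": 0.8,
    "equalizer_words": ["smiling"],   # tokens whose attention is rescaled
    "equalizer_strengths": [5.0],     # >1 amplifies, <1 suppresses
}
image = pipe(prompt=prompts, cross_attention_kwargs=cross_attention_kwargs).images[0]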

  • deleted the tests

😵 a fully implemented feature with tests and then you deleted them? [keturn dies inside]

@patrickvonplaten (Contributor)

  • deleted the tests

Our tests are exploding and becoming too expensive/slow (currently at 20 minutes just for the fast tests). We sadly need to be more selective with pipeline testing.

@patrickvonplaten (Contributor)

Thanks a mille for the PR, @UmerHA

@keturn (Contributor) commented Sep 14, 2023

I've opened an issue over at invoke-ai/InvokeAI#4541 to figure out how to handle things on Invoke's side. @UmerHA, if you're interested in contributing prompt2prompt and attention-map-visualization code to Invoke, your input would be very welcome!

@UmerHA deleted the prompt2prompt branch on September 26, 2023 at 19:48
@betterze commented Dec 2, 2023

How can I use this? When I try:

from diffusers.pipelines import Prompt2PromptPipeline

it outputs:

ImportError: cannot import name 'Prompt2PromptPipeline' from 'diffusers.pipelines' (/opt/venv/lib/python3.10/site-packages/diffusers/pipelines/__init__.py)

I have the newest version of diffusers.

@patrickvonplaten (Contributor)

@betterze,

can you try:

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="pipeline_prompt2prompt")
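
From there, an edit call should look roughly like this (the kwargs follow the community pipeline's documented interface; treat the exact prompts and values as illustrative):

prompts = ["A turtle playing with a ball",
           "A monkey playing with a ball"]

# "replace" swaps one token's cross-attention for another ("turtle" -> "monkey")
cross_attention_kwargs = {
    "edit_type": "replace",
    "n_self_replace": 0.4,
    "n_cross_replace": {"default_": 1.0, "ball": 0.8},
}

outputs = pipeline(prompt=prompts, height=512, width=512,
                   num_inference_steps=50, cross_attention_kwargs=cross_attention_kwargs)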

@betterze

Does it support SDXL? When I run:

import torch
from pipeline_prompt2prompt import Prompt2PromptPipeline

pipe = Prompt2PromptPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"  # stabilityai/sd-turbo, runwayml/stable-diffusion-v1-5
)
pipe = pipe.to("cuda")

prompts = ['a glass of cocktail on the table',
           'a glass of white on the table']

# prompts = ['a sad face',
#            'a happy face']

cross_attention_kwargs = {
    "edit_type": "replace",
    "n_cross_replace": 1.0,
    # "n_self_replace": 1.0,
    "local_blend_words": ["cocktail", "white"],
}

generator = torch.Generator(device="cuda").manual_seed(1)
outputs = pipe(
    prompt=prompts,
    generator=generator,
    height=512,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.0,
    cross_attention_kwargs=cross_attention_kwargs,
)

the following error occurs:

Cell In[3], line 16
      8 cross_attention_kwargs = {
      9     "edit_type": "replace",
     10     "n_cross_replace": 1.0,
     11     # "n_self_replace": 1.0,
     12     "local_blend_words": ["cocktail", "white"]
     13     }
     15 generator = torch.Generator(device="cuda").manual_seed(1)
---> 16 outputs = pipe(prompt=prompts,generator=generator, height=512, width=512, num_inference_steps=50,guidance_scale=7.0, cross_attention_kwargs=cross_attention_kwargs)
     17 # outputs = pipe(prompt=prompts,generator=generator, height=512, width=512, num_inference_steps=1,guidance_scale=0.0, cross_attention_kwargs=cross_attention_kwargs,latents=inv_latents)
     19 from matplotlib import pyplot as plt

File /opt/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File /sensei-fs/tenants/Sensei-AdobeResearchTeam/share-zongzew/clio3/pipeline_prompt2prompt.py:245, in Prompt2PromptPipeline.__call__(self, prompt, height, width, num_inference_steps, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, output_type, return_dict, callback, callback_steps, cross_attention_kwargs, guidance_rescale)
    242 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
    244 # predict the noise residual
--> 245 noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample
    247 # perform guidance
    248 if do_classifier_free_guidance:

File /opt/venv/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py:967, in UNet2DConditionModel.forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, added_cond_kwargs, down_block_additional_residuals, mid_block_additional_residual, down_intrablock_additional_residuals, encoder_attention_mask, return_dict)
    964     aug_emb = self.add_embedding(text_embs, image_embs)
    965 elif self.config.addition_embed_type == "text_time":
    966     # SDXL - style
--> 967     if "text_embeds" not in added_cond_kwargs:
    968         raise ValueError(
    969             f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `text_embeds` to be passed in `added_cond_kwargs`"
    970         )
    971     text_embeds = added_cond_kwargs.get("text_embeds")

TypeError: argument of type 'NoneType' is not iterable

Thank you for your help.

@UmerHA (Contributor, Author) commented Dec 12, 2023

@betterze Unfortunately, no. SDXL versions of pipelines are their own classes, and there is currently no SDXL version of this pipeline.

Feel free to implement one!
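
(For illustration: SD 1.5 and SDXL live in separate pipeline classes in diffusers, so an SDXL port would be a new class rather than a config switch. A minimal sketch of the distinction:)

from diffusers import StableDiffusionPipeline, StableDiffusionXLPipeline

# SD 1.5 and SDXL pipelines are distinct classes with different UNet
# conditioning: SDXL's UNet additionally expects added_cond_kwargs with
# text_embeds/time_ids, which the SD-1.5-style Prompt2PromptPipeline never
# builds; hence the TypeError shown in the traceback above.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
sdxl = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")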

AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request on Apr 26, 2024
* Initial commit P2P

* Replaced CrossAttention, added test skeleton

* bug fixes

* Updated docstring

* Removed unused function

* Created tests

* improved tests

- made fast inference tests faster
- corrected image shape assertions

* Corrected expected output shape in tests

* small fix: test inputs

* Update tests

- used conditional unet2d
- set expected image slices
- edit_kwargs are now not popped, so pipe can be run multiple times

* Fixed bug in int tests

* Fixed tests

* Linting

* Create prompt2prompt.md

* Added to docs toc

* Ran make fix-copies

* Fixed code blocks in docs

* Using same interface as StableDiffusionPipeline

* Fixed small test bug

* Added all options SDPipeline.__call__ has

* Fixed docstring; made __call__ like in SD

* Linting

* Added test for multiple prompts

* Improved docs

* Incorporated feedback

* Reverted formatting on unrelated files

* Moved prompt2prompt to community

- Moved prompt2prompt pipeline from main to community
- Deleted tests
- Moved documentation to community and shortened it

* Update src/diffusers/utils/dummy_torch_and_transformers_objects.py

Co-authored-by: Patrick von Platen <[email protected]>

---------

Co-authored-by: Patrick von Platen <[email protected]>