Invert Images to W space #8

Closed
falloncandra opened this issue Mar 23, 2021 · 8 comments

@falloncandra

Hi, thanks for your code!

I need to invert images into latent representations of size (1, 512) each. However, I noticed that each latent representation produced by your code has size (1, 18, 512) (I suppose this is the dimensionality of the W+ space).

Is there a way to get a latent representation of size (1, 512) using your code (probably the representation in the W space)?
Or do you think one of the 18 entries of the (1, 18, 512) tensor would be reasonable to use as the image representation for further editing in the latent space?

Thank you very much!

@yuval-alaluf

Hi @falloncandra ,
May I ask why you need to invert your images specifically to a size of (1, 512)? Generally, this can be done, but your reconstruction will be quite poor as a single 512-dimensional vector is typically not expressive enough.

@falloncandra
Author

falloncandra commented Mar 25, 2021

Hi, thanks for your reply!

For my thesis project, I need to apply the semantic manipulation here (Figure 3, Equation 3) to the latent code of a real image (the result of GAN inversion). However, that method only works with a 2D latent code. Moreover, according to the discussion here, the semantics of StyleGAN reside in W (1, 512). In my case, the semantics are more important than the reconstruction quality (I don't mind if the reconstructed image looks quite different from the original, as long as it has the same attributes, e.g. still female, still smiling). Therefore, I think I need to get the latent code in W (1, 512).

I really like your work because it uses PyTorch and the inversion is very fast (roughly 0.5 s per image, compared to other inversion methods that can take up to 8 s). Hence, I really hope I can use the pre-trained model in this repo to obtain the latent codes in W and generate new images after manipulating them.

Could you please tell me how to achieve that with your code?

Thank you very much for your help!

Edit:
Hi, after reading your paper more thoroughly, I realised that each of the 18 style vectors comes from the same vector w with small perturbations. Hence, I would like to know your opinion on which option makes more sense:

  1. learn semantic boundaries in the (1, 18 * 512) space and edit an image by manipulating the reshaped (1, 18 * 512) latent code, or
  2. learn the semantic boundaries in the (1, 512) space, then edit an image by applying the same manipulation to each of the 18 (1, 512) latent codes?

Any thoughts would be much appreciated! Thanks!

@omertov
Owner

omertov commented Mar 26, 2021

Hi @falloncandra!

To fully understand your question, are you looking to apply an already learnt boundary to the inversion's latent code, or do you plan on training the boundary on codes obtained from the encoder?

Manipulating each of the 18 latent code entries based on a learnt semantic boundary in the (1, 512) space should work fine (option 2). In fact, this is exactly how we apply the InterFaceGAN editing (based on a learnt (1, 512) boundary).
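
For illustration only, here is a minimal sketch of such a per-entry edit; latents, boundary, alpha, and apply_boundary are assumed names for this example, not part of the repository:

import torch

# Assumed inputs: latents is a (1, 18, 512) W+ inversion from the encoder,
# boundary is a learnt unit-norm (1, 512) semantic direction (e.g. an InterFaceGAN
# boundary), and alpha controls the edit strength.
def apply_boundary(latents, boundary, alpha):
    # Add the same (1, 512) offset to each of the 18 style entries (option 2).
    return latents + alpha * boundary.view(1, 1, -1)

edited_latents = apply_boundary(latents, boundary, alpha=3.0)

The edited (1, 18, 512) code can then be decoded exactly like the original inversion.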

@falloncandra
Author

Hi, thanks for your answer and clarification!

So first, I want to train the boundary on the inversion latent codes of some training images (if using option 2, I will probably average the 18 style vectors so that each training image is represented by a single (1, 512) vector; I think this should work fine because the 18 style vectors originate from the same (1, 512) w vector with small offsets. Do you agree?).

After that, I want to manipulate test images by applying that learnt boundary to the inversion's latent code of the test images.

Do you think training the boundary on the inversion latent codes (as opposed to randomly generated w vectors) will work fine too?
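
For concreteness, a minimal sketch of the averaging-and-boundary-training step described above; all_latents, attribute_labels, and the use of scikit-learn's LinearSVC (a linear SVM, as in InterFaceGAN) are assumptions for this example, not part of the repository:

import torch
from sklearn.svm import LinearSVC

# Assumed inputs: all_latents is an (N, 18, 512) tensor of inversions of the training
# images, attribute_labels is a length-N array of binary attribute labels (e.g. smiling).
w_codes = all_latents.mean(dim=1)                # (N, 512): average the 18 style entries
svm = LinearSVC().fit(w_codes.cpu().numpy(), attribute_labels)
boundary = torch.from_numpy(svm.coef_).float()   # (1, 512) normal of the separating hyperplane
boundary = boundary / boundary.norm()            # unit-norm direction for later editing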

Thank you very much!

@omertov
Owner

omertov commented Mar 26, 2021

Training the boundary using the inversion latent codes might not suit your needs.
Although trained for small perturbations, the encoder still yields an 18x512 code that is only close to the W subspace.
Since each of the 18 style code entries corresponds to different semantic attributes, averaging them (or using only the main code) will change the resulting image.

As an example, here is a visualization of the images produced by the inversion latent code (left), the main w style vector (middle), and the average code over the 18 entries (right).
As can be observed, while the overall geometry of the face is similar, changes in texture (colors) and some mid-level details cause a large change in the output image.
[Image: github_response]

To test this behavior yourself, you can run the following commands from the notebook after obtaining the inversion latents:

import torch

# Stack three latent variants: the original (1, 18, 512) inversion, the first style
# entry repeated 18 times, and the mean over the 18 entries repeated 18 times.
comparison_latents = torch.cat([
                                latents,
                                latents[:, 0, :].unsqueeze(1).repeat(1, 18, 1),
                                latents.mean(dim=1).unsqueeze(1).repeat(1, 18, 1)
                                ])

and then to generate the comparison image (using the initialized LatentEditor object):

editor._latents_to_image(comparison_latents)

In case the attribute preservation of the above method is not sufficient for your needs,
you can opt to find the boundary in the 18x512 space, which might be challenging,
or alternatively train the e4e encoder yourself to output a single style code (repeated 18 times).

The latter can be achieved by using the --progressive_start training flag and setting it to a training step near the end of training (for example, train the encoder for 250k steps and only start training the deltas (the per-entry perturbations) at step 250k, resulting in encoder checkpoints trained to use only the main w style vector).

Hope this helps with your experiments.

@falloncandra
Author

Hi @omertov!

Thank you very much for your clear answers, examples, and instructions! I will think more about this information. Can't thank you enough!

@aminaha1999

Hi, I have a problem with e4e. In every notebook I test, it fails to download encoder4editing from https://docs.google.com/uc?export=download&confirm=&id=1cUv_reLE6k3604or78EranS7XzuVMWeO, so I can't upload my picture and convert it to a latents.pt for the next step. I have also attached an image of the error.

@omertov
Owner

omertov commented Aug 17, 2021

For closure,
I have added a new encoder type which encodes into the W* space (a 512-dimensional vector repeated 18 times), which can be used to look for the InterFaceGAN directions (although it needs testing).
In case this is still relevant, I would love to hear about the results!

Best,
Omer

@omertov omertov closed this as completed Aug 17, 2021