
How to reproduce the results shown in the paper? #26

Open
zechengtang opened this issue Aug 29, 2021 · 14 comments

@zechengtang

Hi @Skylion007

I'm trying to reproduce the results shown in your paper, as follows:
[screenshot: expected results from the paper]

But I only get the results like this:
[screenshots: my outputs, Figure_1–Figure_4]

So what is the problem?

@Skylion007
Contributor

Are these the pretrained models or one that you have retrained? If it's inference code, this issue should provide some code that runs well out of the box: #17 (comment)

@zechengtang
Author

Here's my inference code:

import tensorflow as tf

from utils import *
from model import Model
from tensorpack.tfutils.tower import TowerContext  # explicit import in case utils does not re-export it

from matplotlib import pyplot as plt
import cv2

model_path = '/path/to/pretrained/checkpoint'  # placeholder: pretrained model checkpoint
image_path = '/path/to/input.jpg'              # placeholder: input image

sess = tf.InteractiveSession()
model = Model()

with TowerContext('', is_training=False):
    input1, input2 = model.inputs()
    model.build_graph(input1, input2)

saver = tf.train.Saver()
saver.restore(sess, model_path)

img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

face_detector = cv2.CascadeClassifier('lbpcascade_frontalface_improved.xml')
face_coords = face_detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

# x, y, w, h = face_coords[0]
# face = img[y:y+h, x:x+w]
# face = cv2.resize(face, (128, 128))
# face = face.reshape(-1, 128, 128, 3)

# NOTE: reshape only reinterprets the pixel buffer; this line is valid only if
# img is already exactly 128x128x3. For any other size, crop and resize first
# (see the commented-out lines above).
face = img.reshape(-1, 128, 128, 3)

res = sess.run([model.output1, model.output2], {input1: face, input2: face})
plt.imshow(res[0][0])
plt.show()
plt.imshow(res[1][0])
plt.show()
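As an aside on the `img.reshape(-1, 128, 128, 3)` line: reshape does not resize. For any image larger than 128×128 it just re-tiles the raw buffer into scrambled chunks. A minimal NumPy sketch of the difference (pure-NumPy stand-in; `cv2.resize` would interpolate properly):

```python
import numpy as np

# A dummy 256x256 RGB "image": reshape cannot downsample it to 128x128.
img = np.arange(256 * 256 * 3, dtype=np.uint8).reshape(256, 256, 3)

batch = img.reshape(-1, 128, 128, 3)
print(batch.shape)   # (4, 128, 128, 3): four scrambled tiles, not one resized image

# Naive 2x downsample by striding keeps the geometry.
small = img[::2, ::2, :]
print(small.shape)   # (128, 128, 3)
```

So if the input image is not already a 128×128 face crop, the model sees garbage, which alone can produce fuzzy or bizarre outputs.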

For convenience, I have changed model.py at lines 181–182 from

viz3('A_recon', A, AB, ABA)
viz3('B_recon', B, BA, BAB)

to

self.output1 = viz3('A_recon', A, AB, ABA)
self.output2 = viz3('B_recon', B, BA, BAB)
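For `self.output1`/`self.output2` to be usable in `sess.run`, `viz3` must return the visualization tensor. In the repo it builds a side-by-side view of input, translation, and reconstruction; here is a hypothetical NumPy stand-in for that concatenation (the actual `viz3` works on TF tensors and may differ):

```python
import numpy as np

def viz3_like(name, a, ab, aba):
    """Concatenate input, translated, and reconstructed images side by side.

    Hypothetical stand-in for the repo's viz3; joins NHWC batches along width.
    """
    return np.concatenate([a, ab, aba], axis=2)

a = np.zeros((1, 128, 128, 3), dtype=np.uint8)
print(viz3_like('A_recon', a, a, a).shape)  # (1, 128, 384, 3)
```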

@zechengtang
Author

Are these the pretrained models or one that you have retrained? If it's inference code, this issue should provide some code that runs well out of the box: #17 (comment)

These are pretrained models, 'human2doll' and 'cat2dog'.

@Skylion007
Contributor

Skylion007 commented Aug 29, 2021 via email

Let me know if you run into any issues with the linked code.

@zechengtang
Author

My issue is:

I used the pretrained models you provided on Google Drive, with the same input images shown in the paper, but I get the fuzzy or bizarre outputs shown above.

@Skylion007
Contributor

Skylion007 commented Aug 29, 2021 via email

@zechengtang
Author

No. @Skylion007

@zechengtang
Author

I have tested the code you linked earlier, and the outputs are the same as those I showed before.

@Skylion007
Contributor

Skylion007 commented Aug 31, 2021 via email

Okay, this is pretty bizarre. Can you list the versions of your dependencies?

@zechengtang
Author

The versions of my dependencies are as follows:

  • python 3.6.12
  • tensorflow-gpu 1.13.1
  • tensorpack 0.8.9
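(A small helper I used to collect these, not from the repo, handling packages that are missing or expose no version:)

```python
import importlib

def report_versions(names=("tensorflow", "tensorpack", "cv2")):
    """Return a 'name version' line per package, tolerating missing installs."""
    lines = []
    for name in names:
        try:
            mod = importlib.import_module(name)
            lines.append(f"{name} {getattr(mod, '__version__', 'unknown')}")
        except ImportError:
            lines.append(f"{name} not installed")
    return lines

for line in report_versions():
    print(line)
```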

@Skylion007
Contributor

I just pushed a new branch, test_gan_debug, pinned to the exact tensorpack version that should work. Let me know if that still fails.

@zechengtang
Author

The result is the same...
[screenshot: output, unchanged]

@Skylion007
Contributor

@newcomertzc Did you try the webcam script as well? I want to make sure it isn't an alignment issue.

@zechengtang
Author

@newcomertzc Did you try the webcam script as well? I want to make sure it isn't an alignment issue.

Yes, and the results are also the same.

In fact, the webcam script captures frames from the computer's camera. After I changed its input from a camera frame to an image file on disk, the results were the same as those I showed before.
