Is there a way to use MobileStyleGAN as an image-to-image (A->B) style transfer model, similar to CycleGAN, rather than as an unconditional image synthesizer? I have a custom dataset of cat faces: one set is real (domain A) and the other is fake (domain B). I want an input from domain A to be translated into domain B.
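MobileStyleGAN, like other StyleGAN-family models, is an unconditional generator (latent code -> image), so it has no built-in A->B mode. A common workaround for getting image-to-image translation out of such a model is GAN inversion: train an encoder that maps an input image into the latent space of a generator trained on domain B (as in pixel2style2pixel). The sketch below is purely illustrative; the module names and shapes are toy stand-ins, not the MobileStyleGAN repo's actual API.

```python
# Hypothetical sketch of the encoder + frozen-generator (GAN inversion)
# approach. ToyEncoder/ToyGenerator are illustrative stand-ins, not the
# MobileStyleGAN API.
import torch
import torch.nn as nn


class ToyEncoder(nn.Module):
    """Maps a 3x64x64 image to a 512-dim latent code (stand-in encoder)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class ToyGenerator(nn.Module):
    """Stand-in for a pretrained domain-B generator (latent -> image)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 3 * 64 * 64)

    def forward(self, w):
        return torch.tanh(self.fc(w)).view(-1, 3, 64, 64)


encoder = ToyEncoder()          # trained to invert images into the latent space
generator = ToyGenerator()      # in practice: frozen, pretrained on domain B

img_a = torch.randn(1, 3, 64, 64)   # an image from domain A (real cats)
img_b = generator(encoder(img_a))   # reconstruction in the style of domain B
print(tuple(img_b.shape))           # (1, 3, 64, 64)
```

In a real setup the generator stays frozen and only the encoder is trained, typically with a pixel reconstruction loss plus a perceptual (LPIPS) loss, so that any domain-A input lands on the domain-B image manifold.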