
How to get started with the dataset file structure? #4

Open

rrryan2016 opened this issue Dec 22, 2020 · 9 comments

Comments

@rrryan2016

Hey, thanks for your great work and kind sharing.

I am just a beginner in face parsing, and I intend to get started with your work. :P

I'd like to first try it on Helen, which I downloaded via the link (https://www.sifeiliu.net/face-parsing) you provided.

The original folder structure is as below,

├── exemplars.txt
├── images
├── labels
├── points
├── README.txt
├── testing.txt
└── tuning.txt

But how can I get the same structure as in the README?

dataset/
  images/
  labels/
  edges/
  train_list.txt
  test_list.txt

Does train_list.txt include all the image paths in images/?

Does test_list.txt include the image-to-label pairings, given that each image may have multiple label PNG files?

Looking forward to your reply, or to any tutorial link for Helen.

@rrryan2016
Author

Hello, sorry for the disturbance again.

Since I couldn't prepare Helen as described above, I tried the code on LaPa instead, but I ran into a problem in compute_mean_ioU(): I couldn't find os.path.join(datadir, 'label_names.txt') or os.path.join(datadir, 'project', im_name + '.npy') in Helen, LaPa, or CelebAMask-HQ.

Could you please tell me what are these files, and how to get them if possible?

Thanks in advance.

@tegusi
Owner

tegusi commented Jan 4, 2021

Actually, we used an existing projection matrix to perform the alignment, but due to confidentiality we cannot provide the original data. You can use OpenCV or other related libraries to do the alignment yourself if necessary.
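Since the original projection matrices aren't released, a common substitute (not necessarily the authors' exact method) is to estimate a similarity transform from facial landmarks, e.g. the files in Helen's points/ folder, against a fixed template, and then warp with cv2.warpAffine. A minimal numpy sketch of the estimation step, with illustrative landmark values:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src landmark points onto dst; both have shape (N, 2).
    Returns a 2x3 matrix in the form cv2.warpAffine expects."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    y = dst.reshape(-1)  # interleaved x0, y0, x1, y1, ...
    # Each point contributes two rows: x' = a*x - b*y + tx, y' = b*x + a*y + ty.
    A[0::2, 0] = src[:, 0]
    A[0::2, 1] = -src[:, 1]
    A[0::2, 2] = 1
    A[1::2, 0] = src[:, 1]
    A[1::2, 1] = src[:, 0]
    A[1::2, 3] = 1
    a, b, tx, ty = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

# Sanity check: a pure translation of (10, 20) should be recovered exactly.
src = np.float64([[10, 10], [90, 10], [50, 90]])
M = estimate_similarity(src, src + np.array([10.0, 20.0]))
```

The resulting matrix can be applied with `cv2.warpAffine(image, M, (width, height))` to produce the aligned crop.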

@RachelWang122

Hi, have you successfully run this project? Could you give some guidance on the dataset? Thank you.


@tegusi
Copy link
Owner

tegusi commented Mar 21, 2021

Sorry that I didn't provide the preprocessing code in advance. The parsing result is a segmentation map; you only need to take the facial pixels of each component and aggregate them into a single parsing map.
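For anyone following along, the aggregation step described above can be sketched as below. Helen stores one grayscale PNG per facial component; the component ordering here is hypothetical, not the repository's confirmed label mapping:

```python
import numpy as np

def aggregate_parsing(component_masks):
    """Fuse a list of per-component masks (H x W arrays, nonzero where the
    component is present) into a single parsing map whose pixel values are
    the component indices. Later components overwrite earlier ones, so the
    list order decides which component wins on overlapping pixels."""
    h, w = component_masks[0].shape
    parsing = np.zeros((h, w), dtype=np.uint8)
    for idx, mask in enumerate(component_masks):
        parsing[mask > 0] = idx
    return parsing

# Toy example with two 2x2 masks (index 0 acting as background).
m0 = np.array([[1, 1], [0, 0]])
m1 = np.array([[0, 1], [1, 0]])
fused = aggregate_parsing([m0, m1])
```

In practice each mask would be loaded from the corresponding label PNG (e.g. thresholded at 128 if the masks are soft), and the fused map saved as a single-channel PNG under labels/.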

@RachelWang122


Thanks for your reply. Is this the content of the labels folder? Can we get it from the data download link you provided?

@tegusi
Owner

tegusi commented Mar 22, 2021


You can prepare the label maps as described, based on the original Helen dataset.
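Converting the original Helen release into the README layout might look like the sketch below. The split-file parsing (stripping a leading "index ," from each line) and the one-name-per-line list format are assumptions, not the repository's confirmed preprocessing:

```python
import os

def build_dataset(helen_root, out_root):
    """Create the README's dataset/ layout and write the train/test lists,
    assuming Helen's exemplars.txt is the train split and testing.txt the
    test split."""
    # Create the target folder skeleton: images/, labels/, edges/.
    for sub in ('images', 'labels', 'edges'):
        os.makedirs(os.path.join(out_root, sub), exist_ok=True)
    for split_file, list_name in (('exemplars.txt', 'train_list.txt'),
                                  ('testing.txt', 'test_list.txt')):
        with open(os.path.join(helen_root, split_file)) as f:
            # Assumed line format "index , name"; keep only the name.
            names = [ln.strip().split(',')[-1].strip()
                     for ln in f if ln.strip()]
        with open(os.path.join(out_root, list_name), 'w') as f:
            f.write('\n'.join(names) + '\n')
```

The images and fused label maps would then be copied into images/ and labels/ under those names; edge maps still need to be generated separately, e.g. from the boundaries of the label maps.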

@dreamlychina


Hey man, did you manage to run it successfully?

@wxqlab

wxqlab commented Nov 26, 2022

Hi @rrryan2016, have you solved this problem? Could you please share some advice?

Thanks a lot!
