
Question about the command-line arguments #9

Closed
liuna0211 opened this issue Sep 26, 2024 · 7 comments
Labels
documentation Improvements or additions to documentation

Comments

@liuna0211

  1. When training the SRN with --images /path/to/images
    --edges_prefix /path/to/edge
    --output /path/to/output/dir, the code describes --images as the "path of input sketches". Should this point to the sketches generated during data preprocessing, or to the original images?
  2. When training the SRN I also need to run validation. I am using the CelebA-HQ dataset, with 1,000 images as the validation set and another 1,000 as the test set:
    --images_val: paths of the 1,000 original validation images
    --masks_val: do I need to download these masks myself?
    --sketches_prefix_val: paths of the sketches generated from the 1,000 validation images
    --edges_prefix_val: paths of the edge maps generated from the 1,000 validation images
  3. The inference part also has two mask paths, --masks /path/to/test/masks and --masks /path/to/masks. Are these the masks downloaded from the test protocol, or do I prepare them myself as in training?
    Sorry for all the questions, and thank you.
@AlonzoLeeeooo
Owner

AlonzoLeeeooo commented Sep 26, 2024

  1. --edges_prefix refers to the ground truth edges, i.e., the edge maps extracted by BDCN from the ground truth images. The sketches are generated inside the dataset class by deforming these edges. You can refer to this part of __getitem__ in SRN_src/dataset.py:

    def __getitem__(self, index):
        data = {}
        data['image'] = cv2.imread(self.image_flist[index])
        filename = osp.basename(self.image_flist[index])
        if filename.split('.')[1] == "JPEG" or filename.split('.')[1] == "jpg":
            filename = filename.split('.')[0] + '.png'
        prefix = self.image_flist[index].split(filename)[0]
        data['edge'] = cv2.imread(osp.join(self.configs.edges_prefix, filename))
        # data['edge'] = cv2.imread(prefix + filename + '_edge.png')
        # generate free-form mask
        data['mask'] = generate_stroke_mask(im_size=[self.configs.size, self.configs.size])
        # normalize
        # images in range [-1, 1]
        # masks in range [0, 1]
        # edges in range [0, 1]
        data['image'] = data['image'] / 255.
        data['edge'] = data['edge'] / 255.
        # resize
        data['image'] = cv2.resize(data['image'], (self.configs.size, self.configs.size), interpolation=cv2.INTER_NEAREST)
        data['edge'] = cv2.resize(data['edge'], (self.configs.size, self.configs.size), interpolation=cv2.INTER_NEAREST)
        # binarize
        thresh = random.uniform(0.65, 0.75)
        _, data['edge'] = cv2.threshold(data['edge'], thresh=thresh, maxval=1.0, type=cv2.THRESH_BINARY)
        # to tensor
        # [H, W, C] -> [C, H, W]
        data['image'] = torch.from_numpy(data['image'].astype(np.float32)).permute(2,0,1).contiguous()
        data['mask'] = torch.from_numpy(data['mask'].astype(np.float32)).permute(2,0,1).contiguous()
        data['edge'] = torch.from_numpy(data['edge'].astype(np.float32)).permute(2,0,1).contiguous()
        # generate deform sketches
        data['sketch'] = self.deform_func(data['edge'].unsqueeze(0), self.max_move).squeeze(0)
        # compress RGB channels to 1 ([C, H, W])
        data['sketch'] = torch.sum(data['sketch'] / 3, dim=0, keepdim=True)
        data['edge'] = torch.sum(data['edge'] / 3, dim=0, keepdim=True)
        # return data consisting of: image, mask, sketch, edge
        data['sketch'] = data['sketch'].detach()
        return data

    Here, line 73 of SRN_src/dataset.py performs the sketch deformation:
    data['sketch'] = self.deform_func(data['edge'].unsqueeze(0), self.max_move).squeeze(0)
    (an illustrative sketch of what such a deformation can look like is given after this list)

  2. For validation, you can prepare the validation set yourself. You can refer to my previous setup:

  • --images_val: 1,000 images sampled from CelebA-HQ that were not seen during training;
  • --masks_val: 1,000 masks randomly generated with DeepFill-v2's free-form mask algorithm;
  • --edges_prefix_val: the edge maps extracted by BDCN from the 1,000 validation images;
  • --sketches_prefix_val: generated by separately running scripts/make_deform_sketch.py to deform the 1,000 ground truth edge maps.
  3. The two --masks arguments you mentioned for inference should point to the same mask set: the first one is used by SRN to restore the sketch, and the second one is the mask fed to the inpainting network.
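
For intuition only, below is a minimal, self-contained sketch of what a grid-warp style deformation of an edge map could look like. This is an illustrative stand-in, not the repository's actual deform_func or scripts/make_deform_sketch.py; the function random_deform and the coarse-noise warp are assumptions.

    import torch
    import torch.nn.functional as F

    def random_deform(edge, max_move=10):
        # edge: [1, C, H, W] float tensor in [0, 1]; max_move: rough maximum displacement in pixels
        _, _, h, w = edge.shape
        # identity sampling grid in normalized [-1, 1] coordinates, shape [1, H, W, 2]
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing='ij')
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)
        # smooth random offsets: coarse noise upsampled to full resolution
        offsets = (torch.rand(1, 2, 8, 8) * 2 - 1) * (2.0 * max_move / max(h, w))
        offsets = F.interpolate(offsets, size=(h, w), mode='bilinear', align_corners=True)
        # warp the grid and resample the edge map
        return F.grid_sample(edge, grid + offsets.permute(0, 2, 3, 1), mode='bilinear', align_corners=True)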

I hope this answers your questions.

@AlonzoLeeeooo added the documentation (Improvements or additions to documentation) label Sep 26, 2024
@liuna0211
Author

Thank you for your reply. For the command-line argument parser.add_argument('--images', type=str, default='', help='path of input sketches'), is this the path to the ground truth images or to the generated sketches? Also, I am still a bit confused about the mask paths: are all three of them simply 1,000 mask images randomly generated with DeepFill-v2's free-form masks, i.e., plain masks rather than masks already applied to the images? And could I instead use mask images such as the NVIDIA Irregular Mask Dataset (Testing Set, mask_6) described at https://blog.csdn.net/sinat_28442665/article/details/110872730#_NVIDIA_Irregular_Mask_Dataset_Testing_Setmask_6 ?

@AlonzoLeeeooo
Owner

AlonzoLeeeooo commented Sep 26, 2024

  1. The --images argument should point to the local path of the ground truth images; the "path of input sketches" help string was a typo and has been fixed in this commit;
  2. During training we use masks randomly generated on the fly with DeepFill-v2's free-form algorithm, as you can see from this line:
    data['mask'] = generate_stroke_mask(im_size=[self.configs.size, self.configs.size])

    In principle, using the NVIDIA mask set is perfectly fine: just point the --masks argument to the local path of the mask set. Code-wise, you only need to follow how the ValDataset class handles it and change the line above to read each mask from disk (a minimal sketch of such a change is given below).
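
For reference, here is a minimal sketch of that change, assuming the masks are stored as image files in a local directory. The helper load_fixed_mask and its filename handling are hypothetical, not the repository's actual ValDataset code.

    import os
    import os.path as osp
    import cv2
    import numpy as np

    def load_fixed_mask(mask_dir, index, size):
        # pick a mask deterministically by index so every validation run sees the same masks
        names = sorted(os.listdir(mask_dir))
        mask = cv2.imread(osp.join(mask_dir, names[index % len(names)]), cv2.IMREAD_GRAYSCALE)
        mask = cv2.resize(mask, (size, size), interpolation=cv2.INTER_NEAREST)
        # binarize and add a channel dimension -> [H, W, 1] with values in {0, 1}
        return (mask[..., None] > 127).astype(np.float32)

    # inside __getitem__, instead of generate_stroke_mask(...):
    # data['mask'] = load_fixed_mask(self.configs.masks, index, self.configs.size)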

@liuna0211
Author

If I want to use your approach without modifying the training code, how should I save the masks? Should I write a script that randomly generates 1,000 masks and then point the command-line argument to the local mask path, or, with this approach, do I not need that argument at all?

@AlonzoLeeeooo
Owner

If I want to use your approach without modifying the training code, how should I save the masks? Should I write a script that randomly generates 1,000 masks and then point the command-line argument to the local mask path, or, with this approach, do I not need that argument at all?

If you want to use free-form masks for validation, pull generate_stroke_mask() out into a standalone script, generate a fixed set of 1,000 masks, and use that same set in all subsequent validation runs. You can copy generate_stroke_mask() directly from below:

def generate_stroke_mask(im_size, parts=4, maxVertex=25, maxLength=80, maxBrushWidth=40, maxAngle=360):
    mask = np.zeros((im_size[0], im_size[1], 1), dtype=np.float32)
    for i in range(parts):
        mask = mask + np_free_form_mask(maxVertex, maxLength, maxBrushWidth, maxAngle, im_size[0], im_size[1])
    mask = np.minimum(mask, 1.0)
    return mask

The script only involves some basic numpy and cv2 operations; you could even have Cursor write one for you.
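
For example, a minimal sketch of such a script could look like the following. The output directory and filename pattern are placeholders, and generate_stroke_mask (together with its helper np_free_form_mask) is assumed to be pasted in from the snippet above or imported from the repo's dataset code.

import os
import cv2
import numpy as np
# paste generate_stroke_mask() from above (and np_free_form_mask() from the repo) here,
# or import them from the repo's dataset module

def save_fixed_masks(output_dir, num_masks=1000, size=256):
    os.makedirs(output_dir, exist_ok=True)
    for i in range(num_masks):
        # float mask in [0, 1] with shape [H, W, 1]
        mask = generate_stroke_mask(im_size=[size, size])
        # save as an 8-bit grayscale PNG so the same fixed set can be reused across runs
        cv2.imwrite(os.path.join(output_dir, '%05d.png' % i), (mask[..., 0] * 255).astype(np.uint8))

if __name__ == '__main__':
    save_fixed_masks('/path/to/masks')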

@liuna0211
Author

OK, thank you very much.

@AlonzoLeeeooo
Owner

@liuna0211 Hi, if you have no further questions about our work for now, I will close this issue for the time being. If you have any other questions later, feel free to leave a comment under this issue or open a new one to reach us.
