About the test script #2
zhangqiudan:
Dear author,
Thank you very much for releasing your code publicly. I am confused about your test script.
1. About test.lst: does test.lst contain the paths of both the left image and the depth image, like this?
./nju2k/LR/000799_left.png ./depth/000799_left.jpg
2. About test.py: I don't see where depth information is read in this script. Also, does 'sal_lst' refer to the final predicted salient object maps, and what does 'crf_lst' mean?
Thank you so much.
JXingZhao:
Dear sir/madam,
1. test.lst only contains the left (RGB) image, like this: ./RGB/RGBD_data_100.jpg. However, setting test.lst to ./nju2k/LR/000799_left.png ./depth/000799_left.jpg also works.
2. sal_lst is the list of final predicted salient object maps, and crf_lst is the list of results after CRF refinement. The presented results do not use the CRF, so you can ignore it. The depth list is generated in the data layer.
The details are shown in /caffe/lib/ImageLabelDataTest.py; that file shows how we generate the test data.
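For reference, a minimal sketch (not the repository's code) of how a data layer could pair each RGB entry in test.lst with a depth map derived from the file name. The depth directory name, the .jpg extension, and the helper load_test_pair are illustrative assumptions; the actual pairing logic lives in /caffe/lib/ImageLabelDataTest.py.

```python
# Hypothetical sketch of pairing a one-column test.lst (RGB paths only)
# with depth maps derived from the file names. Directory layout and
# extension are assumptions, not the repository's convention.
import os

import cv2
import numpy as np


def load_test_pair(rgb_path, depth_dir="depth", depth_ext=".jpg"):
    """Load an RGB image and the depth map derived from its file name."""
    stem = os.path.splitext(os.path.basename(rgb_path))[0]
    depth_path = os.path.join(depth_dir, stem + depth_ext)

    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)          # H x W x 3, BGR
    depth = cv2.imread(depth_path, cv2.IMREAD_GRAYSCALE)  # H x W, one channel
    if rgb is None or depth is None:
        raise IOError("missing file: %s or %s" % (rgb_path, depth_path))
    return rgb.astype(np.float32), depth.astype(np.float32)


if __name__ == "__main__":
    with open("test.lst") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # A line may hold only the RGB path, or "rgb_path depth_path";
            # only the first column is required here.
            rgb_path = line.split()[0]
            rgb, depth = load_test_pair(rgb_path)
            print(rgb_path, rgb.shape, depth.shape)
```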
zhangqiudan:
Thank you very much for your reply.
zhangqiudan:
Dear author,
When I use the provided final.caffemodel to predict the saliency map, the output is a black image. The printed 'res' is all zeros.
JXingZhao:
I suggest debugging step by step: first check whether the image is actually being fed into the network, and then check whether that input image is correct.
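A minimal sketch of that kind of check, assuming a standard pycaffe setup. The deploy.prototxt/final.caffemodel file names and the input blob name "data" are assumptions and should be replaced with whatever the repository's test.py actually uses.

```python
# Hypothetical debugging sketch: verify the input blob before suspecting the
# weights. Blob and file names here are assumptions.
import caffe
import cv2
import numpy as np

net = caffe.Net("deploy.prototxt", "final.caffemodel", caffe.TEST)
net.forward()

data = net.blobs["data"].data[0]  # C x H x W, first image of the batch
print("input blob min/max/mean:", data.min(), data.max(), data.mean())

# Dump the input back to disk for a visual check. A mean-subtracted input will
# look dark or tinted, but the scene should still be recognisable; if it is
# empty or scrambled, the problem is in the data layer, not the model.
img = data.transpose(1, 2, 0)  # H x W x C
cv2.imwrite("debug_input.png", np.clip(img, 0, 255).astype(np.uint8))
```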
JXingZhao:
Another possible reason is that the results produced by the network are not scaled to 0-255.
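If the remark refers to the predicted saliency map, one common cause of an all-black output is writing a float map in [0, 1] straight to an 8-bit image, which truncates every pixel to 0. A minimal sketch of the scaling fix, with illustrative (hypothetical) names rather than the repository's:

```python
# Hypothetical sketch: rescale a float saliency map to 0-255 before saving.
import cv2
import numpy as np


def save_saliency(res, out_path):
    """Scale a float saliency map to 0-255 and save it as an 8-bit image."""
    res = np.squeeze(res).astype(np.float32)
    rng = res.max() - res.min()
    if rng > 0:                      # guard against a constant map
        res = (res - res.min()) / rng
    cv2.imwrite(out_path, (res * 255).astype(np.uint8))
```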
zhangqiudan:
Thank you very much for your reply; I will try to debug the program.