
Dealing with sun/shade in stereo image #326

Open · 1 of 2 tasks
ZongyangLi opened this issue Jun 1, 2017 · 12 comments
@ZongyangLi (Contributor) commented Jun 1, 2017

  • We are planning to deal with sun/shade in the full-field stitched RGB image by choosing the darker tile from a set of tiles, since we have overlapping images (a sketch of this idea follows the list).

  • If the above method does not work well, we may try a variant of high-dynamic-range rendering; this is not the best choice because some data would be missing.
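
A minimal sketch of the darker-tile idea for a pair of co-registered overlapping tiles; the function name and the mean-RGB brightness criterion are assumptions, not the extractor's actual code:

import numpy as np

def darker_pixel_merge(tile_a, tile_b):
    """Merge two overlapping RGB tiles (H x W x 3 arrays), keeping
    whichever tile is darker at each pixel so that sunlit pixels
    are replaced by their shaded counterparts."""
    # Per-pixel brightness: mean over the three color channels.
    bright_a = tile_a.astype(np.float32).mean(axis=2)
    bright_b = tile_b.astype(np.float32).mean(axis=2)
    # Where tile A is darker (or equal), keep A; otherwise keep B.
    keep_a = (bright_a <= bright_b)[:, :, np.newaxis]
    return np.where(keep_a, tile_a, tile_b)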

@ZongyangLi (Contributor, Author)

Here is an initial result for this issue. The test date is 2017-04-27, the same as in #306.

[screenshot: screen shot 2017-06-08 at 9 42 13 am]

This comes from a method that can be described as "darker pixel choice from different sets of tiles".

@dlebauer Does this work for the purposes of the sun/shade problem?

More visualization for 2017-05-27:

[screenshot: screen shot 2017-06-08 at 9 41 59 am]

@max-zilla (Contributor)
@ZongyangLi what an improvement!

@dlebauer (Member) commented Jun 8, 2017 via email

@ZongyangLi (Contributor, Author)

More tests should be done to see whether it is robust enough.

If there are no shaded pixels, it will simply choose the darker pixel within the tile set.

@ghost added this to the July 2017 milestone on Jun 22, 2017
@ghost added the "help wanted" label on Jun 22, 2017
@max-zilla (Contributor)

@ZongyangLi can you share the sun/shade code?

@ZongyangLi (Contributor, Author)

@max-zilla (Contributor) commented Jun 29, 2017

@ZongyangLi I think you also need to upload an imported file:

import geotiff_to_tiles
geotiff_to_tiles.createVrt(out_dir, tif_file_list)

...don't think this file was included unless it was renamed.

Do we want to do this as two separate extractors? @ZongyangLi suggests we may not always want to use darker pixels, so we may want to retain the original version alongside this one.

I wonder if we can do this using parameters.
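
For instance, a single extractor could take a merge-strategy parameter instead of shipping two copies. A hedged sketch of what that could look like; the flag name and choices are illustrative assumptions, not existing options:

import argparse

# Hypothetical parameterization of the stitching extractor.
parser = argparse.ArgumentParser(description="full-field stitching extractor")
parser.add_argument("--merge", choices=["original", "darker-pixel"],
                    default="original",
                    help="how to resolve pixels where tiles overlap")
args = parser.parse_args()
print("merge strategy:", args.merge)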

@dlebauer reopened this on Jul 6, 2017
@terraref deleted a comment from max-zilla on Jul 6, 2017
@dlebauer (Member) commented Jul 6, 2017

@pless can you please review / comment on / approve or suggest further changes to the updated algorithm?

@pless commented Jul 11, 2017

I think the current algorithm is appropriate for making a stitched whole field image for quality assurance and some simple processing. I'm not sure that any simple stitching process (or any more complicated process that would be called stitching) is good enough to be used for more complicated analysis.

@dlebauer (Member) commented Jul 12, 2017 via email

@pless commented Jul 12, 2017

I think the current stitched image is suitable for most data quality and coverage assessments and some data analysis.

For data quality, the stitched image, as is, is good for:

  • Understanding what part of the field was imaged,
  • Understanding whether the imaging script is correctly capturing the plots (given that we are not imaging the whole field), or, if there is a problem, whether it is missing some of the plots.
  • Understanding if the image capture has good lighting, no motion blur, etc.

For data analysis, the stitched image, as is, is good for:

  • an extractor for Canopy Coverage Percentage, and
  • an extractor for some phenotype analysis (emergence date, leaf color, flower/panicle detection)

For some data analysis, the stitched image will cause some problems. To ground this discussion, here is an example of a stitched image:

[stitched image example]

  • Any stitched image introduces new artifacts into the image data; it always introduces edges at the boundary where one image turns into another --- either an explicit black-line boundary, or an implicit boundary that exists because you can't exactly stitch images of a complicated 3D world (without making a full 3D model). Even if you could stitch them (say, the scene is just flat dirt), the same bit of the world is usually a different brightness when viewed from different directions.
  • The particular stitching strategy of "choose the darker pixel" is a nice way to automatically choose a good image when there are bright-sunshine effects. It may create additional artifacts, however, because the algorithm is allowed to mix pixels from both images in potentially complicated patterns, and these artifacts may be hard to account for.

The alternative is to always do all initial feature selection or image analysis on the original images, and then to create derived or extracted features from those images and save them per plot.

@dlebauer (Member)

@pless thank you for the description.

Regarding the alternative you suggest ('always do all initial feature selection or image analysis on the original images'), what do we need to do to support algorithms that use the original images? Should we support both the current approach (stitch-->clip-->analyze) and the alternative (clip-->analyze-->combine)?

For the clip-->analyze-->combine approach, should we organize by plot, e.g. in a directory named sensor/date/plot? Is there a general way to combine results (e.g. weighting by area covered and accounting for resampling of the same area due to image overlap)?
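
One hedged sketch of the area-weighted combination step for the clip-->analyze-->combine path; the function and its inputs are illustrative assumptions, not an existing TERRA-REF API:

def combine_plot_feature(values, areas):
    """Area-weighted mean of a plot-level feature extracted from
    several original images that each cover part of the plot.

    values: per-image feature values (e.g., canopy-cover fraction)
    areas:  plot area covered by each image, in consistent units.
            Dividing by the summed area down-weights regions that
            were resampled by more than one overlapping image.
    """
    total_area = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total_area

# e.g. combine_plot_feature([0.42, 0.47], [3.0, 1.5]) -> ~0.437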
