# Data Acquisition

## Introduction

This document guides you through extracting the average NDVI for a given set of farm field boundary polygons. The process relies on Google Earth Engine's Python API for much of its back-end. The goal is to obtain properly formatted NDVI time-series data in multiple .csv files to use as input for the F.A.M. algorithm. Here is a link to pre-extracted input data for California, Washington, and Nevada.
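
As a rough illustration of the Earth Engine side of this extraction, the sketch below computes a per-polygon mean NDVI time series and exports it to a .csv on Google Drive. The asset ID, image collection, band names, and date range are assumptions chosen for illustration only; follow the linked extraction document for the exact inputs and steps F.A.M. expects.

```python
# Illustrative sketch only -- asset ID, collection, bands, and dates are
# assumptions; see the linked extraction document for the real workflow.
import ee

ee.Initialize()

# Hypothetical asset containing the farm field boundary polygons.
fields = ee.FeatureCollection('users/your_account/field_boundaries')

def add_ndvi(image):
    """Append an NDVI band computed from the NIR and red bands."""
    ndvi = image.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI')
    return image.addBands(ndvi)

collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterDate('2020-01-01', '2020-12-31')
              .filterBounds(fields)
              .map(add_ndvi))

def mean_ndvi_per_field(image):
    """Reduce one image to the mean NDVI inside each field polygon."""
    means = image.select('NDVI').reduceRegions(
        collection=fields,
        reducer=ee.Reducer.mean(),
        scale=30)
    return means.map(
        lambda f: f.set('date', image.date().format('YYYY-MM-dd')))

# One row per field per image date, flattened into a single table.
results = collection.map(mean_ndvi_per_field).flatten()

# Export the per-field, per-date NDVI means as a .csv to Google Drive.
task = ee.batch.Export.table.toDrive(
    collection=results,
    description='ndvi_timeseries_2020',
    fileFormat='CSV')
task.start()
```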

You only need to obtain the raw .csv outputs after completing step 5 in the document linked above. F.A.M. contains routines to format and merge the extracted files in accordance with the algorithm's requirements. Once the extractions are complete, you will have a series of raw .csv output files.
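
To give a sense of what that formatting step involves (this is purely a conceptual illustration, not F.A.M.'s actual routine), merging the per-year outputs amounts to concatenating the raw .csv files and pivoting them into one NDVI time series per field. The column names `field_id`, `date`, and `mean` below are assumptions about the extraction output:

```python
# Conceptual illustration only: combine raw extraction .csv files from one
# state's input/<year> folders into a single field-by-date NDVI table.
# Column names are assumed; F.A.M.'s own routines handle this in practice.
from pathlib import Path
import pandas as pd

input_dir = Path('states/California/input')

frames = [pd.read_csv(p) for p in sorted(input_dir.glob('20*/*.csv'))]
raw = pd.concat(frames, ignore_index=True)

# One row per field, one column per observation date, values are mean NDVI.
ndvi = raw.pivot_table(index='field_id', columns='date', values='mean')
print(ndvi.head())
```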

## Directory Setup

You will first need to set up a directory structure in the following format. Much of it is created when you clone the repository; however, the input folders need to be configured for your specific application, since the years you intend to analyze may differ. A minimal sketch for generating the year subdirectories follows the tree below.

    .
    ├── docs                    # documentation
    ├── maps                    # map generation tool
    └── states
        ├── California
        │   ├── input
        │   │   ├── 2008
        │   │   ├── ...         # a directory for each year of data
        │   │   ├── 2020
        │   │   └── crop_data   # perennial data to be stored here
        │   ├── output
        │   └── cache           # will be created automatically after initial run
        │
        ├── Nevada
        │   ├── input
        │   │   ├── 2008
        │   │   ├── ...
        │   │   └── 2020
        │   ├── output
        │   └── cache
        │
        └── Washington
            ├── input
            │   ├── 2008
            │   ├── ...
            │   ├── 2020
            │   └── crop_data
            ├── output
            └── cache
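
Below is a minimal sketch, assuming the states and year range shown above, for generating these folders with Python's standard library; adjust `STATES` and `YEARS` to match the data you actually intend to analyze.

```python
# A minimal sketch for creating the per-year input folders shown above.
# The states and year range (2008-2020) are taken from the example tree;
# change them to fit your own application.
from pathlib import Path

STATES = ['California', 'Nevada', 'Washington']
YEARS = range(2008, 2021)  # 2008 through 2020

for state in STATES:
    base = Path('states') / state
    for year in YEARS:
        (base / 'input' / str(year)).mkdir(parents=True, exist_ok=True)
    if state in ('California', 'Washington'):
        # Perennial crop data is only provided for these two states.
        (base / 'input' / 'crop_data').mkdir(parents=True, exist_ok=True)
    (base / 'output').mkdir(parents=True, exist_ok=True)
    # The cache folder is created automatically by F.A.M. after the initial run.
```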

Place the output files of the extractions in the input directories, inside the folder labeled with the corresponding year. You will note that there is a folder named crop_data; it contains a .csv file listing known perennial sites within the state. This information is provided for both Washington and California; the latter can be found here.

Now that everything is in place, we can proceed to running F.A.M.