forecastVeg

Forecasting Vegetation Health at High Spatial Resolution

Drought threatens food and water security around the world, and this threat is likely to become more severe under climate change. High resolution predictive information can help farmers, water managers, and others to manage the effects of drought. We have created a tool to produce short-term forecasts of vegetation health at high spatial resolution, using open source software and NASA satellite data that are global in coverage. The tool automates downloading and processing Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, and training gradient-boosted machine models on hundreds of millions of observations to predict future values of the Enhanced Vegetation Index. We compared the predictive power of different sets of variables (raw spectral MODIS data and Level-3 MODIS products) in two regions with distinct agro-ecological systems, climates, and cloud coverage: Sri Lanka and California. Our tool provides considerably greater predictive power on held-out datasets than simpler baseline models.

This website hosts the supplementary material for this project by John J. Nay, Emily Burchfield, and Jonathan Gilligan. It lists the external software requirements and the exact terminal commands for completing our process.

Downloading and processing the data requires a computer with a large amount of RAM (> 100 GB) because the data must be held in memory for manipulation. The modeling and hyper-parameter search can be run on weaker machines, but training would take months on a laptop. To complete model training and the hyper-parameter search in a few days, train the models on a computer with at least 32 available threads and at least 100 GB of RAM.
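
On Linux, you can quickly check whether a machine meets these requirements with standard utilities, for example:

nproc   # number of available threads
free -g # total memory in gigabytes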

If you use these scripts, cite this paper:

Nay, John J., Emily Burchfield, and Jonathan Gilligan (2016). "Forecasting Vegetation Health at High Spatial Resolution." arXiv preprint arXiv:1602.06335.

The paper can be downloaded as a PDF here: http://arxiv.org/abs/1602.06335.

United States National Science Foundation grant EAR-1204685 funded this research.

Poster

Overview:

The figure below illustrates our process. We downloaded and processed eleven years of remotely sensed imagery. We combined this data with ancillary datasets and reshaped it into a single matrix where each row corresponds to a pixel at one time and each column is a measured variable. We divided the observations into Training Data 1 and Testing Data 1 by sampling from large spatial grid indices without replacement. We then divided Training Data 1 into Training Data 2 and Testing Data 2 with the same spatial sampling process, and trained multiple models on Training Data 2, varying the hyper-parameters for each model estimation. We used Testing Data 2 to assess the performance of each model’s predictions. We repeated this loop of learning on Training Data 2 and testing on Testing Data 2 for each of the four different data types, and chose the combination of data type and hyper-parameter setting that achieved the highest performance in predicting Testing Data 2. Finally, we validated the best-performing model from the previous step by testing its performance on the held-out data in Testing Data 1. We repeated this entire process separately for Sri Lanka and California.

Methods Diagram
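
The spatial sampling that creates these splits can be illustrated with a minimal sketch (this is not the project's code; the data frame df, the column name grid_cell, and the 80/20 proportion are all assumptions):

# Sketch: sample large spatial grid indices without replacement, so that
# all pixels from a sampled grid cell land in the same split.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cells = df["grid_cell"].unique()  # df: one row per pixel at one time
train_cells = rng.choice(cells, size=int(0.8 * len(cells)), replace=False)

training_1 = df[df["grid_cell"].isin(train_cells)]   # Training Data 1
testing_1 = df[~df["grid_cell"].isin(train_cells)]   # Testing Data 1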

The next figure displays predictions of agricultural land in Testing Data 1, the hold-out data, in California.

California Predictions Map

Requirements

[sudo] pip install requests # for h2o
[sudo] pip install tabulate # for h2o
[sudo] pip install numpy # for reshaping and saving data
[sudo] pip install pandas # for reshaping and saving data
[sudo] pip install hyperopt # for estimating hyper-parameters of h2o models
[sudo] pip install annoy # for baseline model
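
The h2o Python package itself must also be installed; recent releases are on PyPI, so the following typically works (check the H2O documentation for the exact release used in this project):

[sudo] pip install h2o # h2o machine learning library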

The optional visualizations of validation performance require R and the R packages dplyr, ggplot2, and ggExtra.
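
If these packages are missing, they can be installed, for example, with:

Rscript -e 'install.packages(c("dplyr", "ggplot2", "ggExtra"), repos = "https://cran.r-project.org")'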

Data construction:

python -u 0_matrix_construction.py spectral directory username password tiles today enddate referenceImage > 0_construct.log &
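
For example, a purely hypothetical invocation (every argument value below is a placeholder, including the tile, dates, and file paths):

python -u 0_matrix_construction.py spectral /data/modis MY_USER MY_PASS h26v08 2015-06-01 2004-01-01 reference.tif > 0_construct.log &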

Pre-processing (spectral and non-spectral use different scripts):

For non-spectral:

python -u 1_pre_process.py load_data_fp save_data_fp load_extra_file intervals > 1_process.log &

For spectral:

python -u 1_pre_processS.py load_data_fp save_data_fp old_data_fp intervals load_extra_file > 1_processS.log &

For h2o:

For non-spectral:

python -u 2_h2o_process.py load_data_fp save_data_fp > 2a_h2o.log &

python -u 2_h2o_process_2.py load_data_fp save_training_data_fp save_holdout_data_fp save_training_ind_fp > 2b_h2o.log &

save_training_ind_fp is an optional argument to the 2_h2o_process_2.py script. If it is provided, the script creates a column indicating whether each row belongs to the training or testing data. Subsequent scripts use this column to divide the data into training and testing sets (not the hold-out data; that split was made previously). When we run the spectral version we usually do not specify this argument because we do not want to overwrite the file created for the Level-3 data: this lets us use the same training/test split and compare performance across predictor variable data types.
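
As a minimal sketch (not the project's code) of how such an indicator column drives the later split, assuming the indicator is a 0/1 column named train_ind saved as a CSV:

import pandas as pd

# Load the processed data and the previously saved training indicator
data = pd.read_csv("save_training_data.csv")              # assumed file name
ind = pd.read_csv("save_training_ind.csv")["train_ind"]  # assumed column name

training_2 = data[ind == 1]  # Training Data 2
testing_2 = data[ind == 0]   # Testing Data 2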

For spectral:

python -u 2_h2o_processS.py load_data_fp save_data_fp > 2a_h2oS.log &

python -u 2_h2o_process_2.py load_data_fp save_training_data_fp save_holdout_data_fp > 2b_h2oS.log &

For baseline:

python -u 2_baseline_process.py load_data_fp save_data_fp > 2_baseline.log &

Modeling (spectral and non-spectral use the same scripts, with different arguments for the predictor variables):

Modeling in h2o with GBM:

For non-spectral:

python -u 3_h2o_gbm.py load_data_fp load_train_ind_fp saving_fp GWP_lag LST_lag NDVI_lag FPAR_lag LAI_lag GP_lag PSN_lag nino34_lag time_period EVI_lag landuse > 3_gbm.log &

For spectral:

python -u 3_h2o_gbm.py load_data_fp load_train_ind_fp saving_fp B1_lag B2_lag B3_lag B4_lag B5_lag B6_lag B7_lag GWP_lag nino34_lag time_period EVI_lag landuse > 3_gbm.log &
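
For readers unfamiliar with these libraries, here is a minimal sketch (not the project's code) of how GBM training and hyper-parameter search fit together in h2o and hyperopt; the file paths, the response column name EVI, the predictor subset, and the parameter ranges are all assumptions:

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from hyperopt import fmin, tpe, hp

h2o.init()
train = h2o.import_file("training_data_2.csv")  # Training Data 2 (assumed path)
valid = h2o.import_file("testing_data_2.csv")   # Testing Data 2 (assumed path)
predictors = ["EVI_lag", "GWP_lag", "nino34_lag", "time_period", "landuse"]
response = "EVI"  # assumed response column name

def objective(params):
    # Train one GBM and return its validation MSE for hyperopt to minimize
    model = H2OGradientBoostingEstimator(ntrees=100, **params)
    model.train(x=predictors, y=response, training_frame=train,
                validation_frame=valid)
    return model.mse(valid=True)

space = {"max_depth": hp.choice("max_depth", [3, 5, 8]),
         "learn_rate": hp.uniform("learn_rate", 0.01, 0.2)}
best = fmin(objective, space, algo=tpe.suggest, max_evals=20)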

Modeling in h2o with deep learning (both model-imputed and mean-imputed):

For non-spectral:

python -u 3_h2o_deeplearning_imputation.py load_data_fp saving_meanImputed_fp saving_modelImputed_fp saving_means_fp saving_models_fp GWP_lag LST_lag NDVI_lag FPAR_lag LAI_lag GP_lag PSN_lag nino34_lag time_period EVI_lag landuse > 3_dl_imp.log &

python -u 3_h2o_deeplearning.py load_data_fp load_train_ind_fp saving_fp GWP_lag LST_lag NDVI_lag FPAR_lag LAI_lag GP_lag PSN_lag nino34_lag time_period EVI_lag landuse > 3_dl_mean.log &

For spectral:

python -u 3_h2o_deeplearning_imputation.py load_data_fp saving_meanImputed_fp saving_modelImputed_fp saving_means_fp saving_models_fp B1_lag B2_lag B3_lag B4_lag B5_lag B6_lag B7_lag GWP_lag nino34_lag time_period EVI_lag landuse > 3_dl_imp.log &

python -u 3_h2o_deeplearning.py load_data_fp load_train_ind_fp saving_fp B1_lag B2_lag B3_lag B4_lag B5_lag B6_lag B7_lag GWP_lag nino34_lag time_period EVI_lag landuse > 3_dl_meanS.log &
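
Continuing the sketch above (again, not the project's code), h2o supports both mean imputation on a frame and deep learning estimators; the column name, network size, and epoch count are assumptions:

from h2o.estimators.deeplearning import H2ODeepLearningEstimator

# Mean-impute missing values in a predictor column, in place on the frame
train.impute("GWP_lag", method="mean")

# Fit a small feed-forward network on the same predictors and response
dl = H2ODeepLearningEstimator(hidden=[200, 200], epochs=10)
dl.train(x=predictors, y=response, training_frame=train, validation_frame=valid)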

Predicting holdout:

This data is reserved for final testing of the best model.

Run only one of the spectral or the Level-3 (non-spectral) data types on the hold-out data, not both, and run only one of the deep learning or the GBM models, not both. In each case, choose the one that performed best on the test data in the previous scripts.

With Baseline:

python -u 4_baseline.py load_data_fp saving_model saving_fp saving_predictions_fp Trees Neighbs K > 4_bline_holdout.log &
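
The baseline model is built on the annoy approximate nearest-neighbour library (installed above). Here is a minimal sketch (not the project's code) of how arguments like Trees and K plausibly map onto annoy's API, with the training arrays train_X and train_y assumed:

import numpy as np
from annoy import AnnoyIndex

index = AnnoyIndex(train_X.shape[1], "euclidean")  # one dimension per predictor
for i, row in enumerate(train_X):
    index.add_item(i, row)
index.build(50)  # number of trees ("Trees"): more trees, higher accuracy

def baseline_predict(x, k=10):  # neighbours averaged per query ("K")
    ids = index.get_nns_by_vector(x, k)
    return np.mean(train_y[ids])  # average response of the nearest neighbours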

With Models:

python -u 4_holdout_models.py load_data_fp train_data_fp training_res_fp saving_fp saving_predictions_fp saving_varimp_fp predictors > 4_model_holdout.log &

For non-spectral and GBM:

python -u 4_holdout_models.py load_data_fp train_data_fp training_res_fp saving_fp saving_predictions_fp saving_varimp_fp GWP_lag LST_lag NDVI_lag FPAR_lag LAI_lag GP_lag PSN_lag nino34_lag time_period EVI_lag landuse > 4_model_holdout.log &

For spectral and GBM:

python -u 4_holdout_models.py load_data_fp train_data_fp training_res_fp saving_fp saving_predictions_fp saving_varimp_fp B1_lag B2_lag B3_lag B4_lag B5_lag B6_lag B7_lag GWP_lag nino34_lag time_period EVI_lag landuse > 4_model_holdoutS.log &
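
Scoring a saved h2o model on the hold-out frame looks roughly like this minimal sketch (not the project's code; the file and model paths are assumptions):

import h2o

holdout = h2o.import_file("holdout_data.csv")  # Testing Data 1 (assumed path)
model = h2o.load_model("/models/best_gbm")     # assumed model path
predictions = model.predict(holdout)
print(model.model_performance(test_data=holdout))  # hold-out error metrics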

Create plots of validation performance:

For model selection, the plot comparing the performance of the different data types and locations:

Rscript paper_plots_modelSelection.R &

For final model validation on hold-out data, the many plots illustrating performance over space and time:

Rscript paper_plots.R &