Storage: The size of this resource is 2.0 MB
Created: Sep 06, 2017 at 8:29 p.m.
Last updated: Dec 04, 2018 at 9:09 p.m.
This collection of Jupyter notebooks demonstrates the workflow for obtaining and processing gridded meteorology data files with the Observatory for Gridded Hydrometeorology (OGH) Python library.
Using the Sauk-Suiattle, Elwha, and Upper Rio Salado watersheds as the study sites of interest, each Jupyter notebook guides the user through assembling the datasets and analyses from each of seven gridded data products.
In Usecase 1, users inspect their study site of interest through summary spatial visualizations. The treatgeoself() function yields one mapping file per study site, reducing the gridded cell centroids to the subset that intersects the study area (i.e., falls within the watershed). Within treatgeoself(), the user may set the amount of buffer space to include around the study site (the default is a 0.06-degree buffer region).
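Conceptually, this mapping step filters the full grid of cell centroids down to those falling within a buffered study area. The sketch below illustrates that idea in plain Python; it is not the library's actual implementation, and the bounding box, grid spacing, and coordinates are hypothetical stand-ins (a real watershed boundary is a polygon, simplified here to a bounding box):

```python
def subset_centroids(centroids, bbox, buffer_deg=0.06):
    """Keep (lon, lat) centroids inside bbox expanded by buffer_deg degrees.

    bbox is (min_lon, min_lat, max_lon, max_lat); buffer_deg mirrors the
    0.06-degree default buffer described above.
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    return [
        (lon, lat)
        for lon, lat in centroids
        if (min_lon - buffer_deg <= lon <= max_lon + buffer_deg
            and min_lat - buffer_deg <= lat <= max_lat + buffer_deg)
    ]

# Hypothetical 1/16-degree grid centroids near the Sauk-Suiattle area
grid = [(lon / 16, lat / 16)
        for lon in range(-1948, -1936)
        for lat in range(768, 780)]
# Illustrative bounding box, not the actual watershed geometry
sauk_bbox = (-121.6, 48.1, -121.2, 48.5)
mapping = subset_centroids(grid, sauk_bbox)
```

The subset that survives the filter plays the role of the mapping file: only those cells need to be downloaded and processed in the later usecases.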
In Usecase 2, each mapping file guides data retrieval from one of the gridded data products. A series of get functions downloads the files into designated subfolders. The resulting file paths are cataloged in the mapping file, which the mappingfileSummary() function can then summarize for data availability along the elevation gradient. The downloaded files are compressed into tar.gz archives and migrated, along with their respective mapping files, as content files within a new HydroShare resource for ease of collaborative use.
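The retrieval step pairs each mapped centroid with a remote file and a local destination path, and that pairing is what gets cataloged. A hedged sketch of the pattern follows; the URL template, file-naming scheme, and folder names are placeholders, not the library's actual endpoints (many gridded products do publish one time-series file per cell named by its lat/lon centroid, which is the convention mimicked here):

```python
import os

def catalog_paths(centroids, product, base_url, dest_dir):
    """Build a {(lon, lat): (url, local_path)} catalog for grid-cell files.

    base_url and the data_<lat>_<lon> naming scheme are illustrative
    placeholders for a product-specific download endpoint.
    """
    catalog = {}
    for lon, lat in centroids:
        fname = f"data_{lat:.5f}_{lon:.5f}"
        catalog[(lon, lat)] = (
            f"{base_url}/{fname}",                    # remote file to fetch
            os.path.join(dest_dir, product, fname),   # local destination
        )
    return catalog

cat = catalog_paths([(-121.5625, 48.3125)], "livneh2013",
                    "https://example.org/gridded", "downloads")
```

In the actual workflow, a download loop would fetch each URL into its local path, and the local paths would be written back into the mapping file so later steps can locate the data.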
In Usecase 3, the files downloaded in Usecase 2 are processed into spatial and temporal summary statistics. The gridclim_dict() function compiles and computes daily, monthly, annual, and monthly-yearly average values for each variable described in the gridded data product metadata (e.g., the ogh_meta class dictionary). Monthly averages are visualized as time-series plots, and spatial averages as spatial heat maps. Finally, the dictionary of dataframes produced by these spatial-temporal analyses is saved to a JSON file and migrated out as a content file within a new HydroShare resource.
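The kinds of aggregations gridclim_dict() performs can be illustrated with a small stand-alone sketch: reduce a daily series into monthly-yearly, monthly, and annual means, then serialize the result to JSON. This is a conceptual example with hypothetical temperature data, not the library's code (which operates on dataframes of many cells and variables):

```python
import json
from collections import defaultdict
from datetime import date, timedelta
from statistics import mean

def climate_summaries(daily):
    """Aggregate a {date: value} daily series into monthly-yearly,
    monthly, and annual mean values, echoing the summary levels
    described for gridclim_dict()."""
    by_my, by_m, by_y = defaultdict(list), defaultdict(list), defaultdict(list)
    for d, v in daily.items():
        by_my[(d.year, d.month)].append(v)
        by_m[d.month].append(v)
        by_y[d.year].append(v)
    return {
        "monthly_yearly": {f"{y}-{m:02d}": mean(vs) for (y, m), vs in by_my.items()},
        "monthly": {m: mean(vs) for m, vs in by_m.items()},
        "annual": {y: mean(vs) for y, vs in by_y.items()},
    }

# Hypothetical daily temperatures spanning two months
start = date(2017, 1, 1)
series = {start + timedelta(days=i): 5.0 + (i % 10) for i in range(59)}
summary = climate_summaries(series)
payload = json.dumps(summary)  # ready to save as a .json content file
```

Serializing the summary dictionary to JSON, as in the last line, mirrors how the workflow exports its dictionary of dataframes for upload to a new HydroShare resource.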
This resource was created using funding from the following sources:
Agency Name: Bureau of Indian Affairs
People or Organizations that contributed technically, materially, financially, or provided general support for the creation of the resource's content but are not considered authors.
Sauk-Suiattle Indian Tribe
Skagit Climate Consortium
How to Cite
This resource is shared under the Creative Commons Attribution CC BY license: http://creativecommons.org/licenses/by/4.0/