
Amber Spackman Jones

Utah State University | Research Engineer

Subject Areas: Hydrology, Water Quality Monitoring, Water Quality Modeling, Hydroinformatics, Cyberinfrastructure, Data Management



Resources
Generic
iUTAH Research Data Policy
Created: Aug. 25, 2016, 9:30 p.m.
Authors: Jeffery S. Horsburgh · Amber Jones

ABSTRACT:

iUTAH (innovative Urban Transitions and Aridregion Hydrosustainability) is a collaborative research and training program in Utah. As part of project requirements, iUTAH developed a data policy that seeks to maximize the impact and broad use of datasets collected within iUTAH facilities and by iUTAH research teams. This policy document focuses on assisting iUTAH investigators in creating and sharing high-quality data. The policy defines the data types generated as part of iUTAH and clarifies timelines for associated data publication. It specifies the requirements for submittal of a data collection plan, the creation of metadata, and the publication of datasets. It clarifies requirements for cases involving human subjects as well as raw data and analytical products. The policy includes guidelines for data and metadata standards, storage and archival, curation, and data use and citation. Agreements for data publishers and data users are also included as appendices.

Generic
Test Resource
Created: Jan. 19, 2017, 1:28 p.m.
Authors: Amber Jones · Jeffery S. Horsburgh · Zach Aanderud

ABSTRACT:

This is a test resource created to demonstrate HydroShare functionality.

Generic
iUTAH Research Data Policy
Created: Jan. 26, 2017, midnight
Authors: Jeffery S. Horsburgh · Amber Jones

ABSTRACT:

iUTAH (innovative Urban Transitions and Aridregion Hydrosustainability) is a collaborative research and training program in Utah. As part of project requirements, iUTAH developed a data policy that seeks to maximize the impact and broad use of datasets collected within iUTAH facilities and by iUTAH research teams. This policy document focuses on assisting iUTAH investigators in creating and sharing high-quality data. The policy defines the data types generated as part of iUTAH and clarifies timelines for associated data publication. It specifies the requirements for submittal of a data collection plan, the creation of metadata, and the publication of datasets. It clarifies requirements for cases involving human subjects as well as raw data and analytical products. The policy includes guidelines for data and metadata standards, storage and archival, curation, and data use and citation. Agreements for data publishers and data users are also included as appendices.

Composite Resource
Quality Control Experiment
Created: Feb. 22, 2018, 11:37 p.m.
Authors: Amber Jones · Dave Eiriksson · Jeffery S. Horsburgh

ABSTRACT:

These are data resulting from and related to an effort to examine subjectivity in the process of performing quality control on water quality data measured by in situ sensors. Participants (n=27) included novices unfamiliar with and technicians experienced in quality control. Each participant performed quality control post-processing on the same datasets: one calendar year (2014) of water temperature, pH, and specific conductance. Participants were provided with a consistent set of guidelines, field notes, and tools, and used ODMTools (https://github.com/ODM2/ODMToolsPython/) to perform the quality control exercise. This resource consists of:
1. Processed Results: Each file in this folder corresponds to one of the variables for which quality control was performed. Each row corresponds to a single time stamp, and each column contains the processed results generated by one participant. The first column contains the original, raw data. A sketch for reading these files appears after this list.
2. Survey Data: The files in this folder relate to an exit survey administered to participants upon completion of the exercise. They include the survey questions (PDF), the full Qualtrics output (QualityControlSurvey.pdf), data and metadata files organized and encoded for display in the Survey Data Viewer (http://data.iutahepscor.org/surveys/survey/QCEXP) (QCExperimentSurveyDataFile.csv, QCExperimentSurveyMetadata.csv), and a file used to organize data for the plots in the associated paper.
3. Field Record: Participants were provided this document, which gives information about the field maintenance activities relevant to performing QC.
4. Scripts: Each file in this folder corresponds to a script automatically generated by ODMTools while performing quality control. The files are organized by user ID and by variable.
5. Code and Analysis: Scripts used to generate the figures in the associated paper. Note that novice users correspond to IDs 1-22 and experienced users correspond to IDs 25-38. This folder also includes subsets of the data organized in supporting files used to generate Figure 6 (ExpGapVals.xlsx) and Table 5 (NoDataCount.xlsx).
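
The following is a minimal sketch for loading one of the Processed Results files and comparing each participant's output against the raw record. The file name is a hypothetical placeholder; the column layout (raw data first, then one column per participant) follows the description in item 1 above.

```python
# Minimal sketch: compare participants' QC results against the raw record.
# 'ProcessedResults_WaterTemperature.csv' is a hypothetical placeholder name;
# layout assumed: rows indexed by time stamp, first column raw data,
# remaining columns one per participant.
import pandas as pd

df = pd.read_csv("ProcessedResults_WaterTemperature.csv",
                 index_col=0, parse_dates=True)

raw = df.iloc[:, 0]            # original, raw data
participants = df.iloc[:, 1:]  # processed results, one column per participant

# Fraction of time stamps where each participant's result differs from raw.
# Note: deleted points (NaN) count as differences here.
changed = participants.apply(lambda col: (col != raw).mean())
print(changed.sort_values(ascending=False))
```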

Composite Resource

ABSTRACT:

This resource contains a Jupyter Notebook that uses Python to access and visualize data for the USGS flow gage on the Colorado River at Lee’s Ferry, AZ (09380000). This site monitors water quantity and quality for water released from Glen Canyon Dam that then flows through the Grand Canyon. To call the WaterOneFlow web services that serve these data in Python, the suds-py3 package was used. Using this package, a “GetValuesObject” request, as defined by WaterOneFlow, was passed to the server using inputs for the web service URL, site code, variable code, and dates of interest. For this case, 15-minute discharge from August 1, 2018 to the current date was used. The web service returned an object from which the dates, the data values, and the site name were obtained. The Python libraries Pandas and Matplotlib were used to manipulate and view the results. The time series data were converted to lists and then to a Pandas series object. Using the “resample” function of Pandas, values for mean, minimum, and maximum were determined on a daily basis from the 15-minute data. Using Matplotlib, a figure object was created to which Pandas series objects were added using the Pandas plot method. The daily mean, minimum, maximum, and the 15-minute flow values were plotted together to illustrate the daily ranges of the data.
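
A condensed sketch of this workflow follows. The WSDL URL and the NWIS site/variable codes are assumptions for illustration, and the attribute path into the returned WaterML object may need adjustment for a particular service response.

```python
# Minimal sketch of the workflow described above, using suds-py3 to call a
# WaterOneFlow service and Pandas/Matplotlib to resample and plot.
# The WSDL URL and site/variable codes below are assumptions; adjust for
# the actual service. Dates are fixed here rather than "current date".
from suds.client import Client
import pandas as pd
import matplotlib.pyplot as plt

wsdl = "http://hydroportal.cuahsi.org/nwisuv/cuahsi_1_1.asmx?WSDL"  # assumed endpoint
client = Client(wsdl)

# GetValuesObject(location, variable, startDate, endDate, authToken)
response = client.service.GetValuesObject(
    "NWISUV:09380000", "NWISUV:00060", "2018-08-01", "2018-08-31", "")

# Pull dates and values from the WaterML response; this attribute path is
# typical for suds-parsed WaterML 1.1 but may vary by service.
values = response.timeSeries[0].values[0].value
series = pd.Series([float(v.value) for v in values],
                   index=pd.to_datetime([v._dateTime for v in values]))

# Resample the 15-minute record to daily mean, minimum, and maximum
daily = series.resample("D").agg(["mean", "min", "max"])

fig, ax = plt.subplots()
series.plot(ax=ax, label="15-minute discharge")
for stat in ("mean", "min", "max"):
    daily[stat].plot(ax=ax, label=f"daily {stat}")
ax.set_ylabel("Discharge (cfs)")
ax.legend()
plt.show()
```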

Composite Resource

ABSTRACT:

For environmental data measured by a variety of sensors and compiled from various sources, practitioners need tools that facilitate data access and analysis. Data are often organized in formats that are incompatible with each other and that prevent full data integration. Furthermore, analyses of these data are hampered by inadequate mechanisms for storage and organization. Ideally, data should be centrally housed and organized in an intuitive structure with established patterns for analyses. In reality, however, the data are often scattered across multiple files without a uniform structure, and these files must be transferred between users and loaded individually and manually for each analysis. This effort describes a process for compiling environmental data into a single, central database that can be accessed for analyses. We use the Logan River watershed and observed water level, discharge, specific conductance, and temperature as a test case, with a focus on analysis of flow partitioning. We formatted data files and organized them into a hierarchy, and we developed scripts that import the data into a database structured for hydrologic time series data. Scripts access the populated database to perform baseflow separation, flow balance, and mass balance calculations and to visualize the results. The analyses were compiled into a package of Python scripts, which can be modified and run by scientists and researchers to determine gains and losses in reaches of interest. To facilitate reproducibility, the database and associated scripts were shared on HydroShare as Jupyter Notebooks so that any user can access the data and perform the analyses, which facilitates standardization of these operations.
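
To illustrate the kind of analysis such scripts perform, the following is a minimal baseflow-separation sketch using the one-parameter Lyne-Hollick digital filter. The filter choice and parameter value are assumptions for illustration, not necessarily the method implemented in the shared notebooks.

```python
# Minimal baseflow-separation sketch using the one-parameter Lyne-Hollick
# digital filter (alpha = 0.925 is a common default). Illustrative only;
# not necessarily the exact method used in the shared notebooks.
import numpy as np
import pandas as pd

def baseflow_lyne_hollick(discharge: pd.Series, alpha: float = 0.925) -> pd.Series:
    q = discharge.to_numpy(dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])  # keep 0 <= quickflow <= Q
    return pd.Series(q - quick, index=discharge.index, name="baseflow")

# Example with synthetic daily discharge data
idx = pd.date_range("2017-01-01", periods=10, freq="D")
flow = pd.Series([5, 6, 12, 30, 22, 15, 10, 8, 7, 6], index=idx, dtype=float)
print(baseflow_lyne_hollick(flow))
```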

Composite Resource

ABSTRACT:

Hurricane Maria is an example of a natural disaster that disrupted infrastructure, raising concerns about water treatment failures and potential contamination of drinking water supplies. This dataset focuses on the water quality data collected in Puerto Rico after Hurricane Maria and is part of the larger collaborative RAPID Hurricane Maria project.

This resource consists of Excel workbooks and a SQLite database. Both were populated with data and metadata from discrete water quality analyses of drinking water systems in Puerto Rico impacted by Hurricane Maria, collected as part of the RAPID Maria project. Sampling and analysis were performed by a team from Virginia Tech in February-April 2018. Discrete samples were collected and returned to the lab for ICP-MS analysis. Field measurements were also made for temperature, pH, free and total chlorine, turbidity, and dissolved oxygen. Complete method and variable descriptions are contained in the workbooks and database. There are two separate workbooks: one for ICP-MS data and one for field data. All results are contained in the single database. Sampled sites correspond to several water distribution systems and source streams in southwestern Puerto Rico. Coordinates are included for the stream sites, but to preserve the security of the water distribution sites, those locations are identified only as within Puerto Rico.

The workbooks follow the specifications of the YAML Observations Data Archive (YODA) exchange format (https://github.com/ODM2/YODA-File). The workbooks are templates with sheets containing tables that map to entities in the Observations Data Model 2 (ODM2 - https://github.com/ODM2). Each sheet in the workbook contains directions for its completion and brief descriptions of the attributes. The data in the sheets were converted to a SQLite database following the ODM2 schema, which is also contained in this resource. Conversion was performed using prototype Python translation software (https://github.com/ODM2/YODA-Tools).
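
For readers who want to pull results directly from the SQLite database, a minimal query sketch follows. The database file name is a hypothetical placeholder, and the query assumes standard ODM2 table names (discrete sample results typically land in MeasurementResultValues); check the actual tables before relying on it.

```python
# Minimal sketch for querying an ODM2 SQLite database like the one described.
# 'rapid_maria_odm2.sqlite' is a hypothetical file name; table and column
# names follow the standard ODM2 schema.
import sqlite3
import pandas as pd

conn = sqlite3.connect("rapid_maria_odm2.sqlite")
query = """
SELECT v.VariableCode, u.UnitsName, mrv.DataValue, mrv.ValueDateTime
FROM MeasurementResultValues AS mrv
JOIN Results AS r ON r.ResultID = mrv.ResultID
JOIN Variables AS v ON v.VariableID = r.VariableID
JOIN Units AS u ON u.UnitsID = r.UnitsID
ORDER BY mrv.ValueDateTime
"""
df = pd.read_sql_query(query, conn)  # results with variable and unit labels
conn.close()
print(df.head())
```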

Composite Resource

ABSTRACT:

Since the closure of Glen Canyon Dam, the clear waters of the Colorado River have stripped sediment from beaches and sandbars in the Grand Canyon. In an attempt to distribute sand to rebuild beaches, high flow experiments (HFEs) have been conducted wherein large releases from Glen Canyon Dam are made over several days. HFEs are timed to follow the summer/fall monsoon season, when sand delivery from the Paria River is typically high; the Paria is the primary source of sand to the Colorado River in Marble Canyon. Unrelated reservoir operating rules coordinate annual releases from Lake Powell so that the storage contents of Lakes Powell and Mead are equalized. If these “equalization flows” are released when relatively little sand is supplied from the Paria River, they are likely to erode downstream sandbars, including those created by HFEs. Currently, there is no connection between the operations for reservoir equalization and those for implementation of HFEs. Our analysis examines potential changes to the equalization protocols to explore whether equalization flows can be delayed to avoid releases that cause sandbar depletion. Results indicate that delaying equalization in favor of sediment supply results in some inequity between Lakes Powell and Mead, but the imbalance is less than anticipated and less than with no equalization at all. Jointly considering sediment supply and equalization could help retain sediment within the Grand Canyon; however, even in years when the sand load meets the threshold for HFEs, the sediment supply may not be sufficient to balance out the volumes of equalization flows.

This data resource consists of the files used to support this work. The Word document and the PowerPoint presentation present the results of this work. The folder CRSS contains two subfolders. The 'model' folder contains a saved version of the Colorado River Simulation System, a model that may be implemented in RiverWare. This saved model includes slots corresponding to estimated sediment and slots generated by the implemented ruleset to govern equalization (Sediment Equalization Trigger, Years Without Sediment, 1-yr, 2-yr, 3-yr Equalization Delay). The 'ruleset' folder contains the rulesets used in this analysis; there are four rulesets, each corresponding to a scenario run. The folder Data contains R code for running statistical analysis on input sediment data and flow data. The raw input files needed to run the code are included and correspond to natural flow inputs (obtained from the Bureau of Reclamation) and sand load from the Paria River (obtained from the Grand Canyon Monitoring and Research Center). The Results folder includes (1) a table of Estimated Summer Sandload and (2) a spreadsheet of CRSS results for the various scenarios run, along with plots for comparing them.
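
As a small illustration of the kind of input-data analysis described (the resource's own version is in R), here is a Python sketch that sums Paria River sand load over a monsoon-season window and flags years meeting an HFE trigger. The file name, column names, season window, and threshold value are all hypothetical placeholders.

```python
# Minimal sketch: sum Paria River sand load over the monsoon season and flag
# years that meet an HFE trigger. 'paria_sand_load.csv', the column names,
# the July-November window, and the threshold are hypothetical placeholders.
import pandas as pd

sand = pd.read_csv("paria_sand_load.csv", parse_dates=["date"])
sand["year"] = sand["date"].dt.year

# Sum sand load over the July-November monsoon/HFE window for each year
window = sand[sand["date"].dt.month.between(7, 11)]
annual = window.groupby("year")["sand_load_tons"].sum()

HFE_THRESHOLD_TONS = 500_000  # placeholder; substitute the actual trigger value
print(annual[annual >= HFE_THRESHOLD_TONS])
```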

Composite Resource

ABSTRACT:

CUAHSI’s Water Data Services are community developed, open access, and available to everyone. Workshops are used to share and teach how these services can help researchers and teams with a variety of research tasks. We include an overview of how to develop data management plans, which are increasingly required by funders. Materials describe how to discover a broad array of water data: time series, samples, spatial coverages, published datasets, and case study workflows. CUAHSI apps and tools are introduced for expediting and documenting workflows. We have provided interactive curriculum and tutorials with examples of how to share your data within a group and publish your data with a DOI. Future training opportunities and funding opportunities for graduate students are listed.

This workshop was a featured event at the 2019 UCOWR Annual Water Resources Conference, Tuesday, June 11, 1:00-3:50 p.m., White Pine Meeting Room, Cliff Lodge, Snowbird, Utah.
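
As a small illustration of the sharing and publishing workflow the materials cover, below is a minimal sketch using the hsclient Python package for HydroShare. The credentials, metadata, and file name are placeholders, and the API shown is assumed from hsclient's documented usage; formal publication with a DOI is completed through the HydroShare interface.

```python
# Minimal sketch: create a HydroShare resource and upload a file using the
# hsclient package (pip install hsclient). Credentials, metadata, and the
# file name are placeholders; API usage assumed from hsclient documentation.
from hsclient import HydroShare

hs = HydroShare(username="your_username", password="your_password")

resource = hs.create()                    # new, empty resource
resource.metadata.title = "Example water data resource"
resource.metadata.abstract = "Demonstration upload for a workshop exercise."
resource.save()                           # push metadata to HydroShare

resource.file_upload("observations.csv")  # add a data file
print(resource.resource_id)               # identifier used for sharing
```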

Composite Resource
New Resource For My Best Demo Ever
Created: June 11, 2019, 7:51 p.m.
Authors: Jeffery S. Horsburgh · Amber Spackman Jones

ABSTRACT:

This is a really bad abstract.
