Amber Spackman Jones

Utah State University | Research Engineer

Subject Areas: Hydrology, Water Quality Monitoring, Water Quality Modeling, Hydroinformatics, Cyberinfrastructure, Data Management



Resources
Composite Resource
iUTAH Research Data Policy
Created: Aug. 25, 2016, 9:30 p.m.
Authors: Jeffery S. Horsburgh · Amber Jones

ABSTRACT:

iUTAH (innovative Urban Transitions and Arid-region Hydrosustainability) is a collaborative research and training program in Utah. As part of project requirements, iUTAH developed a data policy that seeks to maximize the impact and broad use of datasets collected within iUTAH facilities and by iUTAH research teams. This policy document focuses on assisting iUTAH investigators in creating and sharing high-quality data. The policy defines the data types generated as part of iUTAH and clarifies timelines for associated data publication. It specifies the requirements for submittal of a data collection plan, the creation of metadata, and the publication of datasets. It clarifies requirements for cases involving human subjects as well as raw data and analytical products. The policy includes guidelines for data and metadata standards, storage and archival, curation, and data use and citation. Agreements for data publishers and data use are also included as appendices.

Composite Resource
iUTAH Research Data Policy
Created: Jan. 26, 2017, midnight
Authors: Jeffery S. Horsburgh · Amber Jones

ABSTRACT:

iUTAH (innovative Urban Transitions and Arid-region Hydrosustainability) is a collaborative research and training program in Utah. As part of project requirements, iUTAH developed a data policy that seeks to maximize the impact and broad use of datasets collected within iUTAH facilities and by iUTAH research teams. This policy document focuses on assisting iUTAH investigators in creating and sharing high-quality data. The policy defines the data types generated as part of iUTAH and clarifies timelines for associated data publication. It specifies the requirements for submittal of a data collection plan, the creation of metadata, and the publication of datasets. It clarifies requirements for cases involving human subjects as well as raw data and analytical products. The policy includes guidelines for data and metadata standards, storage and archival, curation, and data use and citation. Agreements for data publishers and data use are also included as appendices.

Composite Resource
Quality Control Experiment
Created: Feb. 22, 2018, 11:37 p.m.
Authors: Amber Jones · Dave Eiriksson · Jeffery S. Horsburgh

ABSTRACT:

This resource contains data resulting from an effort to examine subjectivity in the process of performing quality control on water quality data measured by in situ sensors. Participants (n=27) included both novices unfamiliar with quality control and technicians experienced in it. Each participant performed quality control post processing on the same datasets: one calendar year (2014) of water temperature, pH, and specific conductance. Participants were provided with a consistent set of guidelines, field notes, and tools. Participants used ODMTools (https://github.com/ODM2/ODMToolsPython/) to perform the quality control exercise. This resource consists of:
1. Processed Results: Each file in this folder corresponds to one of the variables for which quality control was performed. Each row corresponds to a single time stamp and each column corresponds to the processed results generated by each participant. The first column corresponds to the original, raw data.
2. Survey Data: The files in this folder relate to an exit survey administered to participants upon completion of the exercise. They include the survey questions (pdf), the full Qualtrics output (QualityControlSurvey.pdf), data and metadata files organized and encoded for display in the Survey Data Viewer (http://data.iutahepscor.org/surveys/survey/QCEXP) (QCExperimentSurveyDataFile.csv, QCExperimentSurveyMetadata.csv), and a file used to organize data for plots in the associated paper.
3. Field Record: Participants were provided this document, which gives information about the field maintenance activities relevant to performing QC.
4. Scripts: Each file in this folder corresponds to a script automatically generated by ODMTools while performing quality control. The files are organized by user ID and by variable.
5. Code and Analysis: The script used to generate the figures in the associated paper. Note that novice users correspond to IDs 1-22 and experienced users to IDs 25-38. This folder also includes subsets of the data organized in supporting files used to generate Figure 6 (ExpGapVals.xlsx) and Table 5 (NoDataCount.xlsx).
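Given the wide layout described in item 1 (one row per time stamp, the first column the raw record, and one column per participant), a short pandas pass can quantify how often each participant's processed value departs from the raw data. This is an illustrative sketch only; the column names and values below are hypothetical, not taken from the actual files.

```python
import pandas as pd

# Hypothetical slice of a processed-results file: one row per time stamp,
# first column the raw record, remaining columns one per participant.
df = pd.DataFrame({
    "Raw":    [12.1, 12.3, 45.0, 12.2],
    "User01": [12.1, 12.3, 12.25, 12.2],  # this participant replaced the spike
    "User02": [12.1, 12.3, 45.0, 12.2],   # this participant left the spike in place
})

# Fraction of time stamps each participant edited relative to the raw series.
edited = df.drop(columns="Raw").ne(df["Raw"], axis=0).mean()
```

A summary like `edited` makes it easy to compare how aggressively novice and experienced participants modified the record.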

Composite Resource

ABSTRACT:

This resource contains a Jupyter Notebook that uses Python to access and visualize data for the USGS flow gage on the Colorado River at Lee’s Ferry, AZ (09380000). This site monitors water quantity and quality for water released from Glen Canyon Dam that then flows through the Grand Canyon. To call these services in Python, the suds-py3 package was used. Using this package, a “GetValuesObject” request, as defined by WaterOneFlow, was passed to the server using inputs for the web service url, site code, variable code, and dates of interest. For this case, 15-minute discharge from August 1, 2018 to the current date was used. The web service returned an object from which the dates and the data values were obtained, as well as the site name. The Python libraries Pandas and Matplotlib were used to manipulate and view the results. The time series data were converted to lists and then to a Pandas series object. Using the “resample” function of Pandas, values for mean, minimum, and maximum were determined on a daily basis from the 15-minute data. Using Matplotlib, a figure object was created to which Pandas series objects were added using the Pandas plot method. The daily mean, minimum, maximum, and the 15-minute flow values were added to illustrate the differences in the daily ranges of data.
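The daily aggregation step described above can be sketched with pandas alone. The WaterOneFlow retrieval via suds-py3 is omitted here because it requires a live web service call; the 15-minute series below is synthetic stand-in data, and the variable names are illustrative.

```python
import pandas as pd

# Synthetic stand-in for the 15-minute discharge values that the
# WaterOneFlow GetValuesObject request would return (96 values per day).
index = pd.date_range("2018-08-01", periods=192, freq="15min")  # two days
discharge = pd.Series([1.0] * 96 + [3.0] * 96, index=index)

# Resample the 15-minute record to daily mean, minimum, and maximum,
# mirroring the aggregation described in the abstract.
daily = discharge.resample("D").agg(["mean", "min", "max"])
```

Plotting `daily` alongside the raw 15-minute series with Matplotlib then reproduces the comparison of daily ranges described above.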

Composite Resource

ABSTRACT:

For environmental data measured by a variety of sensors and compiled from various sources, practitioners need tools that facilitate data access and analysis. Data are often organized in formats that are incompatible with each other and that prevent full data integration. Furthermore, analyses of these data are hampered by inadequate mechanisms for storage and organization. Ideally, data should be centrally housed and organized in an intuitive structure with established patterns for analyses. In reality, however, the data are often scattered across multiple files without uniform structure, which must be transferred between users and called individually and manually for each analysis. This effort describes a process for compiling environmental data into a single, central database that can be accessed for analyses. We use the Logan River watershed and observed water level, discharge, specific conductance, and temperature as a test case. Of interest is analysis of flow partitioning. We formatted data files and organized them into a hierarchy, and we developed scripts that import the data to a database with a structure designed for hydrologic time series data. Scripts access the populated database to determine baseflow separation, flow balance, and mass balance and to visualize the results. The analyses were compiled into a package of Python scripts, which can be modified and run by scientists and researchers to determine gains and losses in reaches of interest. To facilitate reproducibility, the database and associated scripts were shared to HydroShare as Jupyter Notebooks so that any user can access the data and perform the analyses, which facilitates standardization of these operations.
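Baseflow separation, one of the analyses mentioned, is commonly done with a one-parameter recursive digital filter (the Lyne-Hollick filter). The sketch below is a minimal single-pass version under that assumption; it is a generic illustration, not the code from this resource.

```python
def baseflow_lyne_hollick(flow, alpha=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.

    flow: sequence of streamflow values at a fixed time step.
    alpha: filter parameter (values near 0.925 are typical in the literature).
    Returns a list of baseflow values constrained to 0 <= baseflow <= flow.
    """
    quick = 0.0  # filtered quickflow component
    baseflow = []
    prev = flow[0]
    for q in flow:
        quick = alpha * quick + 0.5 * (1.0 + alpha) * (q - prev)
        b = q - max(quick, 0.0)
        baseflow.append(min(max(b, 0.0), q))
        prev = q
    return baseflow

# Example: a storm pulse over a steady base of 2 units
flows = [2.0, 2.0, 8.0, 14.0, 9.0, 5.0, 3.0, 2.0, 2.0]
bf = baseflow_lyne_hollick(flows)
```

In practice the filter is often run in multiple forward and backward passes to smooth the separation; the single pass here keeps the idea visible.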

Composite Resource

ABSTRACT:

Hurricane Maria is an example of a natural disaster that caused disruptions to infrastructure, raising concerns about water treatment failures and potential contamination of drinking water supplies. This dataset focuses on the water quality data collected in Puerto Rico after Hurricane Maria and is part of the larger collaborative RAPID Hurricane Maria project.

This resource consists of Excel workbooks and a SQLite database. Both were populated with data and metadata corresponding to discrete water quality analysis of drinking water systems in Puerto Rico impacted by Hurricane Maria, collected as part of the RAPID Maria project. Sampling and analysis were performed by a team from Virginia Tech in February-April 2018. Discrete samples were collected and returned to the lab for ICP-MS analysis. Sampling was also conducted in the field for temperature, pH, free and total chlorine, turbidity, and dissolved oxygen. Complete method and variable descriptions are contained in the workbooks and database. There are two separate workbooks: one for ICP-MS data and one for field data. All results are contained in the single database. Sampled sites correspond to several water distribution systems and source streams in southwestern Puerto Rico. Coordinates are included for the stream sites, but to preserve the security of the water distribution sites, those locations are only identified as within Puerto Rico.

The workbooks follow the specifications for the YAML Observations Data Archive (YODA) exchange format (https://github.com/ODM2/YODA-File). The workbooks are templates with sheets containing tables that are mapped to entities in the Observations Data Model 2 (ODM2 - https://github.com/ODM2). Each sheet in the workbook contains directions for its completion and brief descriptions of the attributes. The data in the sheets were converted to an SQLite database following the ODM2 schema, which is also contained in this resource. Conversion was performed using prototype Python translation software (https://github.com/ODM2/YODA-Tools).
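The conversion described above, tabular workbook sheets mapped to database tables, can be illustrated with pandas and sqlite3. The sheet contents, column names, and table name below are placeholders for illustration, not the actual ODM2 schema or the YODA-Tools code.

```python
import sqlite3
import pandas as pd

# Stand-in for one sheet of a YODA-style workbook (columns are illustrative,
# not the actual ODM2 attribute names).
results = pd.DataFrame({
    "SiteCode": ["PR-01", "PR-01", "PR-02"],
    "Variable": ["pH", "Turbidity", "pH"],
    "DataValue": [7.2, 1.5, 6.9],
})

# Write the sheet to a table in an SQLite database, then query it back.
conn = sqlite3.connect(":memory:")
results.to_sql("Results", conn, index=False)
n_ph = conn.execute(
    "SELECT COUNT(*) FROM Results WHERE Variable = 'pH'"
).fetchone()[0]
```

A real conversion would read each workbook sheet with `pd.read_excel` and enforce the ODM2 schema's keys and types rather than letting `to_sql` infer them.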

Composite Resource

ABSTRACT:

Since the closing of Glen Canyon Dam, the clear waters of the Colorado River have stripped sediment from beaches and sandbars in the Grand Canyon. In an attempt to distribute sand to rebuild beaches, high flow experiments (HFEs) have been conducted wherein large releases from Glen Canyon Dam are made over several days. HFE events are timed to follow the summer/fall monsoon season, when sand delivery from the Paria River is typically high, given that the Paria is the primary source of sand to the Colorado River in Marble Canyon. Unrelated reservoir operating rules coordinate annual releases from Lake Powell so that the storage contents of Lakes Powell and Mead are equalized. If these “equalization flows” are released when there is relatively little sand supplied from the Paria River, they are likely to erode downstream sandbars, including those created by HFEs. Currently, there is no connection between the operations for reservoir equalization and for implementation of HFEs. Our analysis examines potential changes to the equalization protocols to explore whether equalization flows can be delayed to avoid releases that cause sandbar depletion. Results indicate that delaying equalization in favor of sediment supply results in some inequity for Lakes Powell and Mead, but the imbalance is less than anticipated and less than with no equalization present. Jointly considering sediment supply and equalization could help retain sediment within the Grand Canyon; however, even in years when the sand load meets the threshold for HFEs, the sediment supply may not be sufficient to balance the volumes of equalization flows.

This data resource consists of the files used to support this work. The Word document and the PowerPoint presentation present the results of this work. The folder CRSS contains two other folders. One folder, 'model', contains a saved version of the Colorado River Simulation System, a model that may be implemented in RiverWare. This saved model includes slots corresponding to estimated sediment and slots generated by the implemented ruleset to govern equalization (Sediment Equalization Trigger, Years Without Sediment, 1-yr, 2-yr, 3-yr Equalization Delay). The 'ruleset' folder contains the four rulesets used in this analysis, each corresponding to a scenario run. The folder Data contains R code for running statistical analysis on input sediment data and flow data. The raw input files to run the code are included; they correspond to natural flow inputs (obtained from the Bureau of Reclamation) and sand load from the Paria River (obtained from the Grand Canyon Monitoring and Research Center). The Results folder includes (1) a table of estimated summer sand load and (2) a spreadsheet of CRSS results for the various scenarios run, along with plots comparing them.

Composite Resource

ABSTRACT:

This project used Budyko-based methods to determine the elasticity and sensitivity of 29 subbasins in the Colorado River Basin. Elasticity and sensitivity are metrics used to determine the relative expected changes in runoff given changes in precipitation and temperature, respectively. We used publicly available data to determine long-term averages for temperature, precipitation, and runoff for principal Colorado River subbasins. Given those data, we used Budyko-based methods to estimate the elasticity and sensitivity of each subbasin to changes in temperature and precipitation. We determined the aridity index of each subbasin and the Budyko parameter (w), which aggregates watershed storage characteristics. Subcatchments located in the Upper Basin, driven mostly by snowmelt, have a lower aridity index and higher w value than those in the Lower Basin, driven by monsoonal storm events. The Paria and the Little Colorado River subbasins are particularly sensitive to changes in precipitation and temperature. To identify the onset of direct human impacts, we used a double mass curve breakpoint analysis on a single subcatchment. Two breakpoints were identified, 1963 and 1988, corresponding to human impact and climate change, respectively.
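A common Budyko-type formulation with a storage parameter w is Fu's equation, E/P = 1 + φ − (1 + φ^w)^(1/w), where φ = PET/P is the aridity index. The sketch below uses it to estimate the precipitation elasticity of runoff numerically; it is a generic illustration of the approach, not necessarily the exact formulation or parameter values used in this project.

```python
def fu_evap_ratio(aridity, w):
    """Fu's curve: fraction of precipitation lost to evapotranspiration."""
    return 1.0 + aridity - (1.0 + aridity ** w) ** (1.0 / w)

def precip_elasticity(p, pet, w, h=1e-4):
    """Numerical elasticity of runoff to precipitation: (dQ/dP) * (P/Q)."""
    def runoff(precip):
        return precip * (1.0 - fu_evap_ratio(pet / precip, w))
    dq_dp = (runoff(p + h) - runoff(p - h)) / (2.0 * h)
    return dq_dp * p / runoff(p)

# Illustrative contrast: a wetter, snowmelt-type subbasin (low aridity index)
# versus a drier, monsoon-type subbasin (high aridity index), same w.
eps_wet = precip_elasticity(p=800.0, pet=600.0, w=2.0)
eps_dry = precip_elasticity(p=400.0, pet=900.0, w=2.0)
```

Elasticities above 1 mean runoff changes proportionally more than precipitation, and the drier basin shows the larger elasticity, consistent with the sensitivity of arid subbasins noted in the abstract.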

This data resource includes a document and a PowerPoint presentation reporting the key findings of this work. We include the code, input, and output files used to perform the analyses, all of which are described in the readme.

Composite Resource

ABSTRACT:

CUAHSI’s Water Data Services are community developed, open access, and available to everyone. Workshops are used to share how these services can support researchers and teams across a variety of research tasks. We include an overview of how to develop data management plans, which are increasingly required by funders. Materials describe how to discover a broad array of water data: time series, samples, spatial coverages, published datasets, and case study workflows. CUAHSI apps and tools are introduced for expediting and documenting workflows. We have provided interactive curriculum and tutorials with examples of how to share your data within a group and publish your data with a DOI. Future training opportunities and funding opportunities for graduate students are listed.

This workshop was a featured event at the 2019 UCOWR Annual Water Resources Conference, Tuesday, June 11, 1:00-3:50 p.m., White Pine Meeting Room, Cliff Lodge, Snowbird, Utah.

Composite Resource
PyHydroQC Sensor Data QC: Single Site Example
Created: Dec. 9, 2020, 4:21 a.m.
Authors: Jones, Amber Spackman

ABSTRACT:

This resource contains an example script for using the software package PyHydroQC. PyHydroQC was developed to identify and correct anomalous values in time series data collected by in situ aquatic sensors. For more information, see the code repository: https://github.com/AmberSJones/PyHydroQC and the documentation: https://ambersjones.github.io/PyHydroQC/. The package may be installed from the Python Package Index.

This script applies the functions to data from a single site in the Logan River Observatory, which is included in the repository. The data collected in the Logan River Observatory are sourced at http://lrodata.usu.edu/tsa/ or on HydroShare: https://www.hydroshare.org/search/?q=logan%20river%20observatory.

Anomaly detection methods include ARIMA (AutoRegressive Integrated Moving Average) and LSTM (Long Short Term Memory). These are time series regression methods that detect anomalies by comparing model estimates to sensor observations and labeling points as anomalous when they exceed a threshold. There are multiple possible approaches for applying LSTM for anomaly detection/correction:
- Vanilla LSTM: uses past values of a single variable to estimate the next value of that variable.
- Multivariate Vanilla LSTM: uses past values of multiple variables to estimate the next value for all variables.
- Bidirectional LSTM: uses past and future values of a single variable to estimate a value for that variable at the time step of interest.
- Multivariate Bidirectional LSTM: uses past and future values of multiple variables to estimate a value for all variables at the time step of interest.
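The detection logic shared by these models, comparing estimates to observations and flagging residuals beyond a threshold, can be sketched independently of any particular model. The robust median/MAD rule below is one common illustration, not PyHydroQC's actual threshold determination.

```python
import numpy as np

def detect_anomalies(observed, estimated, n_sigma=3.0):
    """Flag points whose model residual is an outlier by a robust (MAD) rule."""
    residuals = np.asarray(observed) - np.asarray(estimated)
    center = np.median(residuals)
    mad = np.median(np.abs(residuals - center))
    threshold = n_sigma * 1.4826 * mad  # 1.4826 scales MAD to a std-dev estimate
    return np.abs(residuals - center) > threshold

# Observations with one spike versus a smooth model estimate
obs = np.array([10.0, 10.2, 10.1, 25.0, 10.3, 10.2])
est = np.array([10.1, 10.1, 10.2, 10.2, 10.2, 10.1])
flags = detect_anomalies(obs, est)
```

A median-based threshold is used here because a large anomaly would inflate an ordinary standard deviation and mask itself.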

The correction approach uses piecewise ARIMA models. Each group of consecutive anomalous points is considered as a unit to be corrected. Separate ARIMA models are developed for valid points preceding and following the anomalous group. Model estimates are blended to achieve a correction.
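The blending step can be sketched on its own: given estimates from a model fit to valid data before the gap (run forward) and from one fit after it (run backward), weight them linearly across the anomalous group. The linear weighting below is an assumption for illustration, not necessarily the package's exact blending rule.

```python
import numpy as np

def blend_corrections(forward_fc, backward_fc):
    """Linearly blend forward and backward forecasts across an anomalous gap.

    The weight shifts from the forward model at the start of the gap toward
    the backward model at the end, so each end honors its neighboring valid data.
    """
    forward_fc = np.asarray(forward_fc, dtype=float)
    backward_fc = np.asarray(backward_fc, dtype=float)
    n = len(forward_fc)
    w = np.linspace(1.0, 0.0, n)  # weight on the forward forecast
    return w * forward_fc + (1.0 - w) * backward_fc

# Hypothetical three-point gap: forward model projects 10.0, backward projects 12.0
corrected = blend_corrections([10.0, 10.0, 10.0], [12.0, 12.0, 12.0])
```

The blend transitions smoothly from the pre-gap level to the post-gap level instead of honoring only one side.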

The anomaly detection and correction workflow involves the following steps:
1. Retrieving data
2. Applying rules-based detection to screen data and apply initial corrections
3. Identifying and correcting sensor drift and calibration (if applicable)
4. Developing a model (e.g., ARIMA or LSTM)
5. Applying the model to make time series predictions
6. Determining a threshold and detecting anomalies by comparing sensor observations to modeled results
7. Widening the window over which an anomaly is identified
8. Aggregating detections resulting from multiple models
9. Making corrections for anomalous events
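Step 2 above, rules-based screening, can be illustrated with plain pandas: flag values outside a plausible sensor range and flag runs of repeated identical values, a common symptom of a stalled sensor. The range limits and run length below are arbitrary examples for illustration, not the package's functions or defaults.

```python
import pandas as pd

def rules_screen(series, low, high, max_repeats=3):
    """Return a boolean mask of points failing simple plausibility rules."""
    out_of_range = (series < low) | (series > high)
    # Label runs of identical values: a new run starts wherever the value changes.
    run_id = series.ne(series.shift()).cumsum()
    run_len = series.groupby(run_id).transform("size")
    stuck = run_len > max_repeats
    return out_of_range | stuck

# Hypothetical water temperature record with one impossible value and a stuck run
temps = pd.Series([8.1, 8.2, -40.0, 8.3, 5.0, 5.0, 5.0, 5.0, 8.4])
flags = rules_screen(temps, low=0.0, high=30.0)
```

Points flagged at this stage can receive initial corrections (e.g., interpolation) before the model-based detection in steps 4-8.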

Instructions to run the notebook through the CUAHSI JupyterHub:
1. Click "Open with..." at the top of the resource and select the CUAHSI JupyterHub. You may need to sign into CUAHSI JupyterHub using your HydroShare credentials.
2. Select 'Python 3.8 - Scientific' as the server and click Start.
3. From your JupyterHub directory, click on the PyHydroQC_example.ipynb file.
4. Execute each cell in the code by clicking the Run button.

Composite Resource

ABSTRACT:

This resource contains the supporting data and code files for the analyses presented in "Toward automating post processing of aquatic sensor data," a manuscript submitted to the journal Environmental Modelling & Software. This paper describes PyHydroQC, a Python package developed to identify and correct anomalous values in time series data collected by in situ aquatic sensors. For more information on PyHydroQC, see the code repository (https://github.com/AmberSJones/PyHydroQC) and the documentation (https://ambersjones.github.io/PyHydroQC/). The package may be installed from the Python Package Index (more info: https://packaging.python.org/tutorials/installing-packages/).

Included in this resource are input data, Python scripts to run the package on the input data (anomaly detection and correction), results from running the algorithm, and Python scripts for generating the figures in the manuscript. The organization and structure of the files are described in detail in the readme file. The input data were collected as part of the Logan River Observatory (LRO). The data in this resource represent a subset of data available for the LRO and were compiled by querying the LRO’s operational database. All available data for the LRO can be sourced at http://lrodata.usu.edu/tsa/ or on HydroShare: https://www.hydroshare.org/search/?q=logan%20river%20observatory.

There are two sets of scripts in this resource: (1) scripts that reproduce plots for the paper using saved results, and (2) code used to generate the complete results for the series in the case study. While all figures can be reproduced, there are challenges to running the code for the complete results (it is computationally intensive, different results will be generated due to the stochastic nature of the models, and the code was developed with an early version of the package), which is why the saved results are included in this resource. For a simple example of running PyHydroQC functions for anomaly detection and correction on a subset of data, see this resource: https://www.hydroshare.org/resource/92f393cbd06b47c398bdd2bbb86887ac/.
