Ayman Nassar

Utah State University

Subject Areas: Hydroinformatics, Cyberinfrastructure, Hydrologic Modeling, Remote Sensing, GIS, Physical Hydrology, Machine Learning


 Contact

Mobile 4358905557
Resources
Resource
Progress report
Created: Oct. 26, 2017, 3:35 a.m.
Authors: Ayman Nassar

ABSTRACT:

This is the progress report of Ayman Nassar.

Resource
Soil_properties_netcdf
Created: Aug. 31, 2021, 9:08 p.m.
Authors: Nassar, Ayman

ABSTRACT:

Soil Properties

Resource
FlowFromSnow-Sciunit
Created: Nov. 29, 2021, 10:20 p.m.
Authors: Nassar, Ayman · Tarboton, David · Ahmad, Raza

ABSTRACT:

This resource illustrates how data and code can be combined to support hydrologic analyses. It was developed in June 2020 as part of a HydroLearn Hackathon.

Resource
Sciunit simple demo
Created: Dec. 17, 2021, 12:35 a.m.
Authors: Tarboton, David

ABSTRACT:

Sciunit simple demo

Resource
Sciunit testing
Created: Jan. 18, 2022, 12:38 a.m.
Authors: Tarboton, David

ABSTRACT:

Illustration of the general idea of a use case for a sciunit container:
1. User creates a sciunit (sciunit create Project1).
2. User initiates interactive capturing (sciunit exec -i).
3. User does their work. For now, assume this is a series of shell commands.
4. User saves or copies the sciunit.
5. User opens the sciunit on a new computer and can re-execute the commands exactly as they would have on the old computer, from the command line, from bash shell escapes, or from Python in Jupyter.
6. User sees a list of the commands that were captured in the sciunit and could edit them to reproduce the work.

The setup:
On CUAHSI JupyterHub the user has a resource (the one above) with some code that provides a simple example of modeling the relationship between streamflow and snow.

There is a Python "dependency", GetDatafunctions.py, in a folder on CUAHSI JupyterHub. It is not part of the directory where the user is working; it is added to the Python path for the programs to execute. This is a simple example of a dependency the user may not be fully aware of (e.g., one that is part of the CUAHSI JupyterHub platform but not of other platforms).

An export PYTHONPATH command is used to add the dependency to the Python path.
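The effect of export PYTHONPATH can also be achieved from inside a running Python session via sys.path. The sketch below uses a hypothetical stub module created on the fly, not the actual GetDatafunctions.py:

```python
import importlib
import os
import sys
import tempfile

# A stand-in "dependency" that lives outside the working directory, playing
# the role of GetDatafunctions.py on CUAHSI JupyterHub. The module name is
# hypothetical, purely for illustration.
dep_dir = tempfile.mkdtemp()
with open(os.path.join(dep_dir, "getdata_stub.py"), "w") as f:
    f.write("def hello():\n    return 'data functions loaded'\n")

# In a shell this would be `export PYTHONPATH=$dep_dir:$PYTHONPATH`; inside
# a running interpreter, prepending to sys.path has the same effect.
sys.path.insert(0, dep_dir)
importlib.invalidate_caches()

import getdata_stub
message = getdata_stub.hello()
```

This mirrors the scenario above: the module is found only because its directory was added to the search path, which is exactly the kind of hidden dependency sciunit needs to capture.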

Then the analysis is illustrated outside of sciunit.

Then sciunit is installed and the analysis repeated using sciunit exec.

Finally, sciunit copy copies the sciunit to the sciunit repository.

Then, on a new computer:

sciunit open retrieves the sciunit.

After repeating one of the executions, the sciunit directory has the dependency container unpacked.

Setting the PYTHONPATH to the unpacked dependency allows the commands to be run on the new computer, just as they were on the old computer.

This is the vision - running on the new computer with dependencies from the old computer resolved.

It would be desirable for the dependencies to be “installed” on the new computer so that they work with Jupyter and Jupyter bash escape commands.

All is done from the command line - the Jupyter Notebook is just used as a convenient notepad.

Resource

ABSTRACT:

This notebook demonstrates how to prepare a WRFHydro model on CyberGIS-Jupyter for Water (CJW) for execution on a supported High-Performance Computing (HPC) resource via the CyberGIS-Compute service. First-time users are highly encouraged to go through the [NCAR WRFHydro Hands-on Training on CJW](https://www.hydroshare.org/resource/d2c6618090f34ee898e005969b99cf90/) to get familiar with WRFHydro model basics, including compilation of the source code, preparation of forcing data, and typical model configurations. This notebook does not cover those topics and assumes users already have hands-on experience with local model runs.

CyberGIS-Compute is a CyberGIS-enabled web service that sits between CJW and HPC resources. It acts as a middleman that takes user requests (e.g., submission of a model) originating from CJW, carries out the actual job submission on the target HPC resource, monitors job status, and retrieves outputs when the model execution has completed. The functionality of CyberGIS-Compute is exposed as a series of REST APIs. A Python client, the [CyberGIS-Compute SDK](https://github.com/cybergis/cybergis-compute-python-sdk), has been developed for use in the CJW environment; it provides a simple GUI to guide users through the job submission process. Prior to job submission, model configuration and input data must be prepared and arranged in a way that meets specific requirements, which vary by model and by its implementation in CyberGIS-Compute. We walk through the requirements for WRFHydro below.

The general workflow for WRFHydro in CyberGIS-Compute works as follows:

1. User picks a Model_Version of WRFHydro to use;
2. User prepares configuration files and data for the model on CJW;
3. User submits configuration files and data to CyberGIS-Compute;
4. CyberGIS-Compute transfers configuration files and data to the target HPC;
5. CyberGIS-Compute downloads the chosen Model_Version of the WRFHydro codebase on HPC;
6. CyberGIS-Compute applies compile-time configuration files to the codebase and compiles the source code on the fly;
7. CyberGIS-Compute applies run-time configuration files and data to the model;
8. CyberGIS-Compute submits the model job to the HPC scheduler for execution;
9. CyberGIS-Compute monitors job status;
10. CyberGIS-Compute transfers model outputs from HPC to CJW upon user request;
11. User performs post-processing work on CJW.

Some steps in this notebook require user interaction. "Run cell by cell" is recommended. "Run All" may not work as expected.

How to run the notebook:
1) Click on the "Open with" button in the upper-right corner;
2) Select "CyberGIS-Jupyter for Water";
3) Open the notebook and follow the instructions.

Resource
NWM_IGUIDE_DEMO
Created: Aug. 31, 2022, 3:43 p.m.
Authors: Nassar, Ayman · Tarboton, David · Kalyanam, Rajesh · Li, Zhiyu/Drew · Baig, Furqan

ABSTRACT:

This notebook demonstrates the setup of a typical WRF-Hydro model on HydroShare, leveraging different tools and services throughout the entire end-to-end modeling workflow. The notebook is designed so that the user/modeler can retrieve datasets relevant only to a user-defined spatial domain (for example, a watershed of interest) and time domain, using a graphical user interface (GUI) linked to HPC. To help users submit a job on HPC to run the model, they are provided with a user-friendly interface that abstracts away the details and complexities involved in HPC use, such as authorization, authentication, monitoring and scheduling of jobs, data and job management, and transferring data back and forth. Users can interact with this GUI to perform modeling work. The GUI is designed to allow users to 1) select the remote server where the HPC job will run, 2) upload the simulation directory, which contains the configuration files, 3) specify the parameters of the HPC job that the user is allowed to utilize, 4) set parameters related to model compilation, 5) follow up on the submitted job status, and 6) retrieve the model output files back to the local workspace. Once the model execution is completed, users can easily access the model outputs on HPC and retrieve them to the local workspace for visualization and analysis.

Resource
AORC Subsetting and Simulation (AWS vs. Cheyenne)
Created: Oct. 25, 2022, 3:52 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource was developed to compare two AORC data sources. We retrieved the old version of the AORC dataset from an Amazon S3 bucket, while the new version (AORC v1.1) was obtained from the Cheyenne supercomputer. We include a couple of functions in the Jupyter Notebook that help in subsetting and manipulating netCDF datasets, and they are not restricted to the AORC data.
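As a minimal sketch of one way to compare two versions of a forcing dataset, assuming both subsets share a variable and a time axis (the variable name T2D and the values below are synthetic stand-ins, not the actual notebook functions):

```python
import numpy as np
import xarray as xr

def rmse(ds_a, ds_b, var):
    """Root-mean-square difference of a shared variable between two datasets."""
    diff = ds_a[var] - ds_b[var]
    return float(np.sqrt((diff ** 2).mean()))

# Two synthetic stand-ins for the S3 (v1.0) and Cheyenne (v1.1) subsets:
# the second series is offset from the first by a constant 0.5.
time = np.arange(10)
ds_v10 = xr.Dataset({"T2D": ("time", np.linspace(280.0, 290.0, 10))},
                    coords={"time": time})
ds_v11 = xr.Dataset({"T2D": ("time", np.linspace(280.0, 290.0, 10) + 0.5)},
                    coords={"time": time})

score = rmse(ds_v10, ds_v11, "T2D")
```

With a constant 0.5 offset, the RMSE comes out to exactly 0.5, which makes the synthetic case easy to sanity-check.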

Resource
WBD HUC12 - Great Basin
Created: Feb. 8, 2023, 7:08 p.m.
Authors: Castronova, Anthony M.

ABSTRACT:

These are the HUC 12 watershed boundaries for the Great Basin.

Resource
FLINC_DEMO_May_08_2023
Created: May 8, 2023, 6:40 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This is a demo for FLINC using "FlowfromSnow" as an example.

Resource
CONUS_HUC12
Created: May 11, 2023, 7:35 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource includes a shapefile of HUC12 watershed boundaries for the entire CONUS.

Resource
WBD12
Created: May 30, 2023, 8:46 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource includes a shapefile of HUC12 watershed boundaries for the entire CONUS.

Resource
AORC Subset
Created: Nov. 14, 2023, 5:40 p.m.
Authors: Nassar, Ayman · Tarboton, David · Castronova, Anthony M.

ABSTRACT:

The objective of this HydroShare resource is to query the AORC v1.0 forcing data stored on HydroShare's THREDDS server and create a subset of this dataset for a designated watershed and timeframe. The user is prompted to define a temporal frame of interest, which specifies the start and end dates for the data subset. Additionally, the user is prompted to define a spatial frame of interest, which can be a bounding box or a shapefile, to subset the data spatially.

Before the subsetting is performed, data is queried, and geospatial metadata is added to ensure that the data is correctly aligned with its corresponding location on the Earth's surface. To achieve this, two separate notebooks were created - [this notebook](https://github.com/CUAHSI/notebook-examples/blob/main/thredds/query-aorc-thredds.ipynb) and [this notebook](https://github.com/CUAHSI/notebook-examples/blob/main/thredds/aorc-adding-spatial-metadata.ipynb) - which explain how to query the dataset and add geospatial metadata to AORC v1.0 data in detail, respectively. In this notebook, we call functions from the AORC.py script to perform these preprocessing steps, resulting in a cleaner notebook that focuses solely on the subsetting process.
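As a minimal sketch of the bounding-box case (not the actual AORC.py functions; the grid, the variable name RAINRATE, and all values below are synthetic stand-ins), a spatial subset with xarray might look like:

```python
import numpy as np
import xarray as xr

def subset_bbox(ds, lat_min, lat_max, lon_min, lon_max):
    """Clip a dataset with ascending 1-D latitude/longitude coords to a box."""
    return ds.sel(latitude=slice(lat_min, lat_max),
                  longitude=slice(lon_min, lon_max))

# Synthetic stand-in for an AORC-like hourly forcing grid.
lat = np.arange(35.0, 45.0, 0.5)       # 20 grid rows
lon = np.arange(-115.0, -105.0, 0.5)   # 20 grid columns
time = np.arange(24)                   # 24 hourly steps
ds = xr.Dataset(
    {"RAINRATE": (("time", "latitude", "longitude"),
                  np.random.rand(time.size, lat.size, lon.size))},
    coords={"time": time, "latitude": lat, "longitude": lon},
)

subset = subset_bbox(ds, 40.0, 42.0, -112.0, -110.0)
```

Note that slice-based selection assumes monotonically increasing coordinates; a grid with descending latitude would need the slice bounds reversed. A shapefile-based subset would follow the same pattern after converting the shape's extent to a bounding box (or masking cells by geometry).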

Resource
AORC 1.1 Subset
Created: March 18, 2024, 7:24 p.m.
Authors: Nassar, Ayman · Tarboton, David

ABSTRACT:

This HydroShare resource provides instructions on accessing and subsetting the AORC 1.1 dataset in Zarr format, which is hosted on an AWS bucket (https://noaa-nwm-retrospective-3-0-pds.s3.amazonaws.com/index.html). The AORC dataset encompasses the entire contiguous United States (CONUS) and spans from February 1, 1979, to January 31, 2023. The resource contains two Jupyter Notebooks:

1. "AORC1.1_data_retrieval_for_point.ipynb": This notebook illustrates how to retrieve the AORC dataset for a specific point location.
2. "AORC1.1_data_retrieval_for_spatial_domain.ipynb": This notebook demonstrates how to retrieve the AORC dataset for a specific spatial domain.
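The point-retrieval idea in the first notebook can be sketched with xarray's nearest-neighbor selection. The grid, the variable name T2D, and the coordinates below are synthetic stand-ins, and the commented S3 lines are an untested sketch with a placeholder store path:

```python
import numpy as np
import xarray as xr

# In the actual notebooks the dataset would be opened from the Zarr store on
# S3, roughly like (untested sketch, placeholder path):
#   import s3fs
#   fs = s3fs.S3FileSystem(anon=True)
#   ds = xr.open_zarr(s3fs.S3Map("s3://<bucket>/<path-to-zarr>", s3=fs))

# Synthetic stand-in grid: T2D is just lat + lon so results are predictable.
lat = np.linspace(35.0, 45.0, 21)
lon = np.linspace(-115.0, -105.0, 21)
ds = xr.Dataset(
    {"T2D": (("latitude", "longitude"), np.add.outer(lat, lon))},
    coords={"latitude": lat, "longitude": lon},
)

# Nearest-neighbor extraction at a point of interest (e.g. a gauge site).
point = ds.sel(latitude=41.74, longitude=-111.83, method="nearest")
```

method="nearest" snaps the requested coordinates to the closest grid cell, which is usually what is wanted when a station location does not fall exactly on a grid node.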

Resource

ABSTRACT:

This HydroShare resource is designed for retrieving and subsetting the NWM retrospective analysis datasets in Zarr format, which are hosted on the AWS bucket accessible at https://noaa-nwm-retrospective-3-0-pds.s3.amazonaws.com/index.html. The dataset offers spatial coverage across the entire CONUS and spans a time range from February 01, 1979, to January 31, 2023.
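A temporal subset of a retrospective record can be sketched with xarray's label-based time slicing. Everything below is a synthetic stand-in; the commented lines only sketch how the real Zarr store might be opened, with a placeholder path:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Real usage would open the retrospective Zarr store from S3, roughly like
# (untested sketch, placeholder path):
#   import s3fs
#   fs = s3fs.S3FileSystem(anon=True)
#   ds = xr.open_zarr(s3fs.S3Map("s3://noaa-nwm-retrospective-3-0-pds/<path-to-zarr>", s3=fs))

# Synthetic stand-in: one year of hourly values starting at the record's
# start date (February 1, 1979).
time = pd.date_range("1979-02-01", periods=8760, freq="h")
ds = xr.Dataset(
    {"streamflow": ("time", np.random.rand(time.size))},
    coords={"time": time},
)

# Temporal subset: keep one calendar month of the record.
june = ds.sel(time=slice("1979-06-01", "1979-06-30 23:00"))
```

Because .sel with a time slice is label-based and inclusive of both endpoints, the subset covers all 720 hourly steps of June 1979. On the real Zarr store, the same slice would be lazy, so only the requested chunks are downloaded.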
