Ayman Nassar

Utah State University

Subject Areas: Hydroinformatics, Cyberinfrastructure, Hydrologic Modeling, Remote Sensing, GIS, Physical Hydrology, Machine Learning


 Contact

Mobile: 4358905557
Resources
Composite Resource
Progress report
Created: Oct. 26, 2017, 3:35 a.m.
Authors: Ayman Nassar

ABSTRACT:

This is the progress report of Ayman Nassar.

Composite Resource
Soil_properties_netcdf
Created: Aug. 31, 2021, 9:08 p.m.
Authors: Nassar, Ayman

ABSTRACT:

Soil Properties

Composite Resource
FlowFromSnow-Sciunit
Created: Nov. 29, 2021, 10:20 p.m.
Authors: Nassar, Ayman · Tarboton, David · Ahmad, Raza

ABSTRACT:

This resource illustrates how data and code can be combined to support hydrologic analyses. It was developed in June 2020 as part of a HydroLearn Hackathon.

Composite Resource
sciunit simple demo
Created: Dec. 17, 2021, 12:35 a.m.
Authors: Tarboton, David

ABSTRACT:

Sciunit simple demo

Composite Resource
Sciunit testing
Created: Jan. 18, 2022, 12:38 a.m.
Authors: Tarboton, David

ABSTRACT:

Illustration of the general idea of a use case for a sciunit container (a sketch of this command sequence, driven from a notebook, follows the list):
1. User creates a sciunit (sciunit create Project1)
2. User initiates interactive capturing (sciunit exec -i)
3. User does their work; for now, assume this is a series of shell commands
4. User saves or copies the sciunit
5. User opens the sciunit on a new computer and can re-execute the commands exactly as they ran on the old computer, from the command line, from bash shell escapes, or from Python in Jupyter
6. User sees a list of the commands that were captured in the sciunit and can edit them to reproduce the work
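
Assuming the sciunit CLI is installed, a minimal sketch of steps 1-4 driven from Python (for example, a notebook cell) might look like the following. The captured script name is illustrative only, and interactive capture (sciunit exec -i) would normally be run from a terminal rather than through subprocess.

```python
import subprocess

def sciunit(*args):
    """Run a sciunit CLI command, echoing it the way a notebook
    bash escape (!sciunit ...) would."""
    cmd = ["sciunit", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a new sciunit project.
sciunit("create", "Project1")

# 2-3. Capture an execution; here a single illustrative command is
#      captured instead of an interactive session.
sciunit("exec", "python", "flow_from_snow.py")

# 4. Copy the sciunit to the sciunit repository; sciunit prints a token
#    that `sciunit open <token>` uses on the new computer (step 5).
sciunit("copy")
```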

The setup:
On CUAHSI JupyterHub, the user has a resource (the one above) with some code that serves as a simple example of modeling the relationship between streamflow and snow.

There is a Python "dependency", GetDatafunctions.py, in a folder on CUAHSI JupyterHub. This folder is not part of the directory where the user is working; it is added to the Python path so the programs can execute. This is a simple example of a dependency the user may not be fully aware of (e.g., one that is part of the CUAHSI JupyterHub platform but not of other platforms).

An export PYTHONPATH command is used to add the dependency's folder to the Python path.
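
As a minimal sketch, assuming a placeholder path for the dependency folder (not the actual location on CUAHSI JupyterHub), the same effect can be obtained from within a notebook so that both the current Python session and any captured child processes see the dependency:

```python
import os
import sys

# Placeholder for the folder that contains GetDatafunctions.py on
# CUAHSI JupyterHub; substitute the real path for your deployment.
DEP_DIR = "/path/to/dependency/folder"

# Make the module importable in this Python session ...
sys.path.insert(0, DEP_DIR)

# ... and in child processes (e.g. scripts run under sciunit exec),
# which read PYTHONPATH from the environment, like `export PYTHONPATH=...`.
os.environ["PYTHONPATH"] = DEP_DIR + os.pathsep + os.environ.get("PYTHONPATH", "")
```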

Then the analysis is illustrated outside of sciunit.

Then sciunit is installed and the analysis repeated using sciunit exec.

Finally, sciunit copy copies the sciunit to the sciunit repository.

Then, on a new computer:

sciunit open retrieves the sciunit.

After repeating one of the executions, the sciunit directory has the dependency container unpacked.

Setting PYTHONPATH to the unpacked dependency allows the commands to be run on the new computer, just as they were on the old computer (a sketch of this step follows).
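
A minimal sketch of that step, assuming the project was opened as Project1 and the unpacked container sits somewhere under a ~/sciunit/Project1 directory (both assumptions; adjust to where sciunit actually unpacked the container on your system):

```python
import glob
import os
import sys

# Assumed sciunit project directory on the new computer; adjust as needed.
project_dir = os.path.expanduser("~/sciunit/Project1")

# Locate the unpacked copy of the dependency inside the container.
hits = glob.glob(os.path.join(project_dir, "**", "GetDatafunctions.py"),
                 recursive=True)

if hits:
    dep_dir = os.path.dirname(hits[0])
    # Point both this session and child processes at the unpacked dependency.
    sys.path.insert(0, dep_dir)
    os.environ["PYTHONPATH"] = dep_dir
    print("Using unpacked dependency from", dep_dir)
else:
    print("GetDatafunctions.py not found under", project_dir)
```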

This is the vision: running on the new computer with dependencies from the old computer resolved.

We would like the dependencies to be “installed” on the new computer so that they work with Jupyter and with Jupyter bash escape commands.

All of this is done from the command line; the Jupyter Notebook is just used as a convenient notepad.

Composite Resource

ABSTRACT:

This notebook demonstrates how to prepare a WRFHydro model on CyberGIS-Jupyter for Water (CJW) for execution on a supported High-Performance Computing (HPC) resource via the CyberGIS-Compute service. First-time users are highly encouraged to go through the [NCAR WRFHydro Hands-on Training on CJW](https://www.hydroshare.org/resource/d2c6618090f34ee898e005969b99cf90/) to get familiar with WRFHydro model basics, including compilation of the source code, preparation of forcing data, and typical model configurations. This notebook does not cover those topics and assumes users already have hands-on experience with local model runs.

CyberGIS-Compute is a CyberGIS-enabled web service that sits between CJW and HPC resources. It acts as a middleman that takes user requests (e.g., submission of a model) originating from CJW, carries out the actual submission of the model job on the target HPC resource, monitors job status, and retrieves outputs when the model execution has completed. The functionality of CyberGIS-Compute is exposed as a series of REST APIs. A Python client, the [CyberGIS-Compute SDK](https://github.com/cybergis/cybergis-compute-python-sdk), has been developed for use in the CJW environment; it provides a simple GUI that guides users through the job submission process. Prior to job submission, model configuration and input data should be prepared and arranged in a way that meets specific requirements, which vary by model and by its implementation in CyberGIS-Compute. We will walk through the requirements for WRFHydro below.
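
For orientation, a typical notebook cell on CJW creates the SDK client and launches this GUI roughly as follows. This is a sketch only: the service URL and port shown are assumptions, and the actual notebook provides the exact values to use.

```python
from cybergis_compute_client import CyberGISCompute

# Connect to the CyberGIS-Compute service from a CJW notebook.
# The URL and port below are assumed values; use those given in the notebook.
cybergis = CyberGISCompute(url="cgjobsup.cigi.illinois.edu",
                           port=443, protocol="HTTPS", isJupyter=True)

# Launch the job-submission GUI described above. The GUI walks through
# selecting the WRFHydro job, the prepared model folder, and the target HPC.
cybergis.show_ui()
```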

The general workflow for WRFHydro in CyberGIS-Compute works as follows:

1. User picks a Model_Version of WRFHydro to use;
2. User prepares configuration files and data for the model on CJW;
3. User submits configuration files and data to CyberGIS-Compute;
4. CyberGIS-Compute transfers configuration files and data to target HPC;
5. CyberGIS-Compute downloads the chosen Model_Version of the WRFHydro codebase on the HPC;
6. CyberGIS-Compute applies compile-time configuration files to the codebase, and compiles the source code on the fly;
7. CyberGIS-Compute applies run-time configuration files and data to the model;
8. CyberGIS-Compute submits the model job to the HPC scheduler for model execution;
9. CyberGIS-Compute monitors job status;
10. CyberGIS-Compute transfers model outputs from HPC to CJW upon user request;
11. User performs post-processing work on CJW;

Some steps in this notebook require user interaction. "Run cell by cell" is recommended. "Run All" may not work as expected.

How to run the notebook:
1) Click on the OpenWith button in the upper-right corner;
2) Select "CyberGIS-Jupyter for Water";
3) Open the notebook and follow instructions;
