| Authors: | |
|---|---|
| Owners: | This resource does not have an owner who is an active HydroShare user. Contact CUAHSI (help@cuahsi.org) for information on this resource. |
| Type: | Resource |
| Storage: | The size of this resource is 428.7 MB |
| Created: | Mar 08, 2026 at 4:35 p.m. (UTC) |
| Last updated: | Apr 23, 2026 at 5:24 p.m. (UTC) |
| Citation: | See how to cite this resource |
| Sharing Status: | Public |
| Views: | 35 |
| Downloads: | 2 |
| +1 Votes: | Be the first one to +1 this. |
| Comments: | No comments (yet) |
Abstract
This project investigates the ability of GRIME AI and the embedded Segment Anything Model 2 (SAM2) to detect shaded regions at a United States Geological Survey river site in Wisconsin, U.S.A. Our aim was to identify the deeply shadowed portions of water and snow under the bridge as shadows moved over time. We sparsely annotated five images from February 2026 (the 11th through the 23rd) with the GRIME AI SAGE module, using two classes: "shade" and "non-shade". The annotations were used to fine-tune and validate SAM2 in the "Deep Learning" toolkit in GRIME AI. The toolkit was then used to segment images from all days in February 2025 between 10 a.m. and 2 p.m. CST. Output "panel" files, which include the raw image, overlay mask, binary mask, and probability heatmap, were used to create a GIF and video of the predicted shade regions over the month. We found that, with the small training set and sparse annotations, the model detects only the darkest shade areas and misses shade areas closer to the shade/non-shade boundary. Further refinement (more training data and richer annotations) is necessary to improve segmentation performance.
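The final step described above, turning the per-image "panel" outputs into an animated GIF, can be sketched in a few lines of Python with Pillow. This is a minimal illustration, not GRIME AI's actual export code: the directory layout, `panel_*.png` naming, and chronological sort-by-filename are all assumptions.

```python
# Hypothetical sketch: assemble panel PNGs (raw image + masks + heatmap
# composites) into an animated GIF. Assumes one PNG per timestamp and
# that lexicographic filename order matches chronological order.
from pathlib import Path
from PIL import Image

def panels_to_gif(panel_dir, out_path, frame_ms=500):
    """Collect panel PNGs in sorted order and write them as a looping GIF."""
    frames = [Image.open(p).convert("RGB")
              for p in sorted(Path(panel_dir).glob("*.png"))]
    if not frames:
        raise ValueError(f"no panel images found in {panel_dir}")
    frames[0].save(
        out_path,
        save_all=True,             # write every frame, not just the first
        append_images=frames[1:],  # remaining frames, in order
        duration=frame_ms,         # milliseconds per frame
        loop=0,                    # 0 = loop forever
    )
    return len(frames)
```

A shorter frame duration (e.g. 100 ms) would make a month of midday images play as a quick time-lapse of the moving shade boundary.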
Subject Keywords
Coverage
Spatial
Temporal
| Start Date: | |
|---|---|
| End Date: | |
Content
Related Resources
| Title | Owners | Sharing Status | My Permission |
|---|---|---|---|
| PEP2026: GRIME AI Data and Products for the Pixels to Environmental Patterns Workshop | Troy Gilmore · Nawaraj Shrestha · John Stranzl Jr. · Zach Nickerson | Public & Shareable | Open Access |
Credits
Funding Agencies
This resource was created using funding from the following sources:
| Agency Name | Award Title | Award Number |
|---|---|---|
| National Science Foundation | Innovative Resources: Cyberinfrastructure and community to leverage ground-based imagery in ecohydrological studies | 2411065 |
Contributors
People or organizations that contributed technically, materially, or financially, or that provided general support for the creation of the resource's content, but are not considered authors.
| Name | Organization | Address | Phone | Author Identifiers |
|---|---|---|---|---|
| John Stranzl | University of Nebraska | NC, US | | |
| Troy E. Gilmore | University of Nebraska | NE, US | 402-470-1741 | |
How to Cite
This resource is shared under the Creative Commons Attribution 4.0 International (CC BY 4.0) license: http://creativecommons.org/licenses/by/4.0/