Reflections of an Upstream Tech intern: national hydropower assessment project

Kathryn Garcia is a Stanford University undergraduate student studying computer science with an emphasis on artificial intelligence. She interned with Upstream Tech this winter and spring to help assess the hydropower potential of man-made conduits in the Western United States. This work is a part of a larger nationwide hydropower assessment in partnership with Oak Ridge National Labs.

UPSTREAM TECH
Jun 25, 2021

Did you know that machine learning techniques can be used to harness a previously untapped, cost effective source of hydroelectric power?

With the addition of a hydropower turbine, a single elevation drop of 4 meters could yield an annual energy output of 4,000 MWh, enough clean energy to power 400 homes at a low cost. We are particularly interested in elevation drops in man-made conduits (e.g., pipelines, aqueducts, irrigation ditches, and water conveyance canals) because this type of small hydropower development does not require the construction of new dams and raises minimal environmental concerns.

The challenge has been finding these small physical sites across the vast landscape of the United States. Until relatively recently, the only option was to identify promising locations manually, which made searching for these elevation drops at scale infeasible. However, with improvements in the accuracy of machine learning techniques and an increase in computing power, we now have the capability to locate these drops and to scale our searches to a national level.

Over the last six months, I’ve worked with Kevin Altamirano, a machine learning engineer on the Upstream Tech team, to develop a convolutional neural network (CNN) that identifies drop locations in man-made canals, and to compute metrics that estimate the flow capacity at each location. Our CNN can be scaled to national and international levels to search for drops, and in addition, our object detection pipeline can be applied to other use cases in satellite imagery.

An example of a viable drop location in an irrigation canal.

In the pilot phase, our team was able to leverage Upstream Tech’s infrastructure to build an object detection pipeline that located elevation drops in Colorado. They additionally quantified the drops' hydropower potential. My task was to increase the accuracy of the CNN and our hydropower estimations, as well as increase the efficiency of the search pipeline. I was excited to expand the project to analyze the other Western states, scaling it far beyond the initial geographic scope.

The states of interest are colored above.

Over the course of the next several months, I went through two stages of development. First, I fine-tuned our object detection model to identify and locate the drops of interest. Second, I augmented each drop with information on the flow capacity at that location.

Stage 1: CNN Development

At the start of this project, I thought I could locate these drops myself. I began using QGIS, an application for analyzing geospatial data, to locate the drops. Not only were there 17 states to analyze, but the networks of water, or flowlines, also look different in each state and contain inconsistencies that are difficult to track. After hours of searching for one viable drop, I knew this could not be done by hand. That is why it was important to develop a CNN that could search an entire state within a day and efficiently process the flowlines to identify the drops of interest.

Object Detection

The CNN’s task of drop identification and location falls under the class of techniques called object detection. By analyzing the pixels of satellite imagery, our CNN was trained to output bounding box predictions of where the drops are located. These bounding boxes are rectangles that describe each drop’s spatial location, along with the CNN’s confidence in the detection. It is important to note that we only processed satellite images within a fixed distance of flowlines to ensure that we found elevation changes with water flow.
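
As a rough illustration of how that spatial filter might work, the sketch below keeps only imagery tiles that intersect a buffered flowline network. The file names, the 100-meter buffer distance, and the projection are illustrative assumptions, not values taken from the project.

```python
# A minimal sketch of filtering imagery tiles to those near flowlines.
# File names, the 100 m buffer distance, and the CRS are illustrative assumptions.
import geopandas as gpd

flowlines = gpd.read_file("flowlines.gpkg").to_crs(epsg=5070)  # equal-area CRS in meters
tiles = gpd.read_file("tile_grid.gpkg").to_crs(epsg=5070)

# Buffer every flowline by a fixed search distance and keep the tiles that touch it,
# so the detector only sees imagery where water is expected to flow.
search_area = flowlines.buffer(100).unary_union
candidate_tiles = tiles[tiles.intersects(search_area)]
```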

An example of our object detection model’s bounding box predictions.

Training Process & Model Predictions

In the case of object detection, training data samples are pairs of input imagery and their corresponding bounding boxes. We utilized the pre-trained models in the TensorFlow Object Detection API framework to accelerate our training process. These models were initially trained to identify a large range of objects, and we fine-tuned their parameters to specifically identify drops by further training on a collection of drop locations that had been created in our pilot phase.

In the prediction phase, we gave the model input imagery for areas it had never seen, and using the learned weights, the model output bounding box predictions for each drop. After lots of experimentation, SSD MobileNet proved to be our best-performing pre-trained model, which is interesting to note because it has one of the simplest architectures.
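
For context, prediction with a model exported from the TensorFlow 2 Object Detection API looks roughly like the sketch below; the paths are placeholders rather than the project’s actual files.

```python
# A hedged sketch of the prediction phase with a model exported from the
# TensorFlow 2 Object Detection API (e.g., a fine-tuned SSD MobileNet).
# Paths are placeholders.
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_ssd_mobilenet/saved_model")

image = tf.io.decode_png(tf.io.read_file("tile.png"), channels=3)
input_tensor = tf.expand_dims(tf.cast(image, tf.uint8), axis=0)  # shape [1, H, W, 3]

detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()  # model confidence per box
```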

Challenges

The training process for our model was complex because there is not a comprehensive dataset of drops in the United States. To train our model, we had to create a dataset from our own detections and small studies. With each new state we searched, it was useful to add those detections to our dataset and re-train our model because drops can look different across different states.

Because of the strict criteria for a drop, it was important to use a human-in-the-loop approach. After our model made its predictions, we would review those drop detections to filter out any erroneous ones. We could have increased precision by accepting only predictions above a high confidence threshold (e.g., 70%). However, we would have lost potential drops that were predicted with lower confidence. Since drops are rare, we found it worthwhile to capture as many potential drops as possible. After experimentation, we found a threshold that ensured we selected drops with a high degree of accuracy without missing the smaller or more obscure drops that were more difficult for our model to locate.
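
A simplified version of that filtering step might look like the following; the 0.3 cutoff is purely illustrative and is not the threshold we settled on.

```python
import numpy as np

def candidates_for_review(boxes, scores, threshold=0.3):
    """Keep every detection above a deliberately low confidence cutoff.

    A lower threshold favors recall: rare drops predicted with modest confidence
    are still surfaced, and the human-in-the-loop review removes false positives.
    The 0.3 default is illustrative only. `boxes` and `scores` are the arrays
    returned by the detector.
    """
    keep = scores >= threshold
    return boxes[keep], scores[keep]
```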

Stage 2: Metadata Extraction

Once we identified drop locations, we augmented each drop with information on the flow capacity at that location. This data enables estimates of potential power generation at each drop. For each drop found, we estimated the height of the drop, the average months of flow through the drop per year, and the width of the canal.
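
To make the connection to power generation concrete, the sketch below applies the standard hydropower relation P = ρ g Q H η to the kinds of quantities estimated in this stage. The flow rate and turbine efficiency values are illustrative assumptions, since those come from later engineering analysis rather than this pipeline.

```python
# The standard hydropower relation P = rho * g * Q * H * eta, shown only to
# illustrate how the estimated quantities feed a power estimate. The flow rate
# and efficiency values supplied by a caller are assumptions, not project outputs.
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def hydraulic_power_kw(head_m, flow_m3_per_s, efficiency=0.85):
    """Power in kW for a given head (drop height) and flow rate."""
    return RHO * G * flow_m3_per_s * head_m * efficiency / 1000.0

def annual_energy_mwh(head_m, flow_m3_per_s, months_of_flow, efficiency=0.85):
    """Annual energy assuming steady flow during the months with water present."""
    hours_of_flow = months_of_flow / 12.0 * 8760.0
    return hydraulic_power_kw(head_m, flow_m3_per_s, efficiency) * hours_of_flow / 1000.0
```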

Drop Height

To calculate the height of the drop, we analyzed elevation layers from Mapbox. Within each of our bounding boxes, we calculated the drop height by subtracting the minimum elevation from the maximum elevation. We utilized Google Earth Pro’s elevation profile functionality to help validate these calculations. We generated a random sample of our elevation calculations to compare with the ground truth values.
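
A minimal sketch of that calculation, assuming the elevation layer arrives as a Mapbox Terrain-RGB tile (elevation encoded in the red, green, and blue channels); the bounding box is given in pixel coordinates within the tile.

```python
# A minimal sketch of the drop-height calculation, assuming a Mapbox Terrain-RGB
# tile as input (elevation encoded in the R, G, B channels).
import numpy as np
from PIL import Image

def decode_terrain_rgb(tile_path):
    """Decode a Terrain-RGB tile into an elevation array in meters."""
    rgb = np.asarray(Image.open(tile_path).convert("RGB"), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return -10000.0 + (r * 256.0 * 256.0 + g * 256.0 + b) * 0.1

def drop_height_m(elevation, box):
    """Maximum minus minimum elevation inside a detection's pixel bounding box."""
    ymin, xmin, ymax, xmax = box  # pixel coordinates within the tile
    window = elevation[ymin:ymax, xmin:xmax]
    return float(window.max() - window.min())
```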

Mapbox elevation layer for a given tile. Bright yellow is higher elevation, dark blue is lower elevation.

Months of Flow

We utilized Sentinel-2 imagery, specifically Normalized Difference Water Index (NDWI) layer mosaics, to calculate the months of flow for a given drop. Using this satellite data, we counted the average number of months per year in which flow was present within each drop’s bounding box. We experimented to find the minimum NDWI value that indicates the presence of flow. For a given month, if the average NDWI value within the bounding box was greater than this threshold, we assumed that flow was present.
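
A simplified sketch of that calculation, assuming monthly Sentinel-2 mosaics with the green (B03) and near-infrared (B08) reflectance bands available as float arrays; the water threshold of 0.0 is an illustrative assumption, not the value we found by experimentation.

```python
# A simplified sketch of the months-of-flow estimate from monthly Sentinel-2 mosaics.
# NDWI = (Green - NIR) / (Green + NIR); the 0.0 water threshold is illustrative.
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index from green (B03) and NIR (B08) float arrays."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

def months_of_flow(monthly_green, monthly_nir, box, water_threshold=0.0):
    """Count the months whose mean NDWI inside the bounding box exceeds the threshold."""
    ymin, xmin, ymax, xmax = box
    months = 0
    for green, nir in zip(monthly_green, monthly_nir):
        window = ndwi(green, nir)[ymin:ymax, xmin:xmax]
        if window.mean() > water_threshold:
            months += 1
    return months
```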

Width

The width of the canal was also determined from a spectral analysis of Sentinel-2 imagery at the drop site. Specifically, we used NDWI layers to determine which pixels contain water. We created a mask of NDWI values greater than our NDWI water threshold to determine the presence of water within each bounding box. Using this mask, we could measure the width perpendicular to the flowline. We used the ‘Measure Line’ feature of QGIS to validate our width calculations. After generating a random sample of drops, we compared our width calculations against the widths that we manually measured.
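
A simplified sketch of the width estimate appears below. It assumes the flow runs roughly along one image axis, whereas the real measurement is taken perpendicular to the mapped flowline; the pixel size is the 10 m Sentinel-2 resolution for the bands used.

```python
# A simplified sketch of the width estimate: threshold NDWI to a water mask, count
# water pixels across the flow, and convert to meters with the Sentinel-2 pixel size.
import numpy as np

PIXEL_SIZE_M = 10.0  # Sentinel-2 resolution for the B03 and B08 bands

def canal_width_m(ndwi_window, water_threshold=0.0, flow_axis=0):
    """Median number of water pixels per cross-section, converted to meters."""
    mask = ndwi_window > water_threshold
    # Each cross-section perpendicular to the flow contributes one width sample.
    cross_sections = mask.sum(axis=1 - flow_axis)
    widths = cross_sections[cross_sections > 0]
    return float(np.median(widths)) * PIXEL_SIZE_M if widths.size else 0.0
```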

An NDWI mask for a bounding box. Water is marked in yellow.

Lens Collaboration

To further validate these calculations, specifically the months of flow, we used Upstream’s Lens application. In addition to the visualization capabilities of Lens, we used the Analyze Area functionality to calculate the average presence of water across multiple years. We uploaded a random sample of drops to the platform, and we were able to compare the presence of flow with our predicted values.

Analyzing a drop location through the Lens application.

Challenges

Since each of our metadata calculations depended on the location of a drop’s bounding box, the accuracy of our calculations was limited by the accuracy of our bounding boxes. In the case of an imprecise bounding box, we utilized various techniques to reduce error. For example, we experimented with a small buffer that could compensate for a bounding box that did not fully capture the minimum and maximum elevation values. On the other hand, this buffer could decrease the accuracy of the width calculations, because enlarging the analyzed area can inflate the measured width.
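
In practice, that buffer amounts to expanding the detection window by a few pixels before computing statistics, roughly as sketched below; the buffer size would be a tunable, illustrative value.

```python
def buffered_box(box, buffer_px, tile_height, tile_width):
    """Expand a pixel bounding box by a small buffer, clipped to the tile bounds.

    A minimal sketch: the buffer size is an illustrative assumption chosen by
    experimentation, trading better elevation coverage against a larger window.
    """
    ymin, xmin, ymax, xmax = box
    return (max(ymin - buffer_px, 0),
            max(xmin - buffer_px, 0),
            min(ymax + buffer_px, tile_height),
            min(xmax + buffer_px, tile_width))
```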

Future improvements to our CNN could increase the accuracy of the metadata calculations. One idea for this is to implement a segmentation model on top of our initial object detection model that classifies the specific pixels of a drop within its bounding box. This would reduce the dependency on a bounding box, which was the largest source of error.

Conclusions

Overall, this project was a crucial first step in understanding the potential for new hydropower in the Western United States, with the long-term goal of adding to our clean energy resources. After months of work, we found about 3,000 drop locations in the Western United States with an elevation change greater than 0.6 meters and at least one month of flow. The model that we created can now be built upon and expanded to search for drops at a national and international level, and our object detection pipeline can be applied to other use cases in satellite imagery.

Each star represents a drop location. Note the clusters with a higher density.

I’d like to thank Kevin Altamirano for his investment of time and support. His mentoring enabled me to seamlessly apply new machine learning techniques to the domain of satellite imagery. I’d also like to thank the entire Upstream Tech team. I found support in the Lens and HydroForecast teams, and I was able to learn from each team’s work. I am hopeful that each team can benefit from our work as well, specifically our techniques for object detection applied to satellite images and our methods of flow analysis. By building on Upstream Tech's machine learning frameworks and technologies, I was able to meaningfully contribute to this technical project.
