To understand intelligence

A simple framework for understanding and applying AI
Oct 8, 2024

The spectrum of AI technologies is vast, ranging from tried-and-true algorithms to ‘don’t-worry-it’s-still-science-fiction’ Artificial General Intelligence. This vastness makes it challenging to rapidly determine whether an AI technology is a suitable solution for a given problem. One might ask:

  • "Can I trust a Large Language Model (LLM) to analyze historical weather patterns?"
  • "Is this machine learning model reliable enough to detect anomalies in vibration data?"
  • "How can I ensure that a predictive forecast model won’t just perform well against historical data, but also operate accurately in real-world conditions?"

When viewed through the lens of adaptive evolution – the iterative process by which an intelligence, in this case AI, continuously learns and improves based on its environment – we gain a clearer understanding of where and how a particular AI technology can succeed. By questioning the goals, constraints, and inputs that shape the development of AI models, we can better judge their fitness for specific applications. Let’s explore this concept further by examining two opportunities for applying AI in the waterpower sector: LLMs and predictive AI for hydrological forecasting.

At their core, LLMs are a type of Predictive AI model, but they're distinguished by the scale of computational power and the immense amount of data used for learning. When trained, LLMs are measured against their ability to comprehend input (prompts) and produce reasonable and probable output. When used in applications aligned with this objective – such as text transformation and summarization, creative idea seeding, or chatbots – they present a transformative new way to interact with and leverage computers. When used outside the context of their adaptive evolution, such as retrieving facts, performing mathematics, or automating processes prone to bias, Generative AI models are unfit and present pitfalls.
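To make the "reasonable and probable output" point concrete, here is a toy sketch of next-token prediction, the objective LLMs evolve against. The prompt, vocabulary, and scores below are invented for illustration; a real model scores tens of thousands of tokens with learned weights.

```python
import numpy as np

# Toy illustration of next-token prediction. The prompt, vocabulary, and
# scores are assumptions for this example, not output from any real model.
prompt = "The dam release was scheduled for"
vocab = ["Monday", "Tuesday", "3pm", "maintenance", "1947"]
logits = np.array([2.1, 1.9, 1.4, 0.8, -1.0])  # assumed model scores

# Softmax turns scores into a probability distribution over continuations.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The model samples a *plausible* continuation, not a verified fact.
next_token = np.random.choice(vocab, p=probs)
print(prompt, next_token, dict(zip(vocab, probs.round(3))))
```

Because the output is the most plausible continuation rather than a looked-up fact, a confidently stated but unverified date or figure is exactly the failure mode described above.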

HydroForecast, a predictive AI model developed by my company Upstream Tech, focuses on a very different challenge: forecasting streamflow across diverse timescales. When assessing HydroForecast’s deployment in real-world scenarios, operators tend to raise several key concerns:

  • Can the model reliably forecast general conditions across a wide range of watersheds?
  • How accurate is it during extreme weather events, where precision is most critical?
  • Will it perform well under unprecedented conditions, outside the bounds of historical data? (One crude check for this is sketched after the list.)
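One minimal, hypothetical way to quantify that last concern is to measure how often conditions in an evaluation period fall outside the range the model saw during training. The function and synthetic data below are assumptions for illustration, not part of HydroForecast or its evaluation suite.

```python
import numpy as np

def out_of_range_fraction(train_values: np.ndarray, test_values: np.ndarray) -> float:
    """Fraction of evaluation-period values outside the range seen in training.

    A crude univariate proxy for 'unprecedented conditions'; a real analysis
    would consider the joint behavior of precipitation, temperature, snowpack,
    and other drivers, not a single series.
    """
    lo, hi = float(train_values.min()), float(train_values.max())
    outside = (test_values < lo) | (test_values > hi)
    return float(outside.mean())

# Illustrative: synthetic training flows vs. a wetter evaluation period.
rng = np.random.default_rng(42)
train = rng.gamma(shape=2.0, scale=30.0, size=3650)  # ~10 years of daily flows
test = rng.gamma(shape=2.0, scale=45.0, size=365)    # shifted distribution
print(f"{out_of_range_fraction(train, test):.1%} of evaluation days are unprecedented")
```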

HydroForecast has proven itself in competition – including a year-long contest sponsored by the U.S. Bureau of Reclamation, where it outperformed utilities’ in-house forecasting teams, public entrants, and government agencies, winning 23 of 25 categories. But while such a demonstration is valuable in building trust, it is not sufficient to fully earn it. To gain full confidence, operators must understand the objectives and environment in which learning occurred.

HydroForecast’s 10-day foundational model is trained across hundreds of basins with diverse hydrological characteristics, and the objective function is finely tuned to balance overall accuracy with the ability to capture critical metrics for practitioners, such as peak timing and volume during extreme weather events. Input selection is grounded in physical scientific principles. Taking this physics-driven, foundational approach results in a model that is accurate in general conditions, robust to nonstationarity, and reliable during extreme weather events.
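As a sketch of what balancing overall accuracy with peak timing and volume can look like in code, here is a hypothetical composite objective. The terms and weights are assumptions chosen to mirror the description above, not HydroForecast’s actual loss function.

```python
import numpy as np

def composite_objective(obs: np.ndarray, pred: np.ndarray,
                        w_timing: float = 0.2, w_volume: float = 0.2) -> float:
    """Blend overall fit with penalties on peak timing and total volume.

    Hypothetical stand-in for the kind of multi-term objective described in
    the text; the specific terms and weights are illustrative assumptions.
    """
    overall = float(np.mean((pred - obs) ** 2))                     # general accuracy
    timing = abs(int(np.argmax(pred)) - int(np.argmax(obs)))        # days between predicted and observed peaks
    volume = abs(float(pred.sum() - obs.sum())) / float(obs.sum())  # relative total-volume error
    return overall + w_timing * timing + w_volume * volume
```

Optimizing against a blend like this, rather than mean error alone, is one way a forecast model’s "environment" can be shaped to reward the behaviors practitioners actually care about.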

In engineering school, we joked that hardware is just software petrified in silicon. In much the same way, AI represents “learning” frozen in bytes. Whether we are working with a simple algorithm or something on the path to General AI, understanding the learning process – how a model has adapted and evolved its intelligence – is essential to being both better users and better creators of AI.