Finding optimal solutions for practical problems when nothing is certain

// Industrial & Systems Engineering

Tags: Faculty, Optimization, Research

Industrial and Systems Engineering Associate Professor James Luedtke. Photo: Stephanie Precourt.

Making decisions under uncertainty is something we do all the time: When, for example, should I leave to take my child to soccer practice, not knowing how road construction will affect traffic?

But these everyday decisions tend to involve problems of a smaller scale than, say, preparing for the next season of forest fires or protecting multi-million-dollar IT infrastructure from cybersecurity threats.

For that kind of decision support, companies and government agencies alike might seek out Jim Luedtke, an associate professor of industrial and systems engineering at the University of Wisconsin-Madison.

Luedtke is an expert in an area called stochastic optimization.

This subfield of operations research uses mathematical models to help decision-makers sift through many possible actions, each with an associated cost, that together form an overall solution to their problem—say, improved cybersecurity. That solution has to fit a given budget and account for the inherent uncertainty of the many smaller yes-or-no decisions that make up the total.

“If you have a sequence of 300 yes-or-no decisions, the number of possible combinations is larger than the number of atoms in the universe, so you can’t manually sort through all of them,” Luedtke notes. “But the models we build actually can consider all of these, and do so very fast, thanks to a toolbox of mathematical tricks.”
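The arithmetic behind that claim is easy to check directly. The sketch below (a toy illustration, using a commonly cited estimate of roughly 10^80 atoms in the observable universe) confirms that 2^300 combinations dwarfs that number:

```python
# 300 independent yes-or-no decisions give 2**300 possible combinations.
combinations = 2 ** 300

# A commonly cited rough estimate of the number of atoms in the
# observable universe (an assumption for illustration).
atoms_estimate = 10 ** 80

print(combinations > atoms_estimate)   # True
print(len(str(combinations)))          # 91 -- a 91-digit number
```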

The research Luedtke conducts with Laura Albert, who also is an associate professor of industrial and systems engineering at UW-Madison, informs government efforts to defend federal computer systems against cyberattacks. As part of a project funded by the National Science Foundation, for which Albert is the principal investigator, they communicate with federal decision-makers via collaborators at Sandia National Laboratories in Albuquerque.

In their mathematical models, the researchers account for the lack of certainty about any single decision’s impact by having subject matter experts assign a success probability to each mitigation strategy. For example, someone who has worked in cybersecurity for many years may state that the probability of a successful attack after installing a firewall is 35 percent, compared to 65 percent without it. It is better to include some measure of uncertainty in the decision-making process, even if it is imprecise, than to ignore it altogether, Luedtke says.
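In stylized form, that kind of decision might look like the sketch below: a brute-force search over subsets of mitigations that fit a budget, picking the set that minimizes the attack's success probability. All of the costs and probabilities here are hypothetical (only the firewall figures echo the article's example), and the assumption that mitigations act independently is a simplification for illustration—real models of this kind are solved with integer programming, not enumeration:

```python
from itertools import combinations

# Hypothetical mitigations: (name, cost, success probability of an
# attack if only this mitigation is in place).  Illustrative numbers.
mitigations = [
    ("firewall", 40, 0.35),
    ("patching", 25, 0.50),
    ("training", 15, 0.60),
]
baseline = 0.65   # assumed attack success probability with no defenses
budget = 60

# Simplifying assumption: mitigations act independently, so the joint
# success probability scales by each mitigation's ratio to the baseline.
best = (baseline, ())
for r in range(1, len(mitigations) + 1):
    for combo in combinations(mitigations, r):
        cost = sum(c for _, c, _ in combo)
        if cost > budget:
            continue
        prob = baseline
        for _, _, p in combo:
            prob *= p / baseline
        if prob < best[0]:
            best = (prob, tuple(name for name, _, _ in combo))

print(best)   # best affordable set and its attack success probability
```

For these made-up numbers, the firewall-plus-training combination fits the budget and yields the lowest success probability.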

A particular challenge in cybersecurity comes from so-called “adaptive adversaries,” whose attacks are not randomly distributed. Once decision-makers put a certain combination of defense strategies into place, smart hackers may detect what’s already there and cleverly choose their next attack to work around it. That increases the level of complexity the researchers have to build into the model.
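The core of the adaptive-adversary idea is a min-max calculation: the defender commits first, the attacker observes and responds with the most damaging attack, so the defender should pick the defense whose worst case is least bad. The tiny sketch below illustrates this with a made-up damage table (the defenses, attacks, and numbers are all hypothetical):

```python
# Hypothetical damage table: damage[defense][attack] is the harm done
# when the attacker, having observed the defense, launches that attack.
damage = {
    "firewall": {"malware": 2, "phishing": 8},
    "training": {"malware": 9, "phishing": 3},
}

def worst_case(defense):
    # The adaptive adversary picks the attack that hurts this defense most.
    return max(damage[defense].values())

# The defender chooses the defense with the smallest worst case (min-max).
best_defense = min(damage, key=worst_case)
print(best_defense, worst_case(best_defense))   # firewall 8
```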

They also have to consider many different kinds of attacks. “In addition to malicious software, federal agencies worry about security risks in the supply chain for a manufactured IT component, such as an adversary hiding a computer chip in a piece of hardware,” Luedtke says. “In this case, possible defense strategies could include more elaborate quality-control procedures for the manufacturing process, or switching to a more trustworthy supplier, which will likely affect the production costs.”

The researchers’ proposed method for choosing investments provides a sequence of solutions tailored to a given budget, making it easy to update decisions when the financial constraints change from year to year.

In the cybersecurity example, it is difficult to measure the ultimate success of these kinds of decision support systems, Luedtke readily admits. Once a decision has been made, nobody will ever know what may have happened if that decision had not been implemented. But other applications are not hampered by this lack of counterfactuals. Take renewable energy technology, for example.

Here, the uncertainty of the power-generating process is due to the energy source itself: We don’t know ahead of time how much wind will blow or how much sun will shine the next day.

“But when operators of large power generation systems switch to a new optimization tool to balance energy supply and demand, they can directly measure whether their cost decreases or whether power outages and other undesirable outcomes happen less frequently than before,” Luedtke says.

While his research is theoretical, he finds it gratifying that the mathematical models he develops solve practical problems from a wide range of real-world applications. In addition to cybersecurity and renewable energy, forest fires are another intriguing example.

At the beginning of peak wildfire season, forest managers have to allocate resources—bulldozers, firefighters and other equipment—to a large geographical area. Their goal is to distribute those resources as cost-effectively as possible while maximizing the probability of responding early and quickly to a fire outbreak, before damage spreads out of control. In this case, the uncertainty in the model is not knowing when and where actual fires may start throughout the high-risk season.
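A stripped-down version of that allocation problem can be sketched as follows: station a fixed number of crews among regions so as to maximize the probability that a fire breaks out somewhere a crew is already waiting. The regions, fire probabilities, and crew count are all hypothetical, and real models handle far larger instances with stochastic programming rather than enumeration:

```python
from itertools import product

# Hypothetical per-season fire probabilities by region (assume at most
# one fire, so the probabilities describe where it strikes).
fire_prob = {"north": 0.5, "central": 0.3, "south": 0.2}
crews = 2
regions = list(fire_prob)

best = (0.0, None)
# Enumerate every way to station each crew (a region may host several).
for placement in product(regions, repeat=crews):
    covered = set(placement)
    # Probability the fire starts in a region with a crew on hand.
    p_quick_response = sum(fire_prob[r] for r in covered)
    if p_quick_response > best[0]:
        best = (p_quick_response, placement)

print(best)   # covering the two likeliest regions is optimal here
```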

Thanks to the rapid growth of technology and computing power, interest in decision support systems has grown as well. Some of Luedtke’s methods have been implemented in software packages that companies and agencies around the world use to make daily decisions about optimizing their workflows.

“There is a big demand in industry for students who receive this kind of training,” he says. “This is especially true when we’re pushing the frontiers on the types of problems our models and algorithms can solve and the areas to which they apply.”

Author: Silke Schmidt