Battery life is a big problem for our small handheld devices.
It’s not just phones and wearables that greedily gobble power. Gadgets like medical implants, factory controllers and the antilock brakes in our cars all operate on limited energy budgets. All are examples of embedded systems, which, unlike general-purpose computers, are designed to perform specific tasks.
An innovative strategy to reduce power usage by embedded systems while still maintaining acceptable levels of performance has earned Younghyun Kim, an assistant professor of electrical and computer engineering at the University of Wisconsin-Madison, a prestigious CAREER Award from the National Science Foundation.
Kim’s plan hinges on the notion that computers sometimes can be “too perfect,” which wastes resources.
“There are many scenarios, for example, in machine learning, where we don’t necessarily need precise results for intermediate computations,” says Kim. “So, can we reduce the precision and do more computing for the same amount of effort?”
Reducing precision is not itself a new idea—so-called “approximate computing” has been gaining traction in recent years as a strategy to speed up computers without adding new hardware as performance improvements have come up against the limits of advancements in semiconductor technology.
Approximate computing allows for a little bit of “fudge factor” in scenarios where an exact answer is not necessary, for example, in search engine results for open-ended questions with many possible answers. In some applications, just a 5-percent drop in accuracy can offer up to 50 times the energy savings compared to fully perfect computation.
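The trade-off is easy to see in miniature. The sketch below is a toy illustration (not Kim's method or any specific hardware): the same dot product is computed at full 64-bit precision and again in a 16-bit format, the kind of narrower arithmetic that, on hardware with native low-precision units, costs substantially less energy per operation.

```python
import numpy as np

# Toy illustration of approximate computing: the same dot product at
# full (float64) and reduced (float16) precision. On hardware with
# native low-precision arithmetic, the narrower format uses far less
# energy per operation; the cost is a small loss of accuracy.
rng = np.random.default_rng(0)
a = rng.random(1000)
b = rng.random(1000)

exact = np.dot(a, b)                                   # full precision
approx = float(np.dot(a.astype(np.float16), b.astype(np.float16)))

relative_error = abs(exact - approx) / abs(exact)
print(f"exact={exact:.4f}  approx={approx:.4f}  rel_err={relative_error:.2%}")
```

For a search ranking or a neural-network layer, an error of this size is typically invisible in the final answer, which is exactly the slack approximate computing exploits.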
Most engineers apply approximate computing to a rather limited set of hardware, namely, only the processor cores that do the actual computing.
But embedded systems usually consist of several components working together, such as sensors and actuators and communication interfaces in addition to the central computing core.
“In embedded systems, computation is not the only major power consumer,” says Kim. “System components such as memory or devices like cameras and displays also consume a lot of power. Focusing on computation and ignoring the other parts isn’t a valid approach.”
Kim plans to deploy approximation across entire embedded systems, from computation to memory to output.
It’s relatively uncharted territory, and it’s not as simple as combining several individual components, each with slightly reduced precision. The effects of approximation on a whole embedded system may well be greater than the sum of its slightly less precise parts.
“Data flows from one component to another,” says Kim. “How the approximate data is transferred and what’s the implication to the overall quality is not well studied.”
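A hypothetical two-stage pipeline makes the point. In the sketch below (an assumption for illustration, not a model from Kim's project), a sensor reading passes through two approximate stages, each one crudely quantized; the end-to-end error is not simply the sum of the per-stage errors, which is why component-by-component analysis alone can mislead.

```python
def quantize(x, step):
    """Round to the nearest multiple of `step` (a crude approximation)."""
    return round(x / step) * step

# Hypothetical pipeline: an approximate sensor front end feeds an
# approximate compute stage. Each stage introduces its own error,
# and the errors interact as data flows downstream.
reading = 3.14159
stage1 = quantize(reading, 0.1)         # sensor quantized to 0.1 steps
stage2 = quantize(stage1 * 2.0, 0.25)   # compute stage quantized to 0.25

exact_output = reading * 2.0
end_to_end_error = abs(exact_output - stage2)
print(f"exact={exact_output:.5f}  pipeline={stage2}  error={end_to_end_error:.5f}")
```

Here the errors partially cancel, so the pipeline lands closer to the exact answer than a naive sum of the stage errors would predict; with different step sizes they can instead compound. That interaction across components is precisely what Kim says is not yet well studied.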
That’s why Kim will take a holistic approach, first considering the effects of approximation on individual components, and then developing automated design tools to tweak precision across entire embedded systems.
“You can’t just brute force design the system,” says Kim. “We need a tool or a methodology to maximize energy efficiency and quality.” For example, reducing screen brightness is an approximation that could help save power, but if the data feed going into the screen is excessively imprecise, the resulting image might be incomprehensible.
Importantly, Kim’s modeling places the considerations of the eventual human users of embedded systems front and center.
“Human perception is not perfect and it tolerates error,” says Kim. “We’re including humans in our modeling to determine how approximation affects human perceptions, so we can generate the optimal system.”
Author: Sam Million-Weaver