This summer in Belgium, Assistant Professor Andrew Miller and a couple of colleagues wrapped up research on a basic theoretical model that he says is a nice, complete step forward for people who study optimization theory.
The model appears often as a sub-model embedded in many production planning applications. “It never occurs by itself, but there might be several linked instances of this model in a production planning situation in which you have to have a minimum batch size, for example,” explains Miller, who worked at the Center for Operations Research and Econometrics at Catholic University of Louvain.
Another instance in which the basic model, or building block, might occur is within a production setup scenario: If you configure a production line for one item and leave it that way for several shifts, you would incur both setup and startup costs only for the initial shift. “In these kinds of cases, very often the way we model them uses these small models, or building blocks,” he says.
To help solve these basic models more effectively, Miller and his colleagues looked for new ways to generate valid inequalities, which lead to better linear programming relaxations. One technique that has gained popularity recently is combining mixed-integer rounding inequalities, which, Miller says, researchers use to find valid inequalities that make the linear programming relaxation of a mixed-integer programming model stronger.
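As a rough illustration of the basic mixed-integer rounding idea (a textbook form, not the authors' specific construction), consider the simple mixed-integer set

\[
X = \{ (x, y) \in \mathbb{R}_{+} \times \mathbb{Z} : x + y \ge b \}, \qquad f = b - \lfloor b \rfloor > 0 .
\]

The mixed-integer rounding (MIR) inequality

\[
x \ \ge \ f \, (\lceil b \rceil - y)
\]

is valid for every point of \(X\), yet it cuts off fractional points of the linear programming relaxation. For example, with \(b = 2.5\) (so \(f = 0.5\)) the relaxation point \((x, y) = (0,\, 2.5)\) satisfies \(x + y \ge b\) but violates \(0 \ge 0.5\,(3 - 2.5) = 0.25\), so adding the inequality strengthens the relaxation.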
“In mixed-integer programming, some of the decisions are binary, or yes-no, or they’re such that you can’t do things in divisible quantities,” he says. “You often have to make decisions regarding indivisible quantities, like the number of batches. You can’t produce a half batch; you have to produce two or three or four. And so if you relax that restriction, you get a linear programming model.”
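The effect of relaxing an integrality restriction can be sketched with a toy batch-sizing computation (an illustrative example only, not the model studied in the paper):

```python
import math

# Toy instance (hypothetical numbers): 25 units of demand must be met
# using batches of 10 units, and the batch count must be an integer.
demand = 25
batch_size = 10

# Linear programming relaxation: drop the integrality restriction,
# so fractional batches are allowed.
lp_batches = demand / batch_size          # 2.5 batches

# Mixed-integer requirement: half batches are impossible, so the
# feasible answer rounds up to the next whole batch.
mip_batches = math.ceil(demand / batch_size)   # 3 batches

print(lp_batches, mip_batches)  # 2.5 3
```

The gap between 2.5 and 3 is exactly the kind of slack that stronger valid inequalities aim to close in the relaxation before an integer solution is sought.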
The group discovered how to generate valid inequalities for the model. The researchers also showed that their approach generates all of the valid inequalities the model needs.
From a theoretical perspective, their linear programming relaxations are better than what’s been done before, although they have yet to test them computationally, says Miller. “The theoretical results that show that we have done everything that we can do for this model also suggest that if you want to do something else — if you need a better linear programming relaxation — then you need to take a different approach than we do because you can’t go any farther with it,” he says.