Solution Methods · First introduced 7 Apr 2026

Lagrangian Relaxation

Assign the right price to a constraint violation and you reveal which constraints are truly binding — and which are merely expensive. — Economic interpretation of Lagrangian duality

Lagrangian relaxation moves hard coupling constraints into the objective with a penalty multiplier, creating a decomposable problem that provides a lower bound and often near-optimal solutions.

Core idea

In many large optimisation problems, a few constraints couple otherwise independent subproblems. A manufacturing plant must balance production across machines. A power grid must balance supply and demand. Lagrangian relaxation exploits this structure by moving the coupling constraint into the objective function, weighted by a price called a Lagrange multiplier. The relaxed problem decomposes into independent subproblems — each generator, each machine, each unit can optimise separately. The multipliers are then adjusted iteratively (typically with the subgradient method) to tighten the lower bound until it converges.

The key insight: solving the relaxed problem provides a lower bound on the true optimum. Adjusting multipliers improves the bound. When the relaxation is "tight" (no duality gap), the lower bound equals the true optimum. Even when loose, the near-optimal primal solutions recovered from the relaxed problem are often useful in practice.
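The lower-bound claim can be written out in a few lines. The sketch below uses a generic minimisation problem with equality constraints; the notation (f, A, b, X) is introduced here for illustration, not taken from the text above.

```latex
% Relax Ax = b in:   min f(x)   s.t.   Ax = b,  x \in X
% Dual function (the relaxed, decomposable problem):
L(\lambda) = \min_{x \in X} \; f(x) + \lambda^{\top}(b - Ax)

% For any multiplier \lambda and any feasible x^* (so Ax^* = b),
% the minimum over all of X is no larger than the value at x^*:
L(\lambda) \le f(x^*) + \lambda^{\top}(b - Ax^*) = f(x^*)

% Hence every L(\lambda) is a lower bound, and the best bound is the dual:
\max_{\lambda} L(\lambda) \le f(x^*)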


Concrete example: unit commitment in power systems
Scenario — energy systems

Problem: Choose which generators g to commit (on/off) and their output level p_g to minimise total cost, subject to demand being met exactly: sum of p_g = demand.

Without relaxation: This is a large MIP (each generator is a binary decision; the demand constraint couples all generators). Solving requires heavy branching.

Lagrangian relaxation: Relax the demand constraint with multiplier λ (a price per unit of demand imbalance). Each generator's problem becomes independent: should I commit, and at what level, given the price signal λ? Generator g optimises: minimise (operating cost of g) - λ × (contribution of g to meeting demand). For a system of 100 generators, that is 100 small independent MIPs instead of one coupled MIP.
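The per-generator subproblem is small enough to solve in closed form when the cost is linear. A minimal sketch, with invented generator parameters; the function name and data are assumptions for this example, not part of any standard library:

```python
def generator_subproblem(fixed_cost, marginal_cost, p_min, p_max, lam):
    """Minimise fixed_cost*u + marginal_cost*p - lam*p over u in {0,1},
    p in [p_min*u, p_max*u]: the single-generator pricing problem."""
    # The objective is linear in p, so the optimum sits at a bound:
    # run flat out if the price beats the marginal cost, else at minimum.
    p = p_max if lam > marginal_cost else p_min
    cost_on = fixed_cost + marginal_cost * p - lam * p
    # Staying off costs nothing; commit only if running has negative value.
    if cost_on < 0:
        return 1, p, cost_on   # commit at level p
    return 0, 0.0, 0.0         # stay off

# At a price of 25/unit, this (made-up) generator commits at full output.
u, p, value = generator_subproblem(100.0, 20.0, 10.0, 50.0, lam=25.0)
```

Each such call is independent of every other generator, which is what makes the relaxed problem trivially parallel.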

Iteration: Solve each generator's MIP in parallel. If total output falls short of demand, increase λ (a higher price makes generators more willing to run). If total output exceeds demand, decrease λ. Each iteration solves the independent subproblems and updates λ using subgradient information. After ~20 iterations, the lower bound and primal solution converge to near-optimality.
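The whole loop fits in a few lines. A self-contained sketch with three invented generators and a diminishing step size; the data, step rule, and iteration count are illustrative assumptions, not a production scheme:

```python
def subproblem(fc, mc, pmin, pmax, lam):
    # Single-generator problem: min fc*u + (mc - lam)*p, u in {0,1}.
    p = pmax if lam > mc else pmin
    val = fc + (mc - lam) * p
    return (1, p, val) if val < 0 else (0, 0.0, 0.0)

generators = [  # (fixed_cost, marginal_cost, p_min, p_max) -- made up
    (100.0, 20.0, 10.0, 50.0),
    (200.0, 25.0, 20.0, 80.0),
    (50.0, 40.0, 5.0, 30.0),
]
demand = 120.0
lam = 0.0
for k in range(50):
    results = [subproblem(*g, lam) for g in generators]
    supply = sum(p for _, p, _ in results)
    # Dual value = sum of subproblem values + lam * demand: a lower bound.
    lower_bound = sum(v for _, _, v in results) + lam * demand
    mismatch = demand - supply           # subgradient of the dual function
    lam += (1.0 / (k + 1)) * mismatch    # short supply -> raise the price
```

The multiplier typically oscillates around the marginal unit's break-even price; the lower bound it produces can sit strictly below the true optimum, which is the duality gap discussed next.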


Why practitioners confuse this with getting the optimal solution
Lower bounds are not solutions; feasibility requires a heuristic

Common mistake: "I solved the Lagrangian relaxation and got a solution, so I'm done."

Reality: The Lagrangian relaxation provides a lower bound (a certificate that the optimum is at least this good). The "solution" obtained by solving the relaxed subproblems independently is not necessarily feasible in the original problem. In unit commitment: the multiplier λ does not guarantee that the sum of p_g equals demand exactly, only that each subproblem is individually optimal at that price.

To recover a feasible solution, a primal heuristic is needed: adjust the relaxed solution (e.g., ramp generator outputs up or down to match demand exactly) while trying to preserve near-optimality. The overall procedure then yields both a lower bound (from the Lagrangian) and an upper bound (from the feasible primal solution). The gap between the bounds measures solution quality. Convergence of the multipliers does not guarantee that the gap closes without careful primal recovery.
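One simple repair, assuming the commitment from the relaxation is kept fixed: fill the committed units in merit order (cheapest marginal cost first) until demand is met exactly. The function and data below are a hypothetical sketch, not the only possible heuristic:

```python
def repair(committed, demand):
    """committed: list of (marginal_cost, p_min, p_max) for units that are on.
    Returns output levels summing exactly to demand, or None if the
    commitment itself cannot cover demand (then units must be added/dropped)."""
    lo = sum(pmin for _, pmin, _ in committed)
    hi = sum(pmax for _, _, pmax in committed)
    if not (lo <= demand <= hi):
        return None  # repair would need to change the commitment, not just outputs
    outputs = [pmin for _, pmin, _ in committed]
    remaining = demand - lo
    # Merit order: raise the cheapest committed units first.
    for i in sorted(range(len(committed)), key=lambda i: committed[i][0]):
        add = min(committed[i][2] - outputs[i], remaining)
        outputs[i] += add
        remaining -= add
    return outputs

# Two committed (made-up) units covering a demand of 120 exactly.
plan = repair([(20.0, 10.0, 50.0), (25.0, 20.0, 80.0)], demand=120.0)
```

The cost of this repaired plan is the upper bound; comparing it with the Lagrangian lower bound gives the quality gap described above.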


Where this shows up in practice
Energy
Unit commitment with demand balance constraints; hydro scheduling with water balance; transmission-constrained dispatch.
Supply Chain
Production planning with capacity constraints; inventory balancing across sites; multi-period scheduling.
Transport
Vehicle routing with fleet capacity constraints; driver scheduling with time window coupling.
Manufacturing
Job scheduling on parallel machines; resource-constrained project scheduling.

One-line version

Lagrangian relaxation gives a lower bound and often a near-feasible primal solution — but feasibility is not guaranteed. A heuristic repair step is almost always required to convert the dual solution into a usable plan.


Related concepts