🎧 The Brief

Industry Signals

Industry Signal Supply Chain Semiconductors Apr 2026

Kinaxis Maestro + NVIDIA cuOpt: 12× Supply-Chain Planning Speedup on Semiconductor Model 🔗

Foundation: Semiconductor supply chains are among the most computationally demanding planning environments — long lead times, hundreds of BOM (bill of materials) levels, capacity constraints across dozens of manufacturing stages, and demand volatility that requires constant re-planning. Running enough scenarios to cover all meaningful combinations has historically required hours of compute time, which forces planners to use coarser approximations and accept plan staleness.

Kinaxis has integrated NVIDIA cuOpt into its Maestro supply-chain planning platform, replacing CPU-bound optimisation loops with GPU-accelerated parallel solving. On a validated semiconductor planning model comprising nearly 50 million decision variables, spanning over 40,000 SKUs across a six-quarter daily planning horizon, the combined system reduces end-to-end calculation time from over three hours to 17 minutes — a 12× speedup. At 250,000 scenarios executed per month across 400+ global enterprise customers, the practical effect is that planners can re-optimise intra-day rather than overnight. Kinaxis was named a Leader in the 2026 Gartner Magic Quadrant for Discrete and Process Industries, with the cuOpt integration cited as a differentiating capability.

Why it matters: GPU-accelerated optimisation via cuOpt (NVIDIA's GPU-accelerated routing and planning optimisation library, based on parallel metaheuristic search) is moving from proof-of-concept to named enterprise deployments at scale. The 12× speedup is not a synthetic benchmark — it is reported on a real semiconductor planning model running in production across a major vendor’s customer base. For practitioners evaluating whether GPU-native solvers justify infrastructure cost, this provides a concrete reference point: the crossover is visible at the scale of a mid-complexity semiconductor planning problem (~250K scenario evaluations/month).
Kinaxis IR · Press Release · Mar 2026
Industry Signal Energy Award Winner Apr 2026

AFRY BID3 + FICO Xpress: Energy Market Bidding Optimisation Wins 2026 FICO Decision Award (ESG Champion) 🔗

Foundation: Energy market participants — utilities, independent power producers, and aggregators — must submit binding bids into day-ahead and intra-day electricity markets hours before the actual delivery period. The bidding problem is a unit commitment variant: which generators to commit, at what output levels, across which market intervals, subject to ramp rates, minimum up/down times, regulatory constraints, and uncertain prices. Getting this wrong means either leaving revenue on the table or accepting unprofitable commitments.

AFRY, the global engineering and design consultancy, built BID3 as a FICO Xpress-based optimisation platform for energy market bidding, portfolio optimisation, and ESG-aligned dispatch planning. The platform is deployed across 30 organisations globally, with 25% year-on-year user growth recorded in 2025. AFRY and FICO Xpress were named the 2026 FICO Decision Award winner in the ESG Champion category, recognising measurable sustainability impact achieved through optimisation. The award covers both the platform’s commercial bidding outcomes and its capability to embed carbon and renewable targets directly into the optimisation objective.

Why it matters: This is one of the clearest published examples of MILP-based energy optimisation being used at scale across multiple organisations simultaneously (30 named clients) rather than a single bespoke deployment. The ESG Champion award designation signals that the platform’s objective function explicitly incorporates carbon and sustainability targets alongside revenue — a pattern that practitioners in regulated energy markets will increasingly be required to adopt as carbon pricing mechanisms mature. FICO Xpress as the named solver provides a reproducible architecture anchor.
IT Brief Asia · AFRY BID3 ESG Champion · Jan 2026
Industry Signal Financial Services Award Winner Apr 2026

Erste Group + FICO Xpress: Retail Lending Portfolio Optimisation Wins 2026 FICO Decision Award (AI/ML/Optimisation) 🔗

Foundation: Retail lending optimisation is a high-stakes pricing and allocation problem: a bank must set interest rates and credit limits across a portfolio of loan applicants to maximise revenue and expected profit while staying within regulatory capital constraints, credit-risk appetite bounds, and competitive market pricing. The decision space is large (millions of customers, hundreds of product variants), the constraints are hard (Basel capital ratios, internal risk limits), and the objective is non-linear (probability of default interacts with pricing). MILP formulations with risk-weighted objective terms are the standard industrial approach.
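The pricing-allocation structure described above can be sketched in miniature. This is a hypothetical toy, not Erste Group's model: the real deployment uses a FICO Xpress MILP, while this sketch simply brute-forces discrete rate tiers per segment under a single stand-in capital constraint, to make the PD/pricing interaction and the capital cap concrete. All numbers (exposures, rates, PDs, risk weights, LGD) are invented.

```python
from itertools import product

# Hypothetical toy numbers, not Erste Group's model. Each segment offers
# discrete rate tiers; a higher rate earns more margin but attracts riskier
# borrowers (higher probability of default, higher regulatory risk weight).
segments = [
    # (name, exposure, [(rate, prob_default, risk_weight), ...])
    ("mortgage", 100.0, [(0.030, 0.01, 0.35), (0.045, 0.02, 0.50), (0.060, 0.04, 0.75)]),
    ("cash",      40.0, [(0.070, 0.05, 0.75), (0.090, 0.08, 1.00), (0.110, 0.12, 1.25)]),
]
RWA_CAP = 80.0   # stand-in for a Basel risk-weighted-assets limit
LGD = 0.45       # loss given default

def expected_profit(exposure, rate, pd):
    # Margin earned on surviving loans minus expected credit loss:
    # this is the PD/pricing interaction that makes the objective non-linear.
    return exposure * (rate * (1 - pd) - pd * LGD)

best_value, best_choice = float("-inf"), None
for choice in product(*(range(len(tiers)) for _, _, tiers in segments)):
    total_profit, total_rwa = 0.0, 0.0
    for (name, exposure, tiers), idx in zip(segments, choice):
        rate, pd, rw = tiers[idx]
        total_profit += expected_profit(exposure, rate, pd)
        total_rwa += exposure * rw
    if total_rwa <= RWA_CAP and total_profit > best_value:
        best_value, best_choice = total_profit, choice

# The capital cap forces a trade-off: the highest-margin tiers are infeasible.
print(best_choice, round(best_value, 2))
```

At realistic scale (millions of customers, hundreds of product variants) enumeration is hopeless, which is exactly why the production system formulates this as a MILP and hands it to a solver.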

Erste Group, one of Central and Eastern Europe’s largest retail banks, deployed FICO Xpress for loan pricing and risk management optimisation across its retail lending portfolio. The platform applies MILP-based decision models to set loan pricing across product categories subject to regulatory capital requirements, credit-risk constraints, and profitability targets. The deployment delivered a 22% increase in lending profitability alongside a significant reduction in manual decision-making exceptions, enabling individualised pricing at scale across mortgages, cash loans, and other retail products. Erste Group was co-awarded the 2026 FICO Decision Award in the AI/ML/Optimisation category alongside Inditex, recognising measurable optimisation outcomes at the scale of a major European retail bank serving millions of customers across multiple markets.

Why it matters: Named MILP-based lending optimisation deployments at major European banks are rare in public literature; most are described only generically in regulatory filings or vendor case studies without outcome metrics. The award designation confirms measurable ROI sufficient to pass FICO’s submission review, providing practitioners in financial services with a concrete architecture reference: Xpress-based MILP over a risk-adjusted pricing objective, deployed at Erste Group scale. For practitioners at other retail banks, this is a reproducible benchmark for the minimum viable complexity of a lending optimisation model that produces award-level outcomes.
FintechLaunches · Erste Group FICO deployment · 2026

Tools Watchlist

1 release · past 14 days

Solver and modelling-framework releases that passed all four qualification tests.

NVIDIA cuOpt 26.04 🔗

NVIDIA · released Apr 2026 · Release notes
Version: 26.04 — April 2026
What changed:
  • New Mixed-Integer Programming (MIP) cutting planes: clique cuts and implied bounds cuts added before branching
  • FP32 and mixed-precision support in the Primal-Dual Large-scale Parallel (PDLP) solver, plus batch PDLP in reliability branching
  • No-relaxation heuristics run before presolve; Python 3.14 support added
Impact: GPU-accelerated MIP solves benefit from stronger cuts before branching, potentially reducing the search tree on linear programming (LP) relaxation-heavy models. Mixed-precision PDLP enables faster LP solve steps on hardware that supports lower-precision arithmetic.
Migration: Drop-in upgrade; cuOpt is open-source under Apache 2.0. Python 3.14 support is additive, no API changes.

Research Papers

Research Paper Transport Healthcare 📋 Case Study arXiv · 17 Apr 2026

Exact algorithm solves visit-ordered vehicle-routing instances, beating all benchmark methods across three deployment domains 🔗

Foundation: A home-health nurse visits several patients each day; for some patients, one appointment must finish before the next can start (a dressing change must precede a shower, for instance). When those visit-order requirements span different nurses and routes, the problem becomes a Vehicle Routing Problem (VRP: the optimisation problem of finding the lowest-cost set of routes for a fleet of vehicles to serve a set of customers) with temporal dependency constraints, because the scheduler must decide not just which nurse goes where, but in what sequence across the whole fleet. Fragment-based route construction, which builds routes from short valid sub-sequences rather than individual stops, makes it possible to enumerate only time-feasible orderings from the start.

Van Montfort, Leitner, and Paradiso propose a price-cut-and-enumerate algorithm that alternates column generation (a technique that builds the optimal solution incrementally by generating only the profitable columns, here route fragments, rather than enumerating all possibilities upfront) with row generation (adding valid inequalities that cut infeasible inter-route orderings) before branching. Tested on three benchmark domains — home healthcare, aircraft turnaround, and technician scheduling — the algorithm outperforms all competing exact methods. The case study below details the home healthcare formulation, the algorithm structure, and the practitioner implications.
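Why restricting attention to precedence-feasible orderings helps can be illustrated with a deliberately naive sketch. This is not the paper's algorithm (the authors build routes from time-feasible fragments inside a branch-price-and-cut scheme); the toy below just filters full permutations against precedence pairs to show how quickly the feasible space shrinks.

```python
from itertools import permutations

def feasible_orderings(visits, precedences):
    """Enumerate visit orderings that respect precedence pairs.

    precedences: set of (a, b) pairs meaning visit a must finish
    before visit b starts, possibly on different routes.
    """
    valid = []
    for order in permutations(visits):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in precedences):
            valid.append(order)
    return valid

# Four visits: dressing change (d) must precede shower (s),
# and assessment (a) must precede medication (m).
orders = feasible_orderings(["d", "s", "a", "m"],
                            {("d", "s"), ("a", "m")})
print(len(orders))  # 6 of the 24 permutations survive
```

Here two independent precedence pairs cut 24 permutations to 6; fragment-based construction goes further by never generating the infeasible orderings in the first place.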

arXiv:2604.16064 · 17 Apr 2026 📋 Case Study below ↓
📋 Case Study Source: paper benchmark — arXiv:2604.16064 (Apr 2026)
Takeaway
Fragment-based formulations find exact optimal routes for visit-ordered instances where classical arc-flow models run out of memory.
Research Paper Energy Manufacturing arXiv · 16 Apr 2026

Upfront job ordering outperforms joint optimisation under uncertain time-of-use start-time costs 🔗

Foundation: In a factory running on time-of-use electricity tariffs, the cost of running a machine at 10 am differs from the cost at 3 pm, and those prices change day to day. Deciding the order in which jobs run and when each starts therefore depends on prices that are unknown when the plan is made. The budgeted uncertainty set is the modelling structure for such scenarios: it limits how many prices can simultaneously be adversarial, controlled by a budget Γ (a count set in advance) reflecting that not every period goes badly at once.

Rodríguez-Ballesteros and co-authors study single-machine scheduling where job costs shift throughout the day: energy or labour rates change by time slot. They model price uncertainty via a budgeted uncertainty set, limiting how many periods can be adversarial at once, and split planning into two stages: job ordering is fixed upfront, and start times are set after real costs are revealed. The paper proves the robust counterpart is computationally hard and develops mixed-integer models (optimisation models that combine continuous and binary or integer decision variables) for both continuous and discrete budget variants.

Why it matters: Start-time dependent costs appear in any operation paying time-of-use electricity rates or shift-based labour premiums. This paper shows that fixing the job sequence before costs are known, then setting start times once actual prices are revealed, produces better outcomes than treating ordering and timing as a single joint decision under uncertainty. The advantage holds across both continuous and discrete price-budget models.
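The second stage of the two-stage structure (start times chosen after prices are revealed, with the sequence already committed) can be sketched with a small dynamic program. This is an illustrative simplification, not the authors' formulation: each job is assumed to occupy exactly one slot, and a job's cost is its duration times the revealed slot price.

```python
def best_start_times(durations, prices):
    """Second-stage recourse sketch: the job sequence is fixed, so jobs
    must occupy strictly increasing slots. Once per-slot prices are
    revealed, choose slots minimising sum(duration * slot price)."""
    n, T = len(durations), len(prices)
    INF = float("inf")
    # dp[j] = min cost of placing the first j jobs using slots scanned so far
    dp = [0.0] + [INF] * n
    for t in range(T):
        # iterate j downwards so slot t is assigned to at most one job
        for j in range(min(t + 1, n), 0, -1):
            cand = dp[j - 1] + durations[j - 1] * prices[t]
            if cand < dp[j]:
                dp[j] = cand
    return dp[n]

# Three jobs in fixed order, five revealed time-of-use prices
print(best_start_times([3, 1, 2], [5, 2, 4, 1, 3]))  # → 12.0
```

The point of the paper's result is that this cheap recourse step, applied after uncertainty resolves, is what lets the upfront-ordering policy beat a monolithic here-and-now decision.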
arXiv:2604.15161 · 16 Apr 2026

Term of the Day

Budgeted Uncertainty Set

"In order to allow a finer tuning of the robustness of our model, we introduce the notion of a budget of uncertainty." — Dimitris Bertsimas & Melvyn Sim, The Price of Robustness, Operations Research (2004)

A budgeted uncertainty set limits how many uncertain parameters can simultaneously take their worst-case values. Instead of assuming every uncertain number will be adversarial at once (a box uncertainty set, which gives overly conservative, expensive solutions), you set a budget Γ (a count you choose in advance) that caps how many parameters can deviate together. The model then optimises against the worst scenario within that budget, producing solutions robust to realistic disruptions rather than improbable simultaneous catastrophes.

[Figure: in (δ₁, δ₂) deviation space, the box uncertainty set allows both parameters to be adversarial simultaneously; a budget of Γ = 1 cuts the corners off the box, permitting at most one adversarial deviation at a time.]

A concrete example

A factory schedules eight jobs on a single machine. Electricity prices vary by time slot and are uncertain: each slot has a nominal (expected) price and a maximum deviation.

Without a budget (box uncertainty): the scheduler assumes every slot can be at its worst price simultaneously. The resulting plan minimises cost under the most adversarial possible day, with all eight prices at their peaks. This plan is rarely violated, but it is systematically too conservative: it accepts high scheduling costs to guard against a scenario that almost never occurs.

With budget Γ = 2 (budgeted uncertainty): the scheduler assumes at most two time slots will face their worst-case price in any single day. The plan is optimised against the worst pair of bad-price slots. It costs less to implement, handles any realistic two-slot disruption, and is violated only if three or more slots simultaneously hit their worst case, an event judged outside the budget.
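For a fixed plan, the worst case under a budget Γ is easy to evaluate: every slot pays its nominal price, and the adversary spends the budget on the largest deviations. A minimal sketch of the eight-slot example (illustrative numbers; note the full Bertsimas-Sim model optimises the plan against this inner maximisation, and permits one fractional deviation when Γ is non-integer):

```python
def worst_case_cost(nominal, deviation, gamma):
    """Worst-case cost of a fixed plan under budget gamma: all slots pay
    their nominal price, and at most gamma slots are pushed to their
    maximum deviation. Since each deviation is fully on or off here,
    the adversary always picks the gamma largest ones."""
    return sum(nominal) + sum(sorted(deviation, reverse=True)[:gamma])

# Illustrative eight-slot day (invented numbers)
nominal = [10, 12, 9, 11, 10, 13, 9, 12]   # expected price per slot
deviation = [4, 6, 3, 5, 2, 7, 3, 4]       # maximum adverse move per slot

print(worst_case_cost(nominal, deviation, gamma=8))  # box: 86 + 34 = 120
print(worst_case_cost(nominal, deviation, gamma=2))  # budget: 86 + 7 + 6 = 99
```

The gap between 120 and 99 is the conservatism the budget buys back: the Γ = 2 plan guards against any two-slot disruption without paying for all eight going wrong at once.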

The key insight: Γ is not a probability. It is a count. No distributional assumption is made about which slots will be adversarial or how often. The budget is a modelling choice the practitioner makes based on operational experience of how many simultaneous disruptions are plausible.

Why practitioners misread this

Misread 1: treating Γ as a probability. Setting Γ = 2 does not mean each parameter has a 2% chance of deviating. It means at most 2 parameters can deviate at once, with no distributional assumption at all. The Bertsimas-Sim model is entirely distribution-free. A practitioner who calibrates Γ by estimating a probability has confused two different modelling frameworks: the budgeted set belongs to robust optimisation, not stochastic programming.

Misread 2: assuming the budget applies only to independent uncertainties. The model is valid whether uncertainties are correlated or not. The budget limits the count of simultaneously adversarial parameters regardless of their joint behaviour. What it does not capture is gradations of adversarial-ness: each parameter is either at its worst-case deviation or at its nominal value, with no in-between. If continuous partial deviations matter, an ellipsoidal uncertainty set (a different shape, based on covariance structure) may fit better.

Misread 3: confusing the budget with the uncertainty set shape. The budgeted uncertainty set is one specific shape out of many: box (all combinations), ellipsoidal (covariance-based), polyhedral (linear constraints on deviations), and budget (the L1-ball in normalised deviation space). Choosing Γ implicitly chooses a specific polytope. Two practitioners using different Γ values are using geometrically different sets, not just different conservatism levels within the same shape.
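The shape distinction in Misread 3 can be made concrete with membership tests on normalised deviations (a sketch; the δᵢ/δ̂ᵢ normalisation follows the Bertsimas-Sim convention, and Ω is the ellipsoid radius):

```python
import math

def in_box(dev, max_dev):
    """Box: every normalised deviation lies in [-1, 1] independently."""
    return all(abs(d) <= m for d, m in zip(dev, max_dev))

def in_budget(dev, max_dev, gamma):
    """Budget: the box plus an L1 cap, normalised deviations sum to at most gamma."""
    z = [abs(d) / m for d, m in zip(dev, max_dev)]
    return all(zi <= 1 for zi in z) and sum(z) <= gamma

def in_ellipsoid(dev, max_dev, omega):
    """Ellipsoid: L2 norm of normalised deviations at most omega."""
    return math.sqrt(sum((d / m) ** 2 for d, m in zip(dev, max_dev))) <= omega

max_dev = [4.0, 6.0]
corner = [4.0, 6.0]                        # both parameters fully adversarial
print(in_box(corner, max_dev))             # True: the box keeps its corners
print(in_budget(corner, max_dev, 1))       # False: gamma = 1 cuts the corner
print(in_budget([4.0, 0.0], max_dev, 1))   # True: one full deviation fits
```

Changing Γ really does change the polytope: the corner point is inside the Γ = 2 set but outside the Γ = 1 set, so two practitioners with different budgets are hedging against geometrically different scenario sets.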

Where this shows up in practice

Energy scheduling: single-machine or job-shop models with time-of-use electricity tariffs protect against at most Γ price intervals being adversarial in a single day.

Supply chain: demand fulfilment models limit the number of product families facing simultaneously poor demand, rather than assuming all products underperform at once.

Finance: portfolio construction limits how many asset returns can simultaneously be at their worst, without assuming a joint return distribution.

Nurse and shift scheduling: staffing models assume at most Γ wards will simultaneously face unexpected high patient volumes.

The diagnostic question: how many of these uncertain quantities can plausibly be at their worst at the same time? That count is your Γ. If you can answer it, you can use a budgeted uncertainty set.

Daily Synthesis
  • Van Montfort et al. (VRP): When a routing problem requires some visits to precede others across different vehicles, building routes from pre-validated short sub-sequences (fragments) makes exact solution tractable where classical arc-flow models run out of memory. The price-cut-and-enumerate algorithm outperforms all benchmark exact methods on home healthcare, aircraft turnaround, and technician scheduling instances.
  • Rodríguez-Ballesteros et al.: Committing the job sequence before knowing actual start-time costs, then setting start times once real prices are revealed, produces better outcomes than treating ordering and timing as a single joint decision under uncertainty. The advantage holds across both continuous and discrete price-uncertainty models.
  • NVIDIA cuOpt 26.04: adds clique cuts and implied bounds cuts to its Mixed-Integer Programming (MIP) engine, and extends the Primal-Dual Large-scale Parallel (PDLP) solver to mixed-precision and batch modes. GPU-accelerated MIP solvers can now apply more cutting planes before branching, which typically reduces the search tree.