← Radar Archive

Industry Signals

Industry · Supply Chain · AI Conference project44 · Apr 8–9, 2026 (Today)

project44 Launches decision44: Inaugural AI Supply Chain Execution Conference Opens Today in Chicago

project44, the global supply chain visibility and decision intelligence leader, today opens decision44 -- its inaugural flagship conference -- at Convene at Willis Tower in Chicago (April 8-9), followed by a second edition at the Okura Hotel in Amsterdam (April 15-16). The event convenes supply chain, logistics, and transportation executives around one premise: artificial intelligence (AI)-powered decision intelligence is the infrastructure layer connecting reactive operations to predictive execution. Keynotes include Kevin O'Leary (Chairman, O'Leary Ventures) on capital allocation under uncertainty, and Pierre Yared (Acting Chairman, Council of Economic Advisers) on global trade dynamics and tariff-driven supply chain restructuring.

Why it matters: decision44 signals project44's move from data provider to decision platform -- from showing where your freight is to prescribing what to do about it. The conference framing -- reactive to predictive, connect-see-act-automate -- positions decision intelligence as strategic infrastructure, not an analytics bolt-on. As tariff disruption and geopolitical volatility accelerate supply chain fragmentation, the organisations at decision44 are stress-testing whether their planning systems are optimised for expected conditions or resilient to adversarial ones.

→ decision44 Chicago    → Press Release
Industry · Operations Research (OR) · Applied Impact INFORMS · Edelman Gala Apr 13, 2026

2026 Franz Edelman Award Finalists Revealed: Chewy, NVIDIA, Google, Microsoft, ECCO Shoe, and India's Anna Chakra

INFORMS has named six finalists for the 2026 Franz Edelman Award, the world's leading honour in analytics, operations research, and management science. The finalists are: Chewy (supply chain replenishment), NVIDIA (carbon-aware high-performance computing scheduling), Google, Microsoft (cloud fulfilment orchestration), ECCO Shoe, and India's Department of Food and Public Distribution for "Anna Chakra" -- an OR-based decision support system that optimises state-specific logistics across India's Public Distribution System at national scale. The winner will be announced at the Edelman Gala on April 13 at the INFORMS Analytics+ Conference in National Harbor, MD.

Why it matters: The Edelman Award is the highest signal of where OR is delivering provable, large-scale impact. The 2026 finalists span supply chain replenishment optimisation, graphics processing unit (GPU)-accelerated carbon scheduling, cloud fulfilment, and public sector food distribution at national scale -- a breadth that confirms decision intelligence is no longer a specialist tool but the operational backbone of organisations from hyperscalers to governments. Historically, Edelman-winning approaches become industry standards within 3-5 years.

→ INFORMS Edelman Finalists    → INFORMS Analytics+ 2026
Industry · Manufacturing · Autonomous Operations Rockwell Automation · Apr 7 announcement / Hannover Messe Apr 20–24

Rockwell Automation Announces Autonomous Industrial Operations Platform for Hannover Messe 2026

Rockwell Automation published its Hannover Messe 2026 showcase (April 20-24, Hannover) on April 7: a unified demonstration of the journey from conventional automation to autonomous, outcome-driven operations. The platform combines Emulate3D digital twin simulation, the Plex Smart Manufacturing Platform, and industrial AI for real-time optimisation and predictive maintenance. Rockwell frames the progression as automation → intelligence → autonomy: embedding AI directly into operations for real-time optimisation, predictive insights, and autonomous control across design, commissioning, and maintenance cycles.

Why it matters: Industrial OR's next frontier is closed-loop autonomy: digital twins generate scenarios, solvers optimise responses, and actuators execute without human approval cycles. Rockwell's architecture -- digital twin simulation feeding real-time scheduling -- is the manufacturing equivalent of what project44 is building for logistics. The convergence of both at roughly the same moment confirms that autonomous decision execution is the next competitive layer across physical operations, not just a future aspiration.

→ Rockwell Automation Hannover Messe 2026
Industry · Digital Twin · Supply Chain Optimisation Siemens + PepsiCo · Apr 6, 2026

PepsiCo Deploys Siemens + NVIDIA AI-Powered Digital Twin Across Supply Chain and Manufacturing

PepsiCo has partnered with Siemens and NVIDIA to deploy an AI-powered digital twin across its supply chain and manufacturing operations, announced ahead of Hannover Messe 2026. The Siemens Insights Hub Production Co-pilot -- built on generative AI and agentic intelligence -- lets operators interact with manufacturing data using natural language, accelerating root-cause analysis and delivering optimisation recommendations in real time. Siemens NX X Manufacturing Co-pilot adds AI-assisted machining strategies and workflow optimisation for engineering teams, with KUKA robotics integrations cutting programming time for simple warehouse tasks by up to 80%.

Why it matters: When PepsiCo -- one of the world's largest fast-moving consumer goods (FMCG) supply chains -- deploys a Siemens+NVIDIA digital twin stack at enterprise scale, it marks the infrastructure moment for physical-digital integration in manufacturing optimisation. This follows the trajectory of cloud computing: from pilot to standard in five years. Organisations that haven't mapped their manufacturing and supply chain operations to a digital twin layer are now structurally behind organisations that have.

→ Siemens Insights Hub Blog

Research Papers

Research · Fleet Optimisation · Dynamic Programming arXiv:2604.02768 · 3 Apr 2026

Rollout-Based Charging Scheduling for Electric Truck Fleets in Large Transportation Networks

Ting Bai, Xinfeng Ru, Shaoyuan Li, Andreas A. Malikopoulos

Addresses charging schedule optimisation for large electric truck fleets with dedicated infrastructure, where a central coordinator must determine charging sequences and power allocation to minimise operational cost. The problem combines discrete sequencing with continuous control, making exact solutions computationally infeasible for real-time use. The paper proposes a rollout-based dynamic programming (DP) framework with a two-layer inner-outer structure that decouples discrete ordering decisions from continuous schedule optimisation -- achieving near-optimal solutions with polynomial-time complexity. The framework adapts dynamically to vehicle arrivals and time-varying electricity prices.

What problem it solves: Exact methods for fleet charging combine discrete sequencing with continuous power allocation -- a computationally intractable combination for real-time use. The rollout approach delivers near-optimal quality at practical speed, making live operational decisions feasible as fleets scale to thousands of vehicles.

Why it matters: Electric vehicle (EV) fleet scheduling is one of the fastest-growing OR application domains. The inner-outer decoupling of sequencing from continuous optimisation generalises directly to delivery routing, workshop scheduling, and job shop problems -- wherever discrete assignment and continuous resource allocation are interleaved. Polynomial-time near-optimality is the key production-readiness threshold; this framework clears it for large-scale real-time use.
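The inner-outer rollout pattern is worth seeing in miniature. Below is a toy sketch, not the authors' implementation: the slot prices, truck demands, deadlines, tardiness penalty, and the earliest-deadline base heuristic are all invented, and the inner layer is a simple sequential fill rather than a true continuous optimisation. It shows the core idea: the discrete layer picks each next truck by completing the remaining sequence with a base heuristic and comparing full-sequence cost.

```python
# Toy rollout for charging-sequence selection (illustrative only; all
# data and the base heuristic are invented for this sketch).

PRICES = [0.30, 0.10, 0.50, 0.20, 0.15, 0.40, 0.25, 0.35]  # cost per energy unit per slot
CAP = 2.0            # energy the charger can deliver in one slot
LATE_PENALTY = 10.0  # cost per slot of tardiness

DEMAND   = {"A": 3.0, "B": 2.0, "C": 4.0}  # energy each truck needs
DEADLINE = {"A": 3, "B": 1, "C": 5}        # slot by which charging must finish

def inner_cost(sequence):
    """Inner (continuous) layer: charge trucks back-to-back in the given
    order, filling slots left to right; energy cost plus tardiness."""
    cost, slot, room = 0.0, 0, CAP
    for truck in sequence:
        need, finish = DEMAND[truck], slot
        while need > 1e-9:
            take = min(need, room)
            cost += take * PRICES[slot]
            need -= take
            room -= take
            finish = slot
            if room <= 1e-9:
                slot, room = slot + 1, CAP
        cost += LATE_PENALTY * max(0, finish - DEADLINE[truck])
    return cost

def base_policy(remaining):
    """Base heuristic the rollout improves on: earliest deadline first."""
    return sorted(remaining, key=DEADLINE.get)

def rollout():
    """Outer (discrete) layer: choose each next truck by evaluating a
    base-policy completion of the rest of the sequence."""
    chosen, remaining = [], set(DEMAND)
    while remaining:
        best = min(
            remaining,
            key=lambda t: inner_cost(chosen + [t] + base_policy(remaining - {t})),
        )
        chosen.append(best)
        remaining.remove(best)
    return chosen, inner_cost(chosen)

order, cost = rollout()
print(order, round(cost, 2))  # prints ['B', 'A', 'C'] 2.35
```

Each outer step costs one base-policy completion per candidate, which is what keeps the whole procedure polynomial while inheriting the cost-improvement guarantee of rollout over its base heuristic.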

→ View Paper on arXiv
Research · Two-Stage Robust Optimisation · Energy Systems arXiv:2604.03475 · 3 Apr 2026

Scheduling Electricity Production Units to Mitigate Severe Weather Impact: An Efficient Computational Implementation

Yongzheng Dai, Antonio J. Conejo, Feng Qiu

Addresses how electric utilities can pre-position generation units to minimise load shedding during extreme weather events that cause worst-case transmission failures. The problem is formulated as a Two-Stage Robust Optimisation (TSRO) model: generation unit commitment in stage 1, power dispatch under the worst-case disruption scenario in stage 2. Solved via a problem-specific outer approximation algorithm combined with a column-and-constraint generation framework that incorporates convexified alternating current (AC) power flow constraints. The resulting algorithm outperforms off-the-shelf solvers while delivering a more physically accurate model than prior literature.

What problem it solves: Standard unit commitment tools optimise for average conditions; severe weather introduces adversarial transmission failures that demand explicit worst-case guarantees. This paper makes convex, physically-accurate robust scheduling computationally tractable -- replacing infeasible exact solvers with a structured approximation that is both faster and more accurate.

Why it matters: Energy grid resilience under severe weather is a direct proxy for supply chain resilience under tariff disruption or logistics failures: both require operations that remain feasible under adversarial conditions, not just stochastic variation. The column-and-constraint generation framework introduced here directly underpins today's Term of the Day, and the pattern -- model worst-case as a two-stage problem, solve efficiently with a structured decomposition -- is broadly applicable across infrastructure scheduling.

→ View Paper on arXiv
Research · Stochastic Optimisation · Energy Markets arXiv:2604.01755 · 2 Apr 2026

Day-Ahead Offering for Virtual Power Plants: A Stochastic Linear Programming Reformulation and Projected Subgradient Method

Weiqi Meng, Hongyi Li, Bai Cui

Virtual power plants (VPPs) must determine optimal pricing and quantity bids in day-ahead energy markets under uncertain renewable generation and volatile prices. The paper formulates this as a two-stage stochastic adaptive robust optimisation problem with Markovian price uncertainty. The core innovation is an inner-approximation-based projected subgradient method that reformulates the intractable robust second-stage problem as a linear program (LP) with a nested resource allocation structure, enabling a greedy algorithm to exploit the isotonic structure of feasible regions. The result: approximately two orders of magnitude (100x) speedup over conventional methods while preserving solution quality.

What problem it solves: Day-ahead VPP bidding windows are narrow; standard robust optimisation is too slow for real-time market submission. The inner approximation technique converts a nonlinear intractable problem into a structured LP, making adversarial-robust bidding feasible within operational time constraints.

Why it matters: A 100x speedup from mathematical reformulation, not faster hardware, is the template every practitioner should internalise: before investing in compute, check whether the formulation can be restructured. VPPs are also the energy-sector equivalent of distributed planning units in supply chains -- where aggregate behaviour emerges from many local decisions under shared uncertainty. The inner approximation framework applies anywhere robust optimisation is needed but conventional solvers are too slow for the decision window.
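To make the method family concrete, here is a textbook projected-subgradient loop on a worst-case (max-of-linear) objective over a unit resource budget (a simplex). This is generic intuition-building code, not the paper's inner-approximation algorithm, and the scenario cost data are invented: the inner adversary picks the worst scenario at the current allocation, and the outer loop steps against that scenario's subgradient, then projects back onto the budget.

```python
# Projected subgradient for min_x max_s (C[s] . x) over the simplex.
# Textbook sketch with invented data -- not the paper's algorithm.

C = [[1.0, 2.0, 3.0],   # cost of each allocation component under
     [3.0, 1.0, 2.0],   # three adversarial price scenarios
     [2.0, 3.0, 1.0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def worst_case(x):
    """Inner adversary: index and value of the costliest scenario."""
    vals = [dot(row, x) for row in C]
    s = max(range(len(C)), key=vals.__getitem__)
    return s, vals[s]

def project_simplex(v):
    """Euclidean projection onto {x >= 0, sum(x) = 1} (sort-based)."""
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        cum += ui
        if ui - (cum - 1.0) / (i + 1) > 0:
            theta = (cum - 1.0) / (i + 1)
    return [max(vi - theta, 0.0) for vi in v]

x = [1.0, 0.0, 0.0]                   # start all budget on one component
best_x, best_val = x, worst_case(x)[1]
for k in range(1, 2001):
    s, val = worst_case(x)
    if val < best_val:
        best_x, best_val = x, val     # keep the best iterate seen
    step = 1.0 / k                    # diminishing step size
    x = project_simplex([xi - step * ci for xi, ci in zip(x, C[s])])

# For this symmetric instance the robust optimum is the uniform
# allocation with worst-case cost 2.0; best_val approaches it.
print([round(v, 2) for v in best_x], round(best_val, 3))
```

The paper's contribution is precisely what this sketch lacks: a reformulation that replaces the generic iteration with a structured LP whose isotonic feasible region admits a greedy solve, which is where the reported 100x speedup comes from.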

→ View Paper on arXiv

Term of the Day

Mathematical Framework · New Term First in this issue · 8 Apr 2026

Two-Stage Robust Optimisation

In Two-Stage Robust Optimisation (TSRO), decisions are split across two stages: first-stage decisions are made before uncertainty is revealed, and second-stage (recourse) decisions are made after, under whichever realisation of uncertainty occurs. The key word is robust: rather than minimising expected cost across all scenarios (as in stochastic programming), TSRO minimises cost under the worst-case scenario from a defined uncertainty set. The outer optimisation selects first-stage decisions; the inner optimisation finds the worst-case scenario those decisions must survive; the recourse layer responds to that scenario. The combined problem is usually written as a min-max-min (minimise over first-stage, maximise over adversarial scenarios, minimise over recourse actions).
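In generic notation (with x the first-stage decision at cost c, u the uncertainty drawn from set U, and y the recourse action feasible for the realised pair), the min-max-min reads:

```latex
\min_{x \in X} \; c^{\top} x \;+\; \max_{u \in U} \; \min_{y \in Y(x,\, u)} d^{\top} y
```

The outer min chooses first-stage decisions, the max is the adversary selecting the worst scenario, and the inner min is the recourse response.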

In practice, TSRO appears wherever operations must stay feasible under adversarial conditions, not just average ones: power grid unit commitment that must survive worst-case transmission failures (today's arXiv:2604.03475), supply chain inventory positioning that must absorb worst-case demand disruptions, fleet routing that must complete delivery under worst-case road closures, and day-ahead energy market bidding that must profit under worst-case price and generation combinations (today's arXiv:2604.01755).

Why practitioners confuse this with Stochastic Programming

Stochastic programming and Two-Stage Robust Optimisation both handle uncertainty, but they encode fundamentally different risk postures. Stochastic programming minimises expected cost across a probability distribution of scenarios: it implicitly assumes the distribution is known and accepts that bad scenarios will sometimes occur, compensated for by good ones. Two-Stage Robust Optimisation makes no assumption about probabilities; it minimises worst-case cost within an uncertainty set. The practical implication: stochastic programming produces plans that are on average good but can fail badly in tail scenarios; TSRO produces plans that are guaranteed to work in the worst case but may be unnecessarily conservative in average conditions. Choosing between them is not a technical question -- it is a risk appetite question. A utility protecting against a once-in-decade grid failure needs TSRO. A logistics company optimising average-case delivery cost can use stochastic programming. Most practitioners apply stochastic programming by default without asking whether their failure modes are average-case or adversarial-case.
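The difference in risk posture can be shown with a toy numeric example (all numbers invented): two candidate plans under three demand scenarios, where the expected-cost and worst-case criteria pick different plans.

```python
# Toy comparison of risk criteria (invented numbers): stochastic
# programming minimises expected cost; robust optimisation minimises
# worst-case cost -- and on this instance they disagree.

SCENARIO_PROB = [0.45, 0.45, 0.10]        # last scenario is the rare tail

# cost of each plan under each scenario
COST = {
    "lean plan":   [80.0, 90.0, 400.0],   # cheap on average, fails in the tail
    "hedged plan": [120.0, 120.0, 130.0], # pricier, but bounded everywhere
}

def expected(plan):
    return sum(p * c for p, c in zip(SCENARIO_PROB, COST[plan]))

def worst_case(plan):
    return max(COST[plan])

stochastic_choice = min(COST, key=expected)    # expected: 116.5 vs 121.0
robust_choice = min(COST, key=worst_case)      # worst case: 400 vs 130
print(stochastic_choice, robust_choice)        # prints lean plan hedged plan
```

The lean plan wins on expected cost (116.5 vs 121.0) yet loses badly in the tail scenario; which answer is "right" depends entirely on whether your failure modes are average-case or adversarial.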

The tractability challenge: TSRO problems are notoriously hard to solve because the inner maximisation (finding the worst-case scenario) and the outer minimisation (choosing first-stage actions) interact non-convexly. Column-and-constraint generation (C&CG) is the standard decomposition technique: iteratively add the worst-case scenario as a new constraint until the first-stage decision is robust against all scenarios generated so far. This is the framework used in arXiv:2604.03475. When C&CG is too slow, inner-approximation reformulations (arXiv:2604.01755) can replace the intractable robust second-stage with a structured linear program, achieving the dramatic speedups needed for time-sensitive operational decisions.
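The C&CG loop itself is compact. Below is a schematic version on a toy capacity problem with a finite decision set and a finite uncertainty set, so both master and subproblem reduce to enumeration; in real applications both are MIPs or LPs, and every number here is invented.

```python
# Schematic column-and-constraint generation (C&CG) on a toy robust
# capacity problem. Finite sets keep master and subproblem enumerable;
# all data are invented for illustration.

CANDIDATES = [0, 1, 2, 3]            # first-stage capacity levels
BUILD_COST = {0: 0.0, 1: 5.0, 2: 9.0, 3: 14.0}
SCENARIOS = [2.0, 4.0, 7.0]          # possible demand realisations

def recourse(x, demand):
    """Second-stage cost: pay 6 per unit of unmet demand."""
    return 6.0 * max(0.0, demand - 2.0 * x)

def master(scenario_pool):
    """Master: min over x of build cost + worst recourse over the
    scenarios generated so far (a relaxation of the full problem)."""
    return min(
        CANDIDATES,
        key=lambda x: BUILD_COST[x] + max(recourse(x, d) for d in scenario_pool),
    )

def ccg():
    pool = [SCENARIOS[0]]            # start with a single scenario
    while True:
        x = master(pool)
        lb = BUILD_COST[x] + max(recourse(x, d) for d in pool)
        # Subproblem: find the scenario that hurts this x the most.
        worst = max(SCENARIOS, key=lambda d: recourse(x, d))
        ub = BUILD_COST[x] + recourse(x, worst)
        if ub - lb < 1e-9:           # x already robust against everything
            return x, ub
        pool.append(worst)           # add the violating scenario; repeat

x, cost = ccg()
print(x, cost)  # prints 3 20.0
```

On this instance the loop terminates in two iterations: the first master solve is optimistic (it has only seen the mild scenario), the subproblem reveals the severe one, and the second master solve lands on the fully robust capacity level.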

Related:

Stochastic Programming · Benders Decomposition · Lagrangian Relaxation · Branch and Bound
→ arXiv:2604.03475 -- energy grid TSRO    → arXiv:2604.01755 -- VPP stochastic-robust bidding

Upcoming Conferences

Conference · Dates · Location · Key Tracks
INFORMS Analytics+ 2026 · Apr 12–14, 2026 · National Harbor, MD · Edelman Award Gala (Apr 13), GPU/Mixed Integer Programming (MIP) Panel, GenAI, Supply Chain
AGIFORS 2026 Airline Ops · Apr 13–17, 2026 · Barcelona, Spain · Airline Operations Research (OR), Crew & Maintenance Scheduling, Disruption Recovery
Hannover Messe 2026 · Apr 20–24, 2026 · Hannover, Germany · AI in Manufacturing, Physical AI, Autonomous Industrial Operations, Digital Twin
CPAIOR 2026 · May 26–29, 2026 · Rabat, Morocco · Constraint Programming (CP), Integer Programming, Operations Research Integration

Daily Synthesis

This week's signals converge on a question that every decision intelligence team needs to answer explicitly: are your optimisation systems designed for average conditions, or adversarial ones? The project44 decision44 framing (reactive to predictive), the Franz Edelman finalists (OR at national scale under resource constraints), the Rockwell and Siemens manufacturing platforms, and both energy papers this week are all navigating the same underlying tension. Two-Stage Robust Optimisation is not just a technique -- it is a risk philosophy encoded as mathematics.

  • project44 decision44 (today) and the Franz Edelman Gala (April 13) bracket this week as the highest-signal 7-day period for applied decision intelligence in Q2 2026 -- one showing where commercial DI is heading, the other confirming where it has already delivered measurable impact.
  • Rockwell and Siemens both announcing autonomous operations platforms for Hannover Messe (April 20-24) is the industrial automation sector making its bet explicit: the future is closed-loop optimisation, where digital twins generate scenarios, solvers select actions, and actuators execute -- removing the human approval step that makes current systems reactive by design.
  • The 100x speedup in arXiv:2604.01755 from mathematical reformulation alone is the most important practitioner lesson this week: compute is not always the bottleneck. Choosing the right model structure -- inner approximation vs. outer approximation, linear program (LP) reformulation vs. direct mixed-integer programming (MIP) solve -- can dwarf any hardware investment for time-sensitive decision problems.
  • Two-Stage Robust Optimisation is the mathematical vocabulary for the risk posture that every supply chain planner implicitly takes when they say "our plan needs to be resilient to disruption." Making that posture explicit -- defining the uncertainty set, choosing TSRO over stochastic programming -- is the difference between resilience as a design principle and resilience as an aspiration.

For practitioners: Before the INFORMS Analytics+ conference opens on April 12, ask your team one question: when your planning system says a schedule is "optimal," has it been tested against worst-case scenarios, or only average ones? If the honest answer is average, you are using stochastic or deterministic methods where your risk profile may require robust methods. The Edelman finalists this year -- from PDS food distribution at India scale to GPU-accelerated carbon scheduling -- are all systems where that question was answered explicitly before deployment.

Decision Optimisation Radar · nexmindai.org

← Back to Radar Archive