DAILY INTELLIGENCE BRIEF
Decision Optimisation Radar
Sunday, 23 March 2026 • Daily Edition
Industry & Tool Updates
INDUSTRY UPDATE #1
Kinaxis + NVIDIA cuOpt: 12× Faster Supply Chain Optimisation at 50M Decision Variables
March 16, 2026 | Source: Kinaxis Press Release / NVIDIA GTC 2026
Kinaxis announced a landmark integration of NVIDIA cuOpt GPU acceleration into its Maestro supply chain planning platform. In benchmark tests on a semiconductor planning model with nearly 50 million decision variables, total end-to-end planning time was cut from over 3 hours to just 17 minutes, a 12× reduction. Core optimisation solve time improved 23×, cutting compute requirements by over 95% while maintaining comparable solution quality.
Why it matters for enterprises: Interactive, agent-driven supply chain replanning is now computationally feasible for large enterprises. This eliminates the overnight batch planning cycle: planners can re-optimise in real time in response to disruptions, shifting from reactive to prescriptive planning.
INDUSTRY UPDATE #2
Microsoft IFS + OptiGuide Named INFORMS Edelman 2026 Finalist
March 2026 | Source: INFORMS / Microsoft Research
Microsoft's Intelligent Fulfillment Service (IFS), which combines ML, mathematical optimisation, and agentic AI built on OptiGuide, is a 2026 Franz Edelman Award finalist. It cuts planning cycle times in half, delivers hundreds of millions in annual savings, reduces fulfillment team workload by 23%, and compresses decision cycles from days to minutes via an LLM-powered planner assistant.
Why it matters for enterprises: The clearest real-world validation of the LLM + OR hybrid stack in production at hyperscale. OptiGuide's architecture, with LLMs as intelligent interpreters of optimisation models, is a proven enterprise design template.
INDUSTRY UPDATE #3 โ TOOL
NVIDIA cuOpt Open-Sourced Under Apache 2.0
March 2026 | Source: NVIDIA Developer Blog / GitHub
NVIDIA open-sourced cuOpt under Apache 2.0, making it free for commercial use. It solves LP, MILP, and Vehicle Routing Problems (VRPs) at 10× to 5,000× the speed of CPU-based solvers, integrates with PuLP and AMPL with minimal refactoring, and offers Python API, REST API, and CLI interfaces.
Why it matters for enterprises: Zero-cost GPU-accelerated optimisation. Python teams can drop cuOpt into existing PuLP models today. The 5,000× VRP ceiling is especially relevant for last-mile delivery and dynamic fleet optimisation.
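The "drop cuOpt into existing PuLP models" claim rests on PuLP's solver abstraction: the model is written once, and only the solve call names a backend. A minimal sketch, solved here with PuLP's bundled CBC; the exact cuOpt-backed solver class depends on your PuLP and cuOpt versions, so treat that swap as an assumption to verify against the cuOpt documentation:

```python
import pulp

# Toy production plan: maximise 3x + 2y subject to capacity limits.
# The model itself is solver-agnostic; only solve() names a backend.
prob = pulp.LpProblem("toy_plan", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 3 * x + 2 * y, "profit"
prob += x + y <= 4, "shared_capacity"
prob += x <= 2, "line_capacity"

# Solved with PuLP's bundled CBC here. Per the announcement, pointing the
# same model at a cuOpt-backed solver object is the advertised migration
# path; check the cuOpt docs for the exact solver class in your versions.
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

Because the constraints and objective never mention the solver, the claimed "minimal refactoring" amounts to changing that one `solve()` argument.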
INDUSTRY UPDATE #4
SAS and FICO Named Leaders in Inaugural Gartner Magic Quadrant for Decision Intelligence Platforms
February 2026 | Source: SAS / FICO
Gartner published its inaugural Magic Quadrant for Decision Intelligence Platforms, formally recognising DI as a distinct enterprise software category. SAS (Viya) and FICO were named Leaders; IBM, Pega, and Quantexa were also assessed. Evaluation covers the full decision lifecycle: modelling, orchestration, monitoring, governance, and agentic AI integration.
Why it matters for enterprises: DI has crossed the chasm to enterprise procurement maturity. Expect accelerated consolidation across platforms combining prescriptive analytics, ML, and mathematical optimisation.
INDUSTRY UPDATE #5
SCIP Platform Launches "Optimize & Assure" Modules
February 2026 | Source: BusinessWire
SCIP expanded its platform with two new decision intelligence modules, Optimize and Assure, focused on clarifying priorities, validating choices, and building confidence in supply chain decisions under disruption. The modules deliver real-time prescriptive analytics and automated workflows.
Why it matters for enterprises: "Decision quality" as a product feature reflects a growing enterprise need for auditability and confidence in AI-driven decisions. Expect the "assurance layer" pattern to become standard across OR-adjacent enterprise software.
RESEARCH PAPER #1
OR-LLM-Agent: End-to-End Reasoning Agent for Operations Research Problems
arXiv: 2503.10009 | cs.AI
First fully automated end-to-end AI agent for real-world OR problems. Uses reasoning LLMs (GPT-o3-mini, DeepSeek-R1, Gemini 2.0 Flash Thinking) to translate natural-language problem descriptions into formal models and Gurobi solver code via three sub-agents: modelling → code generation → debugging. Achieves 100% pass rate and 85% solution accuracy on the new BWOR benchmark.
What problem it solves: Removes the OR domain expertise barrier โ natural-language problem descriptions are automatically translated into executable solver code end-to-end.
Why it matters: Democratises OR modelling for enterprises without dedicated optimisation teams. Combined with open solvers (HiGHS, cuOpt), this creates a zero-cost end-to-end optimisation pipeline for Python developers.
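The solver half of that zero-cost pipeline is already one import away: HiGHS ships as an LP backend inside SciPy, so code generated by an agent has a free executor to target. A toy sketch (the problem data is invented for illustration):

```python
from scipy.optimize import linprog

# Toy sourcing problem: meet demand of 10 units from two suppliers with
# unit costs 2 and 3, where supplier 1 can ship at most 6 units.
# linprog minimises c @ x subject to A_ub @ x <= b_ub, so the demand
# constraint "x + y >= 10" becomes "-x - y <= -10".
res = linprog(
    c=[2, 3],
    A_ub=[[-1, -1]],
    b_ub=[-10],
    bounds=[(0, 6), (0, None)],   # supplier 1 capped at 6 units
    method="highs",               # the open-source HiGHS solver
)
print(res.status, res.fun)  # status 0 = optimal
```

An agent pipeline of the kind OR-LLM-Agent describes would emit code shaped like this from a plain-English problem statement, then run and debug it automatically.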
RESEARCH PAPER #2
OR-R1: Automating OR Optimisation via Test-Time Reinforcement Learning (AAAI 2026)
arXiv: 2511.09092 | cs.AI โ AAAI 2026
Data-efficient LLM training for OR: SFT for reasoning patterns + Test-time Group Relative Policy Optimisation (TGRPO) for iterative self-improvement. Uses only 1/10th the synthetic training data of prior SOTA (ORLM); achieves 67.7% average accuracy across 6 benchmarks including NL4OPT, MAMO ComplexLP, and IndustryOR.
What problem it solves: Reduces data and expertise cost of training LLMs for OR, narrowing the Pass@1 vs. Pass@8 gap from 13% to 7%.
Why it matters: RL-based post-training + test-time compute scaling = LLMs that improve at OR through experience. A strong signal for LLM-powered solver ecosystems.
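To make the Pass@1 vs. Pass@8 gap concrete: pass@k is the probability that at least one of k sampled solutions is correct, usually computed with the unbiased estimator popularised by LLM code-generation benchmarks. A stdlib sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n attempts of which c are
    correct, is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample with misses
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 8 attempts and 3 correct: pass@1 = 0.375, pass@8 = 1.0.
# Narrowing the pass@1 vs. pass@k gap means single-shot answers are
# approaching what repeated sampling can achieve.
print(pass_at_k(8, 3, 1), pass_at_k(8, 3, 8))
```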
RESEARCH PAPER #3
PyJobShop: Constraint Programming for Scheduling in Python
arXiv: 2502.13483 | cs.AI
New open-source Python library for scheduling (FJSP, RCPSP, open shop, parallel machines) backed by OR-Tools CP-SAT. Clean modelling API, PSPLIB benchmark loading, support for exact CP solving and metaheuristic integration.
What problem it solves: No unified Python CP library previously covered this breadth of scheduling variants with this level of API ergonomics.
Why it matters: Python-first teams in manufacturing, logistics, and project management now have a production-ready, free alternative to commercial scheduling solvers.
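To make the flexible job shop (FJSP) concrete without assuming PyJobShop's exact API: each task may run on several machines with machine-dependent durations, and the goal is to minimise makespan. A brute-force stdlib sketch of a toy instance (data invented for illustration; CP-SAT performs the same search at scale via constraint propagation rather than enumeration):

```python
from itertools import product

# Toy flexible job shop: each task may run on either of two machines,
# with machine-dependent durations; tasks are independent here.
durations = {           # durations[task][machine]
    "t1": {"m1": 3, "m2": 5},
    "t2": {"m1": 4, "m2": 2},
    "t3": {"m1": 2, "m2": 4},
}

def makespan(assignment):
    # Each machine processes its tasks back to back; the makespan is
    # the finish time of the busiest machine.
    load = {"m1": 0, "m2": 0}
    for task, machine in assignment.items():
        load[machine] += durations[task][machine]
    return max(load.values())

tasks = list(durations)
best = min(
    (dict(zip(tasks, choice))
     for choice in product(["m1", "m2"], repeat=len(tasks))),
    key=makespan,
)
print(best, makespan(best))
```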
RESEARCH PAPER #4
Luca: LLM-Upgraded Graph RL for Carbon-Aware Job Scheduling
arXiv: 2512.06351 | cs.AI
Pre-trained LLM encodes machine states, job properties, and carbon intensity forecasts into embeddings that inform a graph RL policy (PPO). Multi-criteria objective: minimise both makespan and carbon emissions. LLM embeddings bridge unstructured context and graph-structured scheduling state.
What problem it solves: Pure graph RL cannot incorporate heterogeneous contextual signals (energy prices, carbon intensity, machine descriptions); LLM embeddings bridge that gap without manual feature engineering.
Why it matters: With sustainability regulations mandating carbon accounting, scheduling systems must optimise for both throughput and ESG metrics. Luca's architecture injects these signals directly into OR decision loops.
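The multi-criteria idea is easy to sketch without the RL or LLM machinery: score each candidate machine by a weighted sum of completion time and carbon cost. A greedy stdlib dispatch rule (machine names, numbers, and the weight ALPHA are all illustrative, not from the paper):

```python
# Greedy carbon-aware dispatch: assign each job to the machine minimising
# a weighted sum of completion time and carbon emitted.
machines = {
    "fast_dirty": {"speed": 2.0, "carbon_intensity": 5.0, "free_at": 0.0},
    "slow_clean": {"speed": 1.0, "carbon_intensity": 1.0, "free_at": 0.0},
}
ALPHA = 0.5  # trade-off weight: carbon cost vs. completion time

def dispatch(work_units):
    def cost(name):
        m = machines[name]
        runtime = work_units / m["speed"]
        finish = m["free_at"] + runtime
        carbon = runtime * m["carbon_intensity"]
        return finish + ALPHA * carbon
    best = min(machines, key=cost)
    machines[best]["free_at"] += work_units / machines[best]["speed"]
    return best

# Light jobs go to the clean machine until its queue makes it slower.
schedule = [dispatch(w) for w in [4.0, 4.0, 2.0]]
print(schedule)
```

Luca's contribution is learning this trade-off with a graph RL policy instead of a fixed ALPHA, with LLM embeddings supplying the contextual signals.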
RESEARCH PAPER #5
AlphaEvolve: Gemini-Powered Evolutionary Agent Recovers 0.7% of Google's Global Compute
arXiv: 2506.13131 | Google DeepMind
LLMs (Gemini 2.0 Flash + Pro) propose algorithmic mutations in an evolutionary loop; automated evaluators score solutions; best solutions propagate. Deployed internally at Google for over a year. 0.7% of Google's worldwide computing resources recovered continuously. 23% speedup on Gemini's matrix multiplication kernel. 32.5% speedup on FlashAttention.
What problem it solves: Algorithm design and system-level scheduling optimisation have historically required scarce human experts. AlphaEvolve automates both discovery and ongoing tuning at scale.
Why it matters: For enterprises managing large compute or logistics fleets, evolutionary LLM agents applied to scheduling and resource allocation represent the next frontier of prescriptive analytics.
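AlphaEvolve's propose-evaluate-propagate loop can be sketched in miniature, with random numeric mutation standing in for the Gemini proposer and a toy scoring function standing in for Google's automated evaluators (everything here is illustrative):

```python
import random

random.seed(0)

# Toy evaluator standing in for AlphaEvolve's automated evaluators:
# higher is better, optimum at params == (3, -1).
def evaluate(params):
    return -((params[0] - 3) ** 2 + (params[1] + 1) ** 2)

# Mutation operator standing in for the LLM proposer: nudge one coordinate.
def mutate(params):
    child = list(params)
    i = random.randrange(len(child))
    child[i] += random.uniform(-1, 1)
    return child

population = [[0.0, 0.0] for _ in range(8)]
for _ in range(200):
    parent = max(population, key=evaluate)                 # propagate the best
    child = mutate(parent)                                 # propose a mutation
    worst = min(range(len(population)), key=lambda i: evaluate(population[i]))
    if evaluate(child) > evaluate(population[worst]):
        population[worst] = child                          # keep improvements

best = max(population, key=evaluate)
print(best, evaluate(best))
```

In AlphaEvolve the "params" are programs and the evaluators are real benchmarks, but the loop structure is the same.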
| Conference | Key Date | Location | Relevance |
| --- | --- | --- | --- |
| CPAIOR 2026 | Notifications Mar 23 · May 26–29 | Rabat, Morocco | CP + AI + OR; LLM-based solving; first time in Africa |
| IPCO 2026 | Final papers Mar 25 · Jun 17–19 | Padova, Italy | MIP theory; combinatorial algorithms |
| INFORMS IOS 2026 | Concluded Mar 20–22 | Atlanta, Georgia | Stochastic optimisation; papers landing on arXiv now |
| INFORMS Edelman 2026 | Winner announced at Analytics+ Conference | TBD | 6 finalists: Microsoft, NVIDIA, Chewy + 3 others |
| ICAPS 2026 | Jun 27 – Jul 2 | Dublin, Ireland | Automated planning & scheduling; agentic systems |
Daily Synthesis
This week's signals converge on what may be the most consequential architectural shift in decision optimisation since the MIP revolution of the 1990s: the LLM-as-formulator, solver-as-executor stack is reaching production readiness.
The expertise gap is closing. OR-LLM-Agent and OR-R1 together demonstrate that reasoning LLMs are achieving 85% accuracy on real OR problem sets; the domain expertise barrier that previously kept optimisation out of reach for most engineering teams is dissolving.
The hardware ceiling is gone. Kinaxis + NVIDIA's 12× speedup on 50M-variable models removes the last major latency argument against real-time replanning: overnight batch cycles are now optional, not required.
Category maturity signal. The inaugural Gartner Magic Quadrant for Decision Intelligence Platforms marks the moment DI crossed the chasm from specialist tool to enterprise software procurement category.
The durable moat has shifted. It is no longer access to solvers (cuOpt is now free) or the ability to formulate models (LLMs are closing that gap); it is proprietary decision data, validated constraint libraries, and institutional knowledge of what makes a good solution in your domain.
For practitioners: Organisations that begin systematically capturing decision history, constraint structures, and outcome labels today are building the most defensible long-term advantage in this space. The solver and formulation layers are becoming commodities; the data layer is not.
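A concrete starting point for that data layer: log every decision with its inputs, active constraints, recommendation, any planner override, and the outcome label that arrives later. A minimal sketch (field names are illustrative, not a standard schema):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# One row of a "decision data layer": what the solver saw, what it
# recommended, what the planner actually did, and how it turned out.
@dataclass
class DecisionRecord:
    decision_id: str
    timestamp: str
    inputs: dict
    active_constraints: list
    recommended_action: dict
    chosen_action: dict
    outcome_label: Optional[str] = None  # filled in once results are observed

record = DecisionRecord(
    decision_id="replan-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs={"demand_forecast": 1200, "capacity": 1000},
    active_constraints=["capacity_limit", "service_level_95"],
    recommended_action={"expedite_units": 200},
    chosen_action={"expedite_units": 150},  # planner overrides are signal too
)
record.outcome_label = "stockout_avoided"  # label arrives after the fact
print(json.dumps(asdict(record), indent=2))
```

The gap between `recommended_action` and `chosen_action`, joined with `outcome_label`, is exactly the proprietary training signal the synthesis argues is the durable moat.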
Generated by Decision Optimisation Radar, automated daily scan | 23 March 2026