
Industry Signals

Industry · Solver · Hexaly · Feb 2026 release / showcased Apr 13–17

Hexaly Optimizer 14.5 Showcased at AGIFORS 2026 & INFORMS Analytics+

Hexaly released Optimizer 14.5 in February 2026, delivering major performance improvements in routing (a 2.7% average gap on 1,000-customer instances of the Prize-Collecting Vehicle Routing Problem with Time Windows (PC-VRPTW) benchmark), scheduling, packing, and nonlinear optimisation. The solver is being showcased this week at AGIFORS 2026 (Barcelona, Apr 13–17) and INFORMS Analytics+ 2026 (National Harbor, Apr 12–14), where live benchmark comparisons against Gurobi, CPLEX, and OR-Tools cover routing, scheduling, and packing problem classes.

Why it matters: Hexaly's routing benchmarks push into territory previously dominated by dedicated Vehicle Routing Problem (VRP) solvers. Practitioners at both conferences this week will have direct access to live comparisons shaping solver selection decisions for the rest of 2026.

→ Hexaly Optimizer 14.5    → AGIFORS 2026    → INFORMS Analytics+ 2026
Industry · GPU Optimisation · INFORMS Analytics+ 2026 · Apr 14, 2:30–4:30 pm

INFORMS Analytics+ 2026: First Dedicated GPU for Linear Programming (LP)/Mixed Integer Programming (MIP) Panel (Apr 14)

The 2026 INFORMS Analytics+ Conference (Apr 12–14, National Harbor, MD) hosts a dedicated 2-hour session on Tuesday April 14: "Understanding the Potential Role of GPUs for Linear Programming (LP) and Mixed Integer Programming (MIP) within Industrial Strength Optimisation." Panelists include NVIDIA and FICO representatives alongside Gurobi's Ed Klotz. The same conference hosts the Franz Edelman Award competition — with Microsoft IFS/OptiGuide among finalists — and tracks on GenAI, Supply Chain, and Analytics in Practice.

Why it matters: A dedicated GPU/MIP panel at the premier applied Operations Research (OR) conference signals that the practitioner community is ready to move from "GPU solvers are impressive" to "here is when GPU LP/MIP pays off in production." The session will likely crystallise consensus on problem classes, instance sizes, and infrastructure choices — directly informing cloud solver decisions through year-end 2026.

→ INFORMS Analytics+ 2026
Industry · Tool Integration · Nextmv · Apr 2026

Nextmv Adds NVIDIA cuOpt Support for Production GPU-Accelerated VRP

Nextmv, the decision operations platform used by logistics and last-mile delivery teams, announced native support for NVIDIA cuOpt as a solver backend. Following cuOpt's Apache 2.0 open-sourcing in March 2026, the integration enables existing Python/PuLP vehicle routing models to run GPU-accelerated solving via Nextmv's Application Programming Interface (API). Nextmv provides version management, model testing, and A/B solver comparison tooling — meaning teams can benchmark Central Processing Unit (CPU) vs. Graphics Processing Unit (GPU) solver performance in production without custom engineering effort.

Why it matters: This integration closes the final gap between research-grade GPU solvers and production decision systems. Logistics teams can now adopt GPU-accelerated VRP within their existing decision infrastructure, version-control solver updates, and run controlled A/B comparisons between CPU and GPU backends — turning cuOpt from a research artefact into a production-switchable component.
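The A/B pattern described above can be sketched in plain Python. The harness below is illustrative only — it does not use the Nextmv SDK, and the solver callables and instance format are stand-ins:

```python
import statistics
import time

def ab_compare(solve_a, solve_b, instances, label_a="cpu", label_b="gpu"):
    """Run two solver backends on the same instances and collect
    wall-clock time and objective value for each (illustrative harness,
    not the Nextmv SDK)."""
    results = {label_a: [], label_b: []}
    for inst in instances:
        for label, solve in ((label_a, solve_a), (label_b, solve_b)):
            start = time.perf_counter()
            objective = solve(inst)
            results[label].append((time.perf_counter() - start, objective))
    return {
        label: {
            "median_time_s": statistics.median(t for t, _ in runs),
            "mean_objective": statistics.fmean(obj for _, obj in runs),
        }
        for label, runs in results.items()
    }

# Stand-in "solvers": same objective value, different speed profiles.
instances = [list(range(50)) for _ in range(5)]
report = ab_compare(lambda i: sum(sorted(i)), lambda i: sum(i), instances)
```

In a production setting the two callables would wrap CPU and GPU solver endpoints behind the same interface; the key design point is that both backends see identical instances, so objective-quality and latency differences are attributable to the solver alone.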

→ Nextmv cuOpt Announcement    → GitHub: NVIDIA/cuopt
Industry · Airline Operations · AGIFORS 2026 · Apr 13–17, Barcelona

AGIFORS 2026 Barcelona: Airline Operations Research (OR) Under the Theme of Disruption & Uncertainty

The AGIFORS (Airline Group of the International Federation of Operational Research Societies) Airline Operations Study Group Meeting convenes April 13–17 at the Hilton Barcelona. The 2026 theme is "Efficiency and Resiliency Amidst Economic and Political Uncertainty." Sessions span flight operations research, crew scheduling, maintenance optimisation, and disruption recovery, with vendor demonstrations including Hexaly Optimizer 14.5.

Why it matters: Aviation is among the highest-stakes OR domains — real-time replanning across millions of variables under volatile constraints. What the airline OR community standardises on at AGIFORS typically reaches commercial planning software within 2–3 years, making this a leading indicator for the broader enterprise OR market.

→ AGIFORS 2026 Airline Ops

Research Papers

Research · Large Language Model (LLM)+OR · Interactive Optimisation · arXiv:2604.02666 · 3 Apr 2026

Let's Have a Conversation: Designing and Evaluating LLM Agents for Interactive Optimisation

Joshua Drossman, Alexandre Jacquillat, Sébastien Martin — MIT

Proposes a scalable methodology for evaluating Large Language Model (LLM)-powered optimisation agents through structured multi-turn conversations rather than one-shot queries. The authors build stakeholder agents governed by internal utility functions and generate thousands of simulated conversations in a school scheduling case study. Domain-tailored agents — equipped with structured tools and domain-specific prompts — converge to significantly higher-quality solutions in fewer turns than general-purpose chatbots, and one-shot evaluation is shown to severely underestimate real agent quality across all conditions.

What problem it solves: Standard benchmarks evaluate LLM+OR systems on single-query performance, missing the iterative negotiation that real planning decisions require. This paper formalises how to measure and improve agents across a full conversation arc — objectives, constraint relaxation, and trade-off exploration included.

Why it matters: As organisations deploy LLM planning agents, interactive solution refinement — not just a single best answer — is the key product differentiator. This paper provides a replicable evaluation framework practitioners can apply to domain-specific scheduling and allocation problems before deployment.

→ View Paper on arXiv
Research · Stochastic Optimisation · VRP · arXiv:2604.02496 · 2 Apr 2026

On Vehicle Routing Problems with Stochastic Demands — Scenario-Optimal Recourse Policies

Matheus J. Ota, Ricardo Fukasawa — University of Waterloo

Addresses two-stage Vehicle Routing Problems with Stochastic Demands (VRPSDs), where routes are pre-planned, customer demands are revealed on vehicle arrival, and recourse actions are triggered when capacity is exceeded. The paper introduces Scenario Recourse Inequalities (SRIs), a new class of valid inequalities that cast recourse policies as solutions to a higher-dimensional Mixed Integer Program (MIP), enabling exact solution under scenario-optimal recourse with provable optimality certificates from empirical demand distributions.
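To make the two-stage setting concrete, here is a minimal Monte Carlo sketch using the classical detour-to-depot recourse policy — a far simpler policy than the paper's scenario-optimal recourse, and with an illustrative toy instance rather than anything from the paper:

```python
import random

def route_cost_with_recourse(route, demands, capacity, dist, depot=0):
    """Cost of a fixed route under one demand scenario with classical
    detour-to-depot recourse: when the onboard load would exceed
    capacity, the vehicle returns to the depot to unload, then resumes."""
    cost, load, prev = 0.0, 0, depot
    for cust in route:
        cost += dist(prev, cust)
        if load + demands[cust] > capacity:          # recourse triggered
            cost += dist(cust, depot) + dist(depot, cust)
            load = 0
        load += demands[cust]
        prev = cust
    return cost + dist(prev, depot)

def expected_recourse_cost(route, sampler, capacity, dist, n=1000, seed=0):
    """Monte Carlo estimate of expected cost over sampled demand scenarios."""
    rng = random.Random(seed)
    total = sum(
        route_cost_with_recourse(route, sampler(rng), capacity, dist)
        for _ in range(n)
    )
    return total / n

# Toy instance: depot and three customers on a line.
pos = {0: 0, 1: 1, 2: 2, 3: 3}
dist = lambda a, b: abs(pos[a] - pos[b])
sampler = lambda rng: {c: rng.choice([1, 2]) for c in (1, 2, 3)}
est = expected_recourse_cost([1, 2, 3], sampler, capacity=4, dist=dist)
```

The paper's contribution is precisely to move beyond this kind of fixed heuristic policy: SRIs let the MIP choose the best recourse action per scenario while retaining exact optimality certificates.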

What problem it solves: Existing VRPSD approaches either use heuristic recourse (no guarantees) or produce overly conservative bounds. Scenario Recourse Inequalities (SRIs) provide tight MIP formulations under realistic, empirically-estimated demand scenarios without sacrificing solution quality or tractability.

Why it matters: Stochastic last-mile delivery is one of the most commercially significant VRP variants. A principled MIP framework for scenario-optimal recourse gives logistics OR teams both better solutions and formal certificates — directly applicable to tariff-disrupted, demand-uncertain supply chains where heuristic recourse policies carry unacceptable operational risk.

→ View Paper on arXiv
Research · Neurosymbolic Artificial Intelligence (AI) · Agentic Planning · arXiv:2604.00555 · 1 Apr 2026

Ontology-Constrained Neural Reasoning in Enterprise Agentic Systems: A Neurosymbolic Architecture for Domain-Grounded AI Agents

Thanh Luong Tuan — Golden Gate University, San Francisco

Proposes a neurosymbolic architecture within the Foundation AgenticOS (FAOS) platform that constrains LLM agent outputs via a three-layer ontological framework — Role, Domain, and Interaction ontologies. Evaluated across 600 runs in five industries (FinTech, Insurance, Healthcare, Vietnamese Banking, Vietnamese Insurance), ontology-coupled agents significantly outperform ungrounded agents on Metric Accuracy, Regulatory Compliance, and Role Consistency, with the largest gains in domains where LLM parametric knowledge is weakest.

What problem it solves: LLM hallucination, domain drift, and regulatory non-compliance in high-stakes enterprise deployments. Ontological constraints act as verifiable semantic guardrails without requiring full model retraining — making compliance by construction achievable at deployment time rather than training time.

Why it matters: Decision intelligence deployments in regulated industries face compliance barriers that pure LLM approaches cannot satisfy. This paper provides a replicable architectural template — ontology as the compliance layer — that aligns with the verifiability trend seen in Palantir's AIP Analyst dependency graph and EVOM's solver-as-verifier paradigm from last week's edition, completing a three-pillar picture of verifiable enterprise AI.

→ View Paper on arXiv

Term of the Day

Modelling Concepts · New Term · First in this issue · 7 Apr 2026

Pareto Optimality

A solution is Pareto optimal (also called Pareto efficient) when no objective can be improved without worsening at least one other objective. The Pareto front is the complete set of all Pareto-optimal solutions for a given multi-objective problem. Every point on the front represents a valid, non-dominated trade-off, and the front can contain infinitely many solutions. Choosing a single solution from the Pareto front requires a further step: specifying a preference function, setting weights, or applying a utility model to express how much one objective is worth sacrificing relative to another.

In practice, Pareto optimality appears wherever two or more objectives compete: cost versus service level in supply chain planning, cost versus patient outcome in healthcare scheduling, cost versus carbon in energy dispatch, risk versus return in portfolio optimisation, and delivery speed versus fuel consumption in transport routing. The Ota and Fukasawa stochastic VRP paper from today's issue is implicitly navigating a Pareto front over cost and recourse risk every time the solver selects a route plan under demand uncertainty.
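The definition translates directly into a non-dominated filter. The toy supply-chain plans below are illustrative, with both objectives minimised:

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly
    better on at least one (both objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy supply-chain plans: (cost, delivery_days)
plans = [(100, 5), (120, 3), (150, 2), (130, 4), (160, 6)]
front = pareto_front(plans)   # (130, 4) and (160, 6) are dominated
```

Note that the three surviving plans are mutually incomparable: each beats the others on one objective and loses on the other, which is exactly the "no free lunch" property discussed below.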

Why practitioners misread this

The most common error is treating "Pareto optimal" as a synonym for "the best solution." In reality, the Pareto front contains many solutions, none of which is uniquely best without additional preference information. A supply chain plan that is cheapest-but-slowest and a plan that is fastest-but-most-expensive can both be Pareto optimal simultaneously. The label signals only that no free lunch exists: you cannot get more of one objective without giving up something in another. The related confusion is between the Pareto front and a weighted scalarisation: if a planner sets cost weight 0.6 and service weight 0.4 and solves a single objective, they obtain one Pareto-optimal point, not the full front. Agentic AI systems that recommend "the optimal schedule" are almost always presenting a single scalarised Pareto point with hidden weights, not a dominant solution that wins on all criteria.

The scalarisation trap: When a solver minimises a weighted sum of objectives, it navigates the Pareto front implicitly. Changing the weights produces a different Pareto-optimal point. Many practitioners treat the solver's output as "the optimal answer" without realising that the weights embedded in their objective function are the decision, and changing them would yield an equally valid but very different plan. Before accepting any multi-objective recommendation, ask: what weights were assumed, and are they the right ones for this context?
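The trap is easy to demonstrate with toy numbers (illustrative plans, both objectives minimised): each weight vector returns a single, different Pareto-optimal point.

```python
def scalarise(plans, w_cost, w_days):
    """Minimise a weighted sum of the two objectives. Returns exactly
    ONE Pareto-optimal point; which one is decided entirely by the weights."""
    return min(plans, key=lambda p: w_cost * p[0] + w_days * p[1])

# Three plans, all Pareto-optimal: (cost, delivery_days)
plans = [(100, 5), (120, 3), (150, 2)]

cheap = scalarise(plans, 0.90, 0.10)   # cost-heavy weights pick (100, 5)
mixed = scalarise(plans, 0.05, 0.95)   # picks the middle plan (120, 3)
fast  = scalarise(plans, 0.01, 0.99)   # service-heavy weights pick (150, 2)
```

Note the hidden-scaling problem compounding the hidden-weights problem: because cost (≈100–150) and days (2–5) live on very different scales, the cost weight has to drop to 0.05 before the weighted sum stops defaulting to the cheapest plan.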

Related:

Mixed Integer Programming · Stochastic Programming · Scenario Recourse Inequalities
→ Pareto efficiency — Wikipedia    → Ota & Fukasawa stochastic VRP (arXiv:2604.02496)

Upcoming Conferences

Conference | Dates | Location | Key Tracks
INFORMS Analytics+ 2026 | Apr 12–14, 2026 | National Harbor, MD | GPU/MIP Panel, Edelman Award, GenAI, Supply Chain
AGIFORS 2026 Airline Ops | Apr 13–17, 2026 | Barcelona, Spain | Airline OR, Crew & Maintenance Scheduling, Disruption
ICAPS 2026 | Jun 27–Jul 2, 2026 | Dublin, Ireland | Automated Planning & Scheduling, Agentic Systems
NeurIPS 2026 | Dec 6–12, 2026 | Sydney, Australia | ML+OR, Reinforcement Learning (RL) for Optimisation, Neural Combinatorial Optimisation

Daily Synthesis

Two parallel Operations Research (OR) conference tracks this week, INFORMS Analytics+ for applied analytics practitioners and AGIFORS for aviation OR specialists, signal that decision intelligence infrastructure questions are being addressed simultaneously at the general and domain-specific levels. This is the hallmark of a maturing technology: the field moves from "can this work?" to "how do we standardise and deploy it at scale?" Running through every signal this week is a common thread: every real-world optimisation problem involves multiple objectives, and the systems being standardised must be honest about which Pareto-optimal point they are recommending, and why.

  • The INFORMS Analytics+ GPU/MIP panel (Apr 14) is the first dedicated session of its kind at the premier applied OR conference — the practitioner community is ready to standardise on when GPU-accelerated LP/MIP pays off vs. traditional CPU solvers.
  • The interactive optimisation paper (arXiv:2604.02666) reframes the LLM+OR evaluation problem: solution quality from a single query is the wrong metric; the right metric is quality across a full stakeholder conversation arc — which domain-tailored agents win decisively.
  • Ontological constraints (arXiv:2604.00555) are emerging as the enterprise compliance layer for LLM agents — complementing solver verification (EVOM) and dependency-graph transparency (Palantir AIP Analyst) as the three pillars of verifiable decision intelligence.
  • Nextmv's cuOpt integration closes the production deployment gap for GPU-accelerated Vehicle Routing Problem (VRP) solving, turning cuOpt from a research artefact into a production-switchable component with controlled A/B comparison baked in.
  • Pareto Optimality is the unspoken assumption behind every "optimal" recommendation in multi-objective decision systems. Whether it is Hexaly benchmarking across speed and quality trade-offs, the stochastic VRP balancing cost and recourse risk, or LLM agents converging on scheduling solutions across conflicting stakeholder objectives, every system is navigating a Pareto front. The systems that win enterprise trust will be those that make those trade-offs explicit rather than hiding them inside scalarisation weights.

For practitioners: This week's INFORMS Analytics+ GPU/Mixed Integer Programming (MIP) panel (Apr 14, 2:30 to 4:30 pm ET) and the Hexaly 14.5 benchmarks at AGIFORS are the two highest-signal events of Q2 2026 for OR infrastructure decisions. Both will shape solver selection conversations and cloud architecture choices through year-end 2026. Before attending either, ask your team: when we say a solution is "optimal," which Pareto-optimal point are we actually choosing, and do the embedded weights match our strategic priorities?

Decision Optimisation Radar · nexmindai.org
