Solution Methods · First introduced 23 Apr 2026

Second-Order Cone Programming

"A second-order cone program is a convex optimization problem that minimizes a linear function over the intersection of an affine set and the product of second-order cones. It is more general than linear and quadratic programming, yet can be solved with comparable efficiency." — Boyd & Vandenberghe, Convex Optimization (2004), §4.4

Optimise a linear objective over cone constraints — subsumes Linear Programming and convex Quadratic Programming, finds the global optimum in polynomial time, and is the natural solver for portfolio CVaR bounds, power flow relaxations, and distributionally robust optimisation.

Why it's needed

Many real decisions have constraints that are neither linear inequalities nor simple quadratic expressions but are still convex — the kind of shape where moving away from the boundary always gives you more room, with no hidden ridges or non-convex pockets. Classic examples: bounding the Euclidean norm of a portfolio’s weight vector (a diversification constraint), limiting the magnitude of a voltage at a grid node (a power-flow stability constraint), or controlling the signal-to-interference ratio at an antenna (a beamforming constraint).

Linear Programming (LP) cannot model these; its feasible set is defined by flat halfspaces. Quadratic Programming (QP) handles quadratic objectives over linear constraints, but cannot express quadratic or norm constraints at all. Nonlinear Programming (NLP) can model the constraints but loses the global-optimality certificate: an NLP solver might converge to a local minimum and miss the global one. What you need is a framework that handles curved constraints while retaining the efficiency and global-optimality guarantee of LP.

Second-Order Cone Programming (SOCP) does exactly this: it captures all constraints that reduce to Euclidean-norm bounds over affine expressions, and solves them to provable global optimality in polynomial time.


The convex programming hierarchy

SOCP sits in a precise hierarchy of convex programs. Each level up adds more modelling power but typically costs more per solve:

LP
Linear objective, linear constraints. Fastest solvers; no curved feasible sets.
SOCP ← here
Linear objective, cone constraints. Handles norms, quadratics, robust LP. Polynomial-time interior-point.
SDP
Linear objective, positive-semidefinite matrix constraints. More expressive; much heavier per solve.

Every LP is an SOCP (set the cone constraint to a trivial bound). Every SOCP is an SDP (lift the cone into a matrix inequality). But the reverse does not hold: not every SOCP can be written as an LP, and not every SDP reduces to an SOCP. The hierarchy is strict.


What it does

An SOCP minimises a linear objective cᵀx subject to: one or more second-order cone constraints of the form ‖Aᵢx + bᵢ‖₂ ≤ cᵢᵀx + dᵢ, and standard linear equality and inequality constraints. The cone constraint says “the Euclidean length of this affine function of x cannot exceed this other affine function of x.” The feasible set is the intersection of an affine subspace with a product of these cones.

Standard SOCP form
minimise     cᵀx
subject to   ∥Aᵢx + bᵢ∥₂ ≤ cᵢᵀx + dᵢ   for each cone i
             Fx = g   (linear equalities)

Interior-point solvers treat the cone constraint as a smooth curved boundary and navigate inside the cone using barrier functions. Because the cone is convex, there are no local minima that are not global minima, and the solver outputs a primal solution x* together with a dual certificate that proves optimality. Typical solve times for well-conditioned SOCP problems are milliseconds to seconds even at hundreds of variables.


Concrete example: portfolio CVaR with norm-bounded positions
Scenario — equity portfolio with CVaR tail bound and diversification constraint

Setup. A fund manager holds n = 200 equity positions with weight vector w (summing to 1, long-only). The weekly return distribution is approximated by S = 500 historical scenarios. The manager wants to maximise expected return, cap the 95%-Conditional Value at Risk (CVaR) of weekly loss at €2M, and enforce a diversification constraint ‖w‖₂ ≤ κ (bounding the Euclidean norm of the weight vector, which limits over-concentration in any cluster of correlated assets).

Where the cone comes from. The CVaR constraint is linearisable via the Rockafellar–Uryasev construction (see the CVaR concept page): it introduces an auxiliary threshold variable t plus 500 per-scenario slack variables and 500 linear inequalities, but stays linear. The diversification constraint ‖w‖₂ ≤ κ is exactly a second-order cone constraint over the weight vector. Together they form an SOCP: a linear objective, a set of linear constraints (CVaR, budget, long-only), and one cone constraint (norm-bounded positions).

What an SOCP solver does. MOSEK, CVXOPT, or Clarabel (via CVXPY) solves this in under 100 milliseconds for 200 assets and 500 scenarios, returning the optimal weight vector with a dual certificate proving no feasible portfolio achieves a better expected return. When a price tick arrives the problem re-solves from the warm-start dual, typically in under 10 milliseconds — fast enough for live rebalancing signals.

What a non-SOCP alternative would cost. Dropping the ‖w‖₂ bound and replacing it with individual position caps (linear constraints) is simpler but misses correlated clusters: two correlated assets can each stay under their individual cap while the joint position is dangerously large. A nonlinear solver that handles the norm constraint exactly may not certify global optimality and may not warm-start efficiently.


Common misreads
SOCP is not just Quadratic Programming with extra steps

Many practitioners treat SOCP as “Quadratic Programming (QP) with extra steps.” In fact, a convex QP minimises a quadratic objective subject to linear constraints; an SOCP minimises a linear objective subject to cone constraints. A QP with a positive-semidefinite objective can always be rewritten as an SOCP via epigraph reformulation (introduce a variable t ≥ ½xᵀQx as a cone constraint), but SOCP also handles problems a QP cannot — including hyperbolic constraints of the form xy ≥ z² (with x, y ≥ 0) and robust Linear Programming (LP) problems with ellipsoidal uncertainty, neither of which has a QP representation.
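The hyperbolic constraint is cone-representable through a standard identity: for x, y ≥ 0, xy ≥ z² holds exactly when ‖(2z, x − y)‖₂ ≤ x + y, because (x + y)² − (x − y)² = 4xy. A quick numerical spot-check of this equivalence on random points (NumPy only, with points essentially on the boundary skipped to avoid floating-point ties):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = rng.uniform(0, 5, size=2)   # x, y ≥ 0
    z = rng.uniform(-5, 5)
    if abs(x * y - z * z) < 1e-9:
        continue  # skip floating-point boundary cases
    hyperbolic = x * y >= z * z
    cone = np.hypot(2 * z, x - y) <= x + y   # ‖(2z, x−y)‖₂ ≤ x+y
    assert hyperbolic == cone
print("hyperbolic and cone forms agree on all sampled points")
```

The same identity is what turns t ≥ ½xᵀQx into a cone constraint in the QP-to-SOCP epigraph reformulation.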

Many familiar problems are SOCP in disguise

Power flow constraints (bounding the squared voltage magnitude at a grid node), truss-design problems (stress-to-weight ratio limits under load), antenna beamforming (signal-to-interference-plus-noise ratio bounds), and Lasso-regularised regression (the squared-error loss lifts into a second-order cone via its epigraph) all reduce to SOCP once reformulated. Practitioners who don’t recognise the cone structure reach for general nonlinear solvers with no global-optimality guarantee and no warm-start certificate, when an SOCP solver would find the certifiably optimal solution in polynomial time with interpretable dual prices.

Know when to go to Semidefinite Programming instead

If your constraints involve positive-semidefinite matrix variables — for example, covariance matrix estimation in a distributionally robust formulation, or stability constraints in control systems — you need Semidefinite Programming (SDP). SOCP cannot handle matrix cone constraints. But SDP solvers scale much worse (roughly O(n^3.5) versus O(n^2.5) for SOCP): choose SOCP whenever the constraint structure permits it, and escalate to SDP only when the matrix structure is irreducible. For most portfolio and power-systems applications, SOCP is sufficient and orders of magnitude faster.


Where this shows up in practice
Portfolio Optimisation
CVaR-constrained allocation with norm-bounded positions; Markowitz mean-variance via SOCP epigraph; distributionally robust portfolio with Wasserstein ambiguity set and piecewise-linear loss.
Power Systems
Second-order cone relaxations of Alternating Current (AC) Optimal Power Flow (OPF); voltage magnitude stability constraints; chance-constrained dispatch with renewable generation uncertainty.
Structural Engineering
Truss design under stress and displacement limits; topology optimisation; minimum-weight design subject to force-balance and stress-cone constraints. All reduce to SOCP and can be solved to global optimality.
Signal Processing
Robust beamforming (maximise Signal-to-Noise Ratio (SNR) subject to null-steering cone constraints); Finite Impulse Response (FIR) filter design with passband, stopband, and group-delay constraints.
Distributionally Robust Opt.
Wasserstein-ball ambiguity set with piecewise-linear loss reduces to SOCP. The Wasserstein distance constraint between the empirical distribution and worst-case distribution is a second-order cone constraint.
Robotics & Control
Model Predictive Control (MPC) with quadratic cost and cone friction constraints; safe trajectory planning where safety margins are cone-representable; spacecraft attitude control under torque limits.
Machine Learning
Sparse logistic regression with ellipsoidal constraint; kernel SVM with norm-bounded dual variables; SOCP relaxations of sparse recovery problems (robustified LASSO).
Network Design
Facility location with Euclidean distances; robust network flow with uncertain edge capacities represented as ellipsoidal sets; second-order cone network interdiction models.

The DRO connection: why SOCP powers distributionally robust decisions

Distributionally Robust Optimisation (DRO) hedges against being wrong about the probability distribution, not just about the outcome. It picks the decision that minimises the worst-case expected cost over all distributions inside an ambiguity set — a ball of plausible distributions centred on the empirical estimate.

When the ambiguity set is a Wasserstein ball (defined by optimal-transport distance between distributions) and the loss function is piecewise linear, a fundamental reformulation result (Mohajerin Esfahani & Kuhn, 2018) shows that the DRO problem reduces to a finite-dimensional SOCP. The Wasserstein distance constraint appears as a second-order cone constraint in the dual, and the piecewise-linear loss contributes linear terms only. The resulting SOCP has the same number of variables and constraints as a classical scenario-based stochastic program, and can be solved by any SOCP solver with no modification.

Why this matters operationally
A practitioner who builds a scenario-based stochastic program (say, unit commitment under wind uncertainty) can add distributional robustness — hedging against a misspecified wind distribution — by simply switching the objective from expected cost to Wasserstein-DRO cost. The model stays an SOCP. The solver does not change. The dual variables now have the additional interpretation of transport costs between empirical and worst-case scenarios.

SOCP is the practical computation engine behind the DRO revolution in operations research — the reason that distributional robustness is no longer just a theoretical property but a tractable design choice.


The geometry: what a second-order cone looks like
[Diagram — convex programming hierarchy: LP (halfspace ∩ halfspace, fastest solve) ⊂ SOCP (‖Ax+b‖₂ ≤ cᵀx+d, polynomial-time) ⊂ SDP (X ⪰ 0, most expressive). Each level adds modelling power; each costs more per solve.]
LP is a special case of SOCP (flat cones); SOCP is a special case of SDP (cone lifted to matrix inequality). The reverse inclusions do not hold.

One-liner

SOCP optimises a linear objective over cone constraints, subsumes LP and convex QP, and finds the global optimum in polynomial time — it is the natural solver for portfolio risk bounds, power flow relaxations, and any problem with Euclidean-norm constraints.


Related concepts