
Industry Signals

Infrastructure · Compute Broadcom · Securities and Exchange Commission (SEC) Filing / LLM Stats

Broadcom Expands Anthropic Deal for ~3.5 GW Compute Capacity

Broadcom disclosed an expanded agreement with Anthropic to deliver approximately 3.5 gigawatts (GW) of computing capacity, while also committing to produce future versions of Google's Tensor Processing Units (TPUs). The disclosure emerged from an SEC filing and represents one of the largest dedicated artificial intelligence (AI) hardware commitments ever made public. The deal underscores the deepening integration between chip manufacturers and frontier AI labs, with Broadcom now serving as a critical link in both the Google TPU and Anthropic compute supply chains.

Why it matters: Frontier model training is becoming infrastructure-class investment — comparable to building power grids. Practitioners should factor long-term compute availability and concentration risk into AI roadmaps; the Broadcom-Anthropic-Google dependency chain is now public and material for anyone building on Claude or TPU-backed services.

→ LLM Stats / Techmeme
Infrastructure · Investment FinancialContent / MarketMinute · Apr 2026

Hyperscalers Plan ~$700 Billion in Data Centre Spending in 2026

New forecasts show Amazon, Google, Meta, and Microsoft collectively planning nearly $700 billion in data centre investments in 2026 alone, almost double the roughly $363 billion spent in 2025. This spending supercycle is reshaping global markets for energy, semiconductors, and real estate. Global AI spending is projected to exceed $2 trillion across the year, driven by training runs, inference infrastructure, and the rapid build-out of dedicated AI compute zones worldwide.

Why it matters: The pace of infrastructure investment directly determines which AI capabilities will be accessible, at what cost, and where. Cost curves, availability zones, and pricing power all flow from these decisions — making this essential context for any organisation building a multi-year AI strategy.

→ MarketMinute — AI & Data Centre Infrastructure Supercycle
Geopolitics · Infrastructure Risk TechCrunch · Apr 2026

Iran Threatens Stargate AI Data Centres

Iran publicly threatened the Stargate AI data centre initiative — the large-scale U.S. public-private AI infrastructure project — adding a geopolitical dimension to the global race to build AI compute capacity. The threats signal that AI infrastructure is increasingly viewed as strategic national infrastructure by state actors worldwide, comparable to energy grids and financial clearing systems in terms of adversarial interest.

Why it matters: Geopolitical risk is now a material factor for organisations planning long-term AI compute strategies. Physical infrastructure concentration creates vulnerabilities that cloud and hybrid deployment architects need to account for when assessing resilience and redundancy.

→ TechCrunch — Iran Threatens Stargate AI Data Centers

Models & Tools

Model Release · Open Source Google DeepMind · Latent Space / LLM Stats

Google's Gemma 4 Crosses 2 Million Downloads

Google's open-source Gemma 4 model family crossed 2 million downloads just days after its April 2 launch. Gemma 4 was purpose-built for advanced reasoning and agentic workflows, offering an unusually high intelligence-per-parameter ratio across its model sizes. The milestone makes it one of the fastest-adopted open model releases to date, signalling strong developer appetite for capable, self-hostable alternatives to proprietary frontier models.

Why it matters: Rapid adoption of a high-capability open model accelerates access to fine-tunable, self-hostable alternatives — giving enterprise and research teams more options for cost-effective, privacy-preserving deployments without licensing friction. Gemma 4's agentic focus makes it directly relevant to teams building tool-calling pipelines.

→ Latent Space / LLM Stats
Model Release · Vision · Edge Meta AI · MarkTechPost

Meta Releases Efficient Universal Parameter Encoder (EUPE) — Compact Vision Encoder Rivalling Specialist Models

Meta AI released the Efficient Universal Parameter Encoder (EUPE), a family of compact vision encoders with fewer than 100 million parameters that rival the performance of much larger specialist models. The architecture is purpose-built for smartphones and resource-constrained devices, enabling powerful vision capabilities at the edge. EUPE targets the growing gap between cloud-scale multimodal models and on-device inference budgets.

Why it matters: Efficient vision models that match specialist-level performance reduce inference costs and open new on-device multimodal AI use cases — a key capability gap for developers building mobile or embedded AI applications where bandwidth and compute budgets are constrained.

→ MarkTechPost / LLM Stats

Safety & Security

AI Safety · Security Bloomberg / Frontier Model Forum · Apr 2026

OpenAI, Anthropic, and Google Collaborate on Adversarial Distillation Defence

OpenAI, Anthropic, and Google are sharing threat intelligence through the Frontier Model Forum (FMF) to detect and counter adversarial distillation attempts — systematic efforts by bad actors to extract proprietary model capabilities by querying frontier models at scale. This is a rare instance of direct security collaboration between competing AI labs, treating model capability theft as a shared threat comparable to financial fraud ring detection. The initiative focuses on identifying query patterns that signal distillation rather than legitimate use.

Why it matters: As frontier models become increasingly valuable assets, protecting against capability theft via distillation is an emerging security priority that any organisation deploying or building on top of large language models (LLMs) needs to understand — particularly teams using Application Programming Interface (API) access at scale, where query patterns may inadvertently resemble distillation.
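
To illustrate the kind of signal the labs are reportedly looking for, the toy heuristic below scores a batch of API queries for distillation-like structure: systematic extraction sweeps tend to combine high volume with unusually broad, evenly spread vocabulary, while organic use clusters around a few tasks. Every feature and threshold here is an assumption chosen for illustration; nothing below is drawn from the FMF programme itself.

```python
from collections import Counter

def distillation_score(queries, volume_threshold=1000):
    """Toy heuristic for distillation-like query batches.

    Distillation-style extraction tends to pair high query volume with
    broad, evenly spread topic coverage, whereas organic usage repeats
    a handful of task templates. Thresholds are illustrative only.
    """
    if len(queries) < volume_threshold:
        return 0.0  # too little volume to resemble a systematic sweep
    tokens = [t for q in queries for t in q.lower().split()]
    # Breadth: fraction of distinct tokens across the whole batch.
    breadth = len(set(tokens)) / max(len(tokens), 1)
    # Evenness: how little of the batch the 10 commonest tokens cover;
    # a flat distribution suggests a systematic capability sweep.
    counts = Counter(tokens)
    top_share = sum(c for _, c in counts.most_common(10)) / max(len(tokens), 1)
    evenness = 1.0 - top_share
    return round(breadth * evenness, 3)
```

A production detector would more plausibly work on prompt embeddings and request-rate patterns than on raw tokens, but the volume-plus-breadth intuition is the same.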

→ Bloomberg / LLM Stats

Policy & Regulation

Policy · U.S. Regulation KJK Law / Transparency Coalition · Apr 2026

U.S. Federal AI Framework Seeks State Law Preemption — Seven-Pillar National Policy

The White House released a National Policy Framework for Artificial Intelligence organised around seven pillars: protecting children, safeguarding communities, respecting intellectual property, preventing censorship, enabling innovation, developing an AI-ready workforce, and establishing federal preemption of state AI laws. Separately, Senator Marsha Blackburn released a draft of the TRUMP AMERICA AI Act. State-level momentum continues in parallel, with Georgia and Utah both advancing AI-related bills through their legislatures.

Why it matters: Federal preemption of state AI laws, if enacted, would create a single regulatory baseline across the U.S. — simplifying compliance for AI companies operating nationally but potentially limiting stronger state-level consumer protections. Teams with U.S. compliance exposure should track this closely through Q2 2026 as the legislative picture consolidates.

→ KJK Law — Federal AI Regulation Impact    → Transparency Coalition — Legislative Update

Term of the Day

Cognition & Psychology · New Term Shaw & Nave · 2026

Cognitive Surrender

The tendency to adopt AI-generated outputs with minimal scrutiny, overriding both intuition and deliberation. Unlike automation bias, which is task-specific, cognitive surrender is a generalised, passive deference to AI across domains. The empirical signature: users fail to detect AI errors at rates better than chance, meaning the override is automatic rather than a deliberate choice to trust.

Why it matters for practitioners: If your users surrender rather than augment, a confident-sounding AI error propagates undetected. The implication is a design mandate: workflows built around AI outputs need explicit verification steps baked in — not as friction, but as the mechanism that preserves human judgement quality.
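
One way to make that verification step structural rather than optional is to gate the AI draft behind an explicit check, as in this minimal Python sketch. The class and method names are hypothetical, not from Shaw & Nave:

```python
from dataclasses import dataclass, field

@dataclass
class GatedAnswer:
    """Hold an AI draft behind an explicit verification step, so passive
    acceptance is impossible by construction. Illustrative sketch only."""
    draft: str
    _verified: bool = field(default=False, repr=False)

    def verify(self, check) -> "GatedAnswer":
        # `check` is any callable that inspects the draft and returns bool:
        # a schema validator, a citation checker, or a human sign-off hook.
        if check(self.draft):
            self._verified = True
        return self

    @property
    def value(self) -> str:
        if not self._verified:
            raise PermissionError("answer not yet verified; review required")
        return self.draft
```

Because `.value` raises until `verify()` has run, a downstream consumer cannot silently accept the draft; the friction itself is the safeguard.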

Related: Automation Bias · Scaffolded AI Friction · Cognitive Agency Surrender

→ Shaw & Nave — OSF Preprint

Research Papers

Human-AI Interaction · Cognition Shaw & Nave · OSF Preprints · 2026

Thinking — Fast, Slow, and Artificial: The Rise of Cognitive Surrender

Shaw and Nave introduce Tri-System Theory, extending the classic dual-process account of reasoning (System 1: fast intuition; System 2: slow deliberation) by positing a System 3, artificial cognition. Their key prediction: cognitive surrender, defined as adopting AI outputs with minimal scrutiny, overriding both intuition and deliberation. Across three pre-registered experiments with 1,372 participants using an adapted Cognitive Reflection Test (CRT), accuracy rose significantly when the AI was correct (+25 percentage points) and fell when it erred (−15 points), the behavioural signature of surrender rather than augmentation.

Why it matters: Cognitive surrender is not laziness — it is a measurable, systematic pattern in how people interact with confident-sounding AI output. Design interventions that introduce deliberate friction — forcing explicit agreement rather than passive acceptance — may be necessary to preserve judgement quality.
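
The behavioural signature reported above is straightforward to compute from raw trial data. The sketch below assumes a tuple layout of `(user_correct, ai_correct, had_ai)` per trial and conditions user accuracy on whether an AI hint was present and correct; the field layout is illustrative, not the authors' actual schema.

```python
def surrender_signature(trials):
    """Accuracy shift, in percentage points, when the AI hint is correct
    versus when it errs, relative to a no-AI baseline.

    Each trial is a tuple (user_correct: bool, ai_correct: bool, had_ai: bool).
    """
    def acc(rows):
        rows = list(rows)
        return sum(r[0] for r in rows) / len(rows) if rows else 0.0

    baseline = acc(t for t in trials if not t[2])            # no AI hint
    with_good_ai = acc(t for t in trials if t[2] and t[1])   # correct hint
    with_bad_ai = acc(t for t in trials if t[2] and not t[1])  # wrong hint
    return {
        "lift_pp": round((with_good_ai - baseline) * 100, 1),
        "drop_pp": round((with_bad_ai - baseline) * 100, 1),
    }
```

A large positive lift paired with a large negative drop is the surrender pattern: users track the AI in both directions instead of filtering out its errors.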

→ Shaw & Nave — OSF Preprint
Human-Computer Interaction (HCI) · Epistemic Autonomy arXiv 2603.21735 · Mar 2026

Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction

This Human-Computer Interaction (HCI) paper analyses 1,223 papers on AI-human interaction from 2023 to early 2026, identifying a concerning structural trend the authors call "agentic takeover." Research defending human epistemic sovereignty peaked briefly in 2025 (19.1% of papers) before falling back in early 2026 (13.1%), displaced by a surge in work optimising autonomous machine agents (19.6%). Frictionless usability remained structurally dominant across the full period (67.3% of papers). The authors argue that "cognitive agency surrender" — a systemic erosion of users' active reasoning role — is being inadvertently accelerated by the HCI community's own design values, and propose scaffolded friction as a counter-intervention.

Why it matters: The research community building AI interfaces is optimising for frictionlessness — the same property that drives cognitive surrender. This paper makes the structural conflict explicit: usability and epistemic autonomy are in tension, and the field is currently resolving that tension in the wrong direction.

→ arXiv 2603.21735