AI Roundup: April 6, 2026
1. Drift Protocol Confirms $270M Exploit Was a Six-Month North Korean Intelligence Operation
Drift Protocol. Drift Protocol disclosed that its $270 million exploit was a sophisticated six-month intelligence operation by North Korean state-linked group UNC4736 (AppleJeus/Citrine Sleet). Attackers posed as a quant trading firm, held in-person meetings, deposited over $1 million to build trust, then compromised multisig signers via a malicious TestFlight app and a VSCode/Cursor vulnerability to drain the Solana DeFi protocol. The disclosure is the most detailed public account of a state-sponsored AI-adjacent attack chain targeting crypto infrastructure. Source
2. Japan Pushes Physical AI From Pilot Projects Into Real-World Deployment
Japan. Japan is transitioning physical AI and robotics from experimental pilots into production environments, driven by acute labor shortages that make the country a natural proving ground for autonomous systems. Japan’s Ministry of Economy, Trade and Industry aims to build a domestic physical AI sector and capture 30% of the global robotics market by 2040. Venture firms including Salesforce Ventures, Global Brain, and Woven Capital are backing the push. Unlike Western markets where automation often meets workforce resistance, Japan’s demographic crisis means robots are filling roles that go chronically unfilled rather than displacing existing workers. Source
3. Ledger CTO Warns AI Is Making Crypto Hacks Cheaper and Easier
Ledger. Ledger CTO Charles Guillemet warned that AI is fundamentally shifting the economics of cyberattacks on crypto platforms. Tasks that once took skilled security researchers months of work, such as reverse engineering software, chaining exploit sequences, and crafting social engineering campaigns, can now be accomplished in seconds with well-crafted prompts. AI-assisted hacks and exploits contributed to $1.4 billion in crypto losses over the past year, and Guillemet argues the trend will accelerate as models become more capable of autonomous vulnerability discovery and exploitation. Source
4. Critical OpenClaw Privilege Escalation Vulnerability Affects 135K+ Instances
OpenClaw. A critical privilege escalation vulnerability (CVE-2026-33579, CVSS 8.6) was disclosed in OpenClaw, the open-source AI assistant project with 210,000+ GitHub stars. The /pair approve command path fails to enforce security scopes, allowing any user with pairing access to gain full admin control. An estimated 63% of the 135,000+ publicly exposed instances run without authentication, leaving them vulnerable. A patch is available in version 2026.3.28. Source
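The flaw described is a classic missing-authorization check: a privileged command path that never verifies the caller's scopes. A minimal sketch of that pattern and its fix, assuming a scope-based session model (the names `Session`, `ADMIN_SCOPE`, and `handle_pair_approve` are illustrative, not OpenClaw's actual API):

```python
# Hypothetical illustration of the bug class behind CVE-2026-33579:
# a pairing-approval handler that performs a privileged action.
# All names here are assumptions for illustration, not OpenClaw internals.
from dataclasses import dataclass, field

ADMIN_SCOPE = "admin"

@dataclass
class Session:
    user: str
    scopes: set = field(default_factory=set)

def handle_pair_approve(session: Session, device_id: str) -> str:
    # The patched behavior: refuse unless the caller holds the admin scope.
    # The vulnerable version simply omitted this check, so any user with
    # pairing access could approve devices and escalate to full control.
    if ADMIN_SCOPE not in session.scopes:
        raise PermissionError("/pair approve requires the admin scope")
    return f"device {device_id} approved by {session.user}"
```

The broader lesson from the 63% unauthenticated-instance figure is that scope checks only help if authentication is enabled at all; an exposed instance with no auth layer has nothing to attach scopes to.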
5. Goldman Sachs Forecasts Sharp Semiconductor Revenue Surge Driven by AI Demand
Goldman Sachs. Goldman Sachs projected that AI-led demand will drive a significant surge in global semiconductor revenues through 2026, with AI-related hardware revenues potentially reaching over $700 billion by Q4 2026. Data center construction jobs have increased by 212,000 since 2022, reflecting the massive infrastructure buildout. The semiconductor sector is now the primary beneficiary of AI capital expenditure as hyperscalers and sovereign AI initiatives compete for chip capacity. Source
6. Faraday Future Aegis Quadruped Robot Gets FCC Certification for US Commercial Sale
Faraday Future. The FX Aegis quadruped robot passed full FCC compliance certification for commercial distribution in the United States. The AI-powered robot supports Wi-Fi and 5G connectivity, produces 48 Nm of peak joint torque, can clear 13-inch obstacles, and can climb 40-degree slopes. Pricing starts at $2,490, targeting hospitality, retail, and security deployments. Faraday Future shipped 20+ units in March and is targeting 200 in its first delivery season. Source
7. MaxToki: AI Foundation Model Predicts Cellular Aging Trajectories
Research. Researchers published MaxToki, a temporal AI foundation model trained on nearly one trillion gene tokens that predicts cell state trajectories across human aging. Unlike static snapshot models, MaxToki predicted novel pro-aging drivers that were experimentally validated to cause cardiac dysfunction in mice within six weeks. It also inferred age acceleration in smokers’ lung cells (5 years), pulmonary fibrosis patients (15 years), and Alzheimer’s microglia (3 years), opening new pathways for targeted aging interventions. Source
8. New York Times Drops Freelancer Over AI-Assisted Plagiarism
Industry. The New York Times severed ties with freelance writer Alex Preston after discovering his book review contained language plagiarized from a Guardian review, introduced via an AI drafting tool. The incident became a widely discussed case study for newsrooms grappling with AI tool use in journalism. Traditional plagiarism detection proved insufficient to catch AI-introduced unattributed material, raising questions about editorial oversight processes in an era where AI writing assistance is becoming commonplace. Source
9. China’s Government Push to Integrate AI Across K-12 Education
China. ChinaTalk published an in-depth analysis of China’s Ministry of Education push to integrate AI across K-12 education, with full curriculum rollout planned by 2026. The initiative aims to reduce teacher workloads, improve rural school quality, and help students with disabilities. However, it faces challenges including a 30% screen time cap mandated by a 2018 directive and concerns about student dependency. The experiment represents the most ambitious national-scale attempt to embed AI in primary and secondary education. Source
10. Gary Marcus Investigates MEDVi, the $1.8B Two-Employee AI Telehealth Company
MEDVi. Gary Marcus published a detailed investigation into MEDVi, the AI-powered telehealth startup profiled by the New York Times as projecting $1.8 billion in 2026 revenue with only two employees. The investigation revealed that the FDA sent MEDVi a warning letter in February 2026 for misbranding compounded GLP-1 drugs, that over 5,000 ads ran under fictitious personas with fabricated medical titles, and that its clinician network suffered a 1.6 million patient data breach in January 2026. Source
11. Microsoft’s Copilot Terms of Service Label It “For Entertainment Purposes Only”
Microsoft. TechCrunch highlighted that Microsoft’s terms of service for Copilot include a disclaimer stating the product is “for entertainment purposes only” and should not be relied upon for important advice. The finding underscores a broader pattern: while marketing materials emphasize productivity and enterprise readiness, legal teams are hedging with disclaimers that could limit liability. The gap between how AI tools are sold and how they are legally disclaimed is becoming a governance and trust issue for enterprise adopters. Source