AI Daily Digest — April 20, 2026 ⏱️ 8 min read
Today’s digest: Anthropic dropped Claude Opus 4.7 last Thursday and it’s genuinely impressive — high-res vision, a new “xhigh” effort level, and coding benchmarks that beat GPT-5.4 and Gemini 3.1 Pro. Meanwhile, Meta finally shipped its first post-Wang model and Q1 funding numbers are absolutely unhinged.
🔥 Top Stories
Claude Opus 4.7 is here — and it sees better than you do
Anthropic released Opus 4.7 on April 16 with 3x the visual resolution of previous Claude models (3.75MP), a new “xhigh” reasoning effort level, and task budgets for agentic loops. Coding benchmarks top GPT-5.4 and Gemini 3.1 Pro across the board — though Anthropic openly admits it still trails its own unreleased Mythos model. Pricing holds at $5/$25 per million input/output tokens, but a new tokenizer means your bills might creep up 10-35%. About time they shipped high-res vision. Read more →
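Flat per-token prices plus a denser tokenizer still means a bigger bill. Here's a back-of-the-envelope sketch of the effect, using the article's $5/$25-per-million prices and its 10-35% token-inflation range; the traffic mix is a made-up example workload, not real usage data:

```python
# Rough estimate of how tokenizer inflation changes a monthly API bill.
# Prices ($5 input / $25 output per million tokens) and the 10-35% range
# come from the article; the 200M/50M token workload is hypothetical.

def monthly_cost(input_tokens: int, output_tokens: int,
                 in_price: float = 5.0, out_price: float = 25.0) -> float:
    """Dollar cost for a month of traffic at per-million-token prices."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

base = monthly_cost(200_000_000, 50_000_000)  # hypothetical current workload
for inflation in (1.10, 1.35):                # low and high ends of the range
    new = monthly_cost(int(200_000_000 * inflation),
                       int(50_000_000 * inflation))
    print(f"+{inflation - 1:.0%} tokens -> ${new:,.0f}/mo (was ${base:,.0f})")
```

Since both input and output scale together here, the bill scales linearly with token inflation; your real mix of prompts and completions will shift the numbers.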
Meta debuts Muse Spark — first model from the Alexandr Wang era
Meta’s Superintelligence Labs shipped Muse Spark, a natively multimodal reasoning model with tool-use, visual chain of thought, and multi-agent orchestration. It doesn’t beat frontier models outright, but it’s competitive — and it’s already powering Meta AI across WhatsApp, Instagram, and Facebook. The real question: can Meta actually monetize this, or is it just an expensive moat for ad targeting? Read more →
EU locks in AI hiring bias audit rules — 105 days to comply
If you’re building anything that touches hiring decisions in the EU, pay attention. Regulators confirmed the exact audit scope, documentation requirements, and certified auditor process for AI-powered recruitment tools. The compliance clock is ticking and certified auditors are already in short supply. Read more →
OpenAI launches GPT-5.4-Cyber for security teams
OpenAI carved out a dedicated cybersecurity variant of GPT-5.4 with expanded access for security teams. Codex Security has already contributed to 3,000+ critical vulnerability fixes. Meanwhile, Anthropic’s Mythos has found “thousands” of OS and browser vulnerabilities. The AI security arms race is very real. Read more →
Gemma 4 drops — Google’s most capable open model yet
Google released Gemma 4 as the foundation for next-gen Gemini Nano, meaning code you write today for Gemma 4 works on-device later this year. If you’re evaluating open models for production, this changes the calculus — especially compared to API-based alternatives. Read more →
🛠️ New Tools & Releases
| Tool | What’s New | Link |
|---|---|---|
| Cursor 2.0 | Up to 8 parallel AI agents working on different sections of your codebase simultaneously | cursor.com |
| Claude Code Routines | Automate and schedule coding tasks that run without active sessions — see our comparison | anthropic.com |
| GLM-5.1 | Zhipu AI’s 744B MoE model (40B active), 200K context, MIT license. Serious open-source contender | Details |
| Gemma 4 | Google’s best open model, foundation for on-device Gemini Nano 4 | blog.google |
| MCP | Anthropic’s Model Context Protocol crossed 97M installs — it’s infrastructure now, not experimental | anthropic.com |
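On the GLM-5.1 row above: for a mixture-of-experts model, the "40B active" figure matters more than the "744B total" for per-token inference cost, because only the routed experts fire on each token. A quick sketch, using the common rough rule of ~2 FLOPs per active parameter per generated token (an approximation, not a vendor-published figure):

```python
# Why active parameters, not total, drive MoE inference compute.
# The ~2 FLOPs-per-parameter-per-token rule is a standard rough estimate;
# the 744B total / 40B active figures are from the table above.

def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs per generated token."""
    return 2 * active_params

dense_equiv = flops_per_token(744e9)  # if every parameter fired per token
moe_actual = flops_per_token(40e9)    # only the routed experts fire
print(f"MoE uses ~{moe_actual / dense_equiv:.1%} "
      f"of dense-equivalent compute per token")
```

The flip side: all 744B parameters still have to sit in (possibly sharded) memory, so MoE trades compute for memory footprint rather than shrinking both.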
💰 Funding & Business
- Q1 2026 broke everything: $300B poured into 6,000 startups globally, with AI absorbing 81% ($242B). These numbers are surreal.
- OpenAI closed $122B at an $852B valuation — the largest private financing deal in Silicon Valley history.
- Anthropic flipped from money-loser to $30B+ in annualized revenue in months. That’s not a typo.
- Eclipse Ventures raised $1.3B across two funds targeting physical AI, robotics, and manufacturing startups. Humanoid robotics is 2026’s breakout investment category.
📊 What Developers Are Discussing
- AI can’t read clocks: Even GPT-5.4 only hits 50% accuracy on analog clock-reading tasks. Models crush benchmarks but trip over things a 6-year-old can do. HN is having a field day.
- 4chan discovered chain-of-thought first? New research suggests AI Dungeon players on 4chan stumbled onto step-by-step reasoning prompts over a year before Google’s 2022 chain-of-thought paper. The discourse is predictably chaotic.
- GitHub Copilot tightens limits: Pro trials paused, usage caps tightened. The free tier squeeze continues. If you’re comparing alternatives, we broke down Cursor vs Copilot recently.
- 84% of devs now use or plan to use AI coding tools per the latest Stack Overflow survey. The holdouts are a shrinking minority.
- Meta’s tribal knowledge mapping: Meta built a swarm of 50+ AI agents to document their internal codebases, going from 5% to 100% coverage across 4,100+ files. Smart approach to a real problem.
📝 Worth Reading
- Human scientists still trounce AI agents on complex tasks — Nature study showing the gap between benchmarks and real research is wider than the hype suggests.
- How Meta used AI to map tribal knowledge in data pipelines — Practical case study on using agent swarms for documentation at scale.
- We need to re-learn what AI agent dev tools are in 2026 — n8n’s take on how the agent tooling landscape shifted under everyone’s feet.
- Stanford’s AI Index 2026 — The annual state-of-AI report with the charts that actually matter.
- MIT Tech Review: Understand AI in 2026 through charts — Companion piece to Stanford’s index, more accessible for non-researchers.