Claude Opus 4.7 is here — what's new and how to use it
Anthropic just shipped Opus 4.7 with 1M context, faster output, and an expanded tool-use spec. Here's the practical breakdown.
Why it matters
Opus 4.7 ships today: a 1M-token context window, noticeably faster long responses, tool-use improvements, and the same pricing as 4.6. If you're using Claude Code, you already have access; just run `/model`.
Anthropic dropped Claude Opus 4.7 this morning. Here's what it actually means if you use Claude every day.
The headline features
- 1M token context window (up from 200K). That's roughly a medium-sized codebase in a single prompt.
- Faster output, particularly on long generations. Anthropic claims 1.6× throughput; we've seen closer to 1.3× in practice.
- Improved tool use — fewer false starts on complex multi-step tasks.
- Same pricing as Opus 4.6.
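To get a feel for what "a medium-sized codebase in a single prompt" means, here's a back-of-envelope sketch. It assumes the common ~4 characters-per-token heuristic for English-like text and code; the real tokenizer count will differ, so treat the numbers as rough.

```python
import os

# Rough estimate: does a codebase fit in a 1M-token window?
# Assumes ~4 characters per token (a common heuristic, not exact).
CHARS_PER_TOKEN = 4
CONTEXT_TOKENS = 1_000_000

def estimate_tokens(root: str, exts=(".py", ".md", ".txt")) -> int:
    """Sum file sizes under root and convert characters to estimated tokens."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                total_chars += os.path.getsize(os.path.join(dirpath, name))
    return total_chars // CHARS_PER_TOKEN

# Example: 3 MB of source text ≈ 750K estimated tokens, inside the window.
print(3_000_000 // CHARS_PER_TOKEN <= CONTEXT_TOKENS)  # True
```

By this heuristic, anything under roughly 4 MB of source text should fit, which covers a lot of real-world repos.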
What's not new
- The knowledge cutoff (still January 2026).
- The character and personality. This is the same Claude you've been talking to, just a bit sharper.
How to use it today
If you're on Claude.ai, it's the new default. In Claude Code, run `/model` and select `opus-4-7`. Cursor users: update to the latest version and pick it in settings.
Why the 1M context matters for non-developers
Large context isn't just "more text at once" — it changes what's practical. You can now:
- Paste your entire novel draft for feedback
- Dump a full Notion workspace export and ask questions across it
- Feed it three long PDFs at once without summarizing first
The catch with long contexts
Long context doesn't mean flawless long context. Retrieval quality still degrades past ~400K tokens. Rule of thumb: use long context when you need it, not as a flex.
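That rule of thumb is easy to automate. The sketch below estimates token count from character length (same rough ~4 chars/token heuristic as above) and flags input that crosses the ~400K soft ceiling; both thresholds are taken from this post, not from any official API limit.

```python
# Guard for the soft ceiling above: estimate tokens from character count
# (~4 chars/token, a rough heuristic) and flag prompts that cross the
# ~400K-token zone where retrieval quality reportedly starts to slip.
CHARS_PER_TOKEN = 4
SWEET_SPOT_TOKENS = 400_000   # soft ceiling from this post, not a hard limit
HARD_LIMIT_TOKENS = 1_000_000

def context_check(text: str) -> str:
    tokens = len(text) // CHARS_PER_TOKEN
    if tokens > HARD_LIMIT_TOKENS:
        return "too long: split the input"
    if tokens > SWEET_SPOT_TOKENS:
        return "fits, but expect degraded retrieval"
    return "comfortable"

print(context_check("x" * 1_000_000))  # 250K estimated tokens → "comfortable"
```

A check like this is cheap to run before every large paste, and it's the difference between "long context because I need it" and "long context as a flex."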
Our take
This is the "boring quarterly upgrade" that every capable lab needs to ship. Nothing revolutionary, but many of the edge cases you've been hitting with 4.6 are probably fixed. Worth the switch.
Source: anthropic.com