Claude Security Is Now in Public Beta: What Anthropic’s Enterprise Scanning Tool Actually Does

BLOOMIE
POWERED BY NEROVA

Anthropic has moved Claude Security into public beta, and that matters because the company is no longer talking about cybersecurity as a future AI use case. It is shipping a product aimed directly at one of the most urgent enterprise problems in software security: finding real vulnerabilities fast enough, and turning them into fixes before they become incidents.

On April 30, 2026, Anthropic announced that Claude Security was available in public beta for Claude Enterprise customers. The company describes it as a way to scan code for vulnerabilities and generate proposed fixes using Claude Opus 4.7, either directly on the Claude platform or through partner products and services.

That positioning is important. Anthropic is not just selling a model here. It is selling a workflow: repository selection, vulnerability scanning, severity assessment, patch guidance, triage, export, and integration into the systems security teams already use.

What Claude Security is

Claude Security is Anthropic’s repository scanning and vulnerability review product for enterprise software teams. It was previously known as Claude Code Security, but the public beta marks a broader and more productized release.

The core promise is straightforward: select a repository, a branch, or even a specific directory, run a scan, and Claude will analyze the codebase for vulnerabilities. It then returns:

  • detailed findings
  • confidence ratings
  • severity and likely impact
  • reproduction guidance
  • instructions for a targeted patch

Anthropic’s argument is that this works differently from older pattern-matching tools. Instead of only looking for known signatures, Claude Security is meant to reason across files and modules, trace data flows, and understand how code behaves in context.

That is the real appeal of using a frontier model for security review. Many important bugs are not obvious single-file mistakes. They emerge from how systems interact.

What changed in the public beta release

The public beta is not Anthropic’s first attempt at this product. Claude Security had already been used in a limited research preview by hundreds of organizations. What changed with the broader release is that Anthropic packaged the product around real operational needs instead of just model capability claims.

According to Anthropic, the public beta adds several practical capabilities that matter for security teams:

  • scheduled scans for ongoing coverage instead of one-off reviews
  • targeted scans for a repository, directory, or branch
  • improved tracking of triaged findings
  • easier exports into existing audit and tracking systems
  • webhook delivery into tools such as Slack and Jira
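The webhook delivery is a good illustration of how findings could flow into existing tooling. Anthropic has not published a payload schema, so the sketch below invents one: it verifies an HMAC signature on the raw request body (a common webhook hardening pattern, assumed here rather than documented Claude Security behavior) and routes a hypothetical finding to Jira or Slack based on severity.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: field names ("severity", "repo") and the HMAC
# signing scheme are assumptions, not a published Claude Security schema.
SHARED_SECRET = b"replace-with-your-webhook-secret"

def verify_signature(body: bytes, signature_hex: str,
                     secret: bytes = SHARED_SECRET) -> bool:
    """Check an HMAC-SHA256 signature header against the raw request body."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def route_finding(payload: dict) -> str:
    """Decide where a finding goes based on its (assumed) severity field."""
    severity = payload.get("severity", "low").lower()
    if severity in ("critical", "high"):
        return "jira"   # urgent: open a tracked remediation ticket
    return "slack"      # lower severity: post to a review channel

# Simulate an incoming delivery.
body = json.dumps({"repo": "payments-api", "severity": "high"}).encode()
sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
assert verify_signature(body, sig)
print(route_finding(json.loads(body)))  # -> jira
```

The point of the sketch is the shape of the integration, not the specifics: signed delivery in, severity-aware routing out, into systems the team already watches.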

That feature set is a signal that Anthropic understands how security teams actually work. Detection alone is not enough. Findings have to move into existing triage, remediation, and audit processes.

How Claude Security works in practice

Anthropic says Claude Security can be accessed directly from the Claude interface, where an enterprise user selects a repository or narrows the scope to a specific part of the codebase and then launches a scan.

During that process, Claude is meant to behave more like a researcher than a linting rule. It reads source code, reasons about how components connect, and looks for meaningful vulnerabilities rather than just superficial matches.

Once the scan finishes, teams get an explanation of what Claude found and why it matters. That includes:

  • whether Claude believes the issue is real
  • how severe it appears to be
  • what the practical impact may be
  • how the problem can be reproduced
  • how a targeted fix should be approached
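A useful mental model is to treat each finding as a record that teams filter by confidence and rank by severity before acting. The structure below is a hypothetical sketch of that triage step; the field names and thresholds are assumptions, since the real output format has not been published.

```python
from dataclasses import dataclass

# Hypothetical finding record mirroring the categories above (severity,
# confidence, reproduction, fix guidance); names are illustrative only.
@dataclass
class Finding:
    title: str
    severity: str      # e.g. "critical", "high", "medium", "low"
    confidence: float  # 0.0 to 1.0
    reproduction: str = ""
    fix_guidance: str = ""

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings: list[Finding],
               min_confidence: float = 0.7) -> list[Finding]:
    """Drop low-confidence findings, then sort most severe (and most
    confident) first so the triage queue starts with actionable work."""
    actionable = [f for f in findings if f.confidence >= min_confidence]
    return sorted(actionable,
                  key=lambda f: (SEVERITY_RANK.get(f.severity, 4),
                                 -f.confidence))

queue = prioritize([
    Finding("SQL injection in search endpoint", "critical", 0.95),
    Finding("Verbose error page leaks stack trace", "low", 0.90),
    Finding("Possible SSRF in webhook fetcher", "high", 0.55),  # filtered out
])
print([f.title for f in queue])
```

Confidence ratings only earn their keep if teams can use them as a cut line like this; otherwise they are just another column in the report.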

Anthropic also ties this output into Claude Code on the Web, which means a team can move from detection to patch creation in the same broader product ecosystem.

That end-to-end flow is probably the product’s biggest advantage. Security teams do not just need another list of findings. They need a faster route from scan to fix.

Why this release matters now

The timing is not accidental. Anthropic is explicitly arguing that AI is shrinking the gap between vulnerability discovery and exploitation. In other words, if stronger models make it easier to find bugs, defenders need access to equally strong systems on their side.

This is where Claude Security fits. Anthropic is framing Opus 4.7 as one of the strongest generally available models for identifying and patching software vulnerabilities, especially the context-heavy issues that simpler tools may miss.

For enterprise buyers, that changes the conversation. The old question was whether generative AI could help security teams write scripts or summarize logs. The new question is whether frontier models can become part of the actual vulnerability management workflow.

Claude Security suggests the answer is increasingly yes.

Where Claude Security fits in the enterprise stack

Most companies are not going to replace their entire AppSec stack with one AI product, and Anthropic is not pretending otherwise. The company is clearly trying to fit into the environments security teams already run.

Anthropic says Opus 4.7 is being integrated through technology partners including CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, TrendAI, and Wiz. It also highlights services partners such as Accenture, BCG, Deloitte, Infosys, and PwC.

That matters for two reasons. First, it gives Claude Security more distribution inside real enterprise buying paths. Second, it reduces friction for teams that do not want to adopt a standalone workflow from scratch.

In practical terms, Claude Security looks less like a point feature and more like a new AI analysis layer that can sit across direct Anthropic workflows and established security platforms.

What security leaders should like—and what they should still watch

The strongest part of Anthropic’s positioning is its focus on signal quality. Security teams do not need larger piles of noise. They need higher-confidence findings they can act on quickly. Anthropic says Claude Security uses a multi-stage validation pipeline and attaches confidence ratings to each result, which is the right design direction if the product is going to earn trust.

It is also encouraging that Anthropic emphasizes speed from scan to fix, not just scan to finding. That is the metric that maps to operational reality.

Still, enterprises should stay disciplined. AI-assisted vulnerability analysis can be powerful without being infallible. Teams should expect false positives, incomplete reasoning on edge cases, and patch suggestions that still require human review. High-severity remediation should remain inside governed engineering and security workflows, not be blindly auto-applied.

The right deployment model is augmentation first: use Claude Security to help teams prioritize, investigate, and remediate faster, while keeping human ownership over production decisions.
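One way to make "augmentation first" concrete is a routing policy in which every finding terminates in a human-owned step and nothing is auto-applied to production. The step names and threshold below are illustrative assumptions, not product behavior.

```python
# Hypothetical augmentation-first gate: AI output accelerates the queue,
# but every branch ends at a human decision point, never at deployment.
def next_step(severity: str, confidence: float) -> str:
    """Map a finding to a human-owned workflow step."""
    if severity in ("critical", "high"):
        return "page-security-oncall"      # urgent human triage
    if confidence >= 0.8:
        return "open-pr-for-human-review"  # suggested patch, still reviewed
    return "backlog-for-investigation"     # low confidence: investigate first

print(next_step("critical", 0.95))  # -> page-security-oncall
```

The design choice worth noting is the absence of an "auto-apply" branch: even a high-confidence, low-severity patch suggestion becomes a reviewed pull request, which keeps ownership where the article argues it belongs.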

The practical takeaway

Claude Security is one of the more important enterprise security launches in AI this year because it turns a broad promise into a more concrete product: scan code, identify vulnerabilities, explain risk, and help generate fixes.

That is a much more useful proposition than generic “AI for security” marketing. It is also more commercially meaningful for companies like Nerova and their customers, because it points toward a future where AI agents do not just automate knowledge work. They help defend the software systems those businesses actually run.

For enterprise teams, the near-term move is clear: evaluate Claude Security as a workflow accelerator for AppSec and product security, not as a replacement for engineering judgment. If it reliably shortens the path from finding to fix, it earns a real place in the stack.

And that is the deeper signal in this release: security is becoming one of the first places where AI products have to prove not just intelligence, but operational usefulness under real constraints.

Frequently Asked Questions

Does Nerova need a local office to help businesses in this area?

No. Nerova serves businesses through cloud-based AI agents, chatbots, audits, and workflow automation while keeping local claims honest and focused on business needs.

What local workflows are usually the best fit?

The best fit is usually a specific workflow such as lead intake, appointment questions, customer support, sales follow-up, internal knowledge retrieval, or operations handoffs.

How should a business choose the right AI service?

Start with the workflow that creates the most delay or missed revenue, then choose a chatbot, single agent, AI team, or audit based on how many steps and systems are involved.

Nerova AI agents and AI teams

Nerova helps companies design and deploy AI agents and AI teams for real business workflows.

See how Nerova builds AI agents
Ask Nerova about this article