Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI “distillation attacks,” after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting “industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models.” Distillation in the AI world refers to when less capable…

Read More

Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move

Google caused controversy among some developers this weekend and today, Monday, February 23rd, after restricting their usage of its new Antigravity “vibe coding” platform, alleging “malicious usage.” Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw…

Read More

Anthropic’s Claude Code Security is available now after finding 500+ vulnerabilities: how security leaders should respond

Anthropic pointed its most advanced AI model, Claude Opus 4.6, at production open-source codebases and found more than 500 high-severity vulnerabilities that had survived decades of expert review and millions of hours of fuzzing, with each candidate vetted through internal and external security review before disclosure. Fifteen days later, the…

Read More