Amid a weekslong conflict with the Pentagon, resulting in a blacklist and a lawsuit, Anthropic is shaking up its C-suite and research initiatives. The company announced Wednesday that it’s launching a new internal think tank, called the Anthropic Institute, that combines three of Anthropic’s current research teams. It will focus on researching AI’s large-scale implications, such as “what happens to jobs and economies, whether AI makes us safer or introduces new dangers, how its values might shape ours, and whether we can retain control,” per the company.
The news comes with C-suite changes, too. Anthropic cofounder Jack Clark is moving into a new role leading the think tank. His new title will be head of public benefit, after more than five years as head of public policy. The public policy team — which tripled in size in 2025, per Anthropic — will now be led by Sarah Heck, who was formerly head of external affairs. Anthropic will also open its planned office in Washington, DC, and the public policy team will continue to focus on issues like national security, AI infrastructure, energy, and “democratic leadership in AI.”
Clark told The Verge that the Anthropic Institute’s debut has been in the works for a while, and that he’s been thinking about moving into a role like this since November. But the timing comes just days after Anthropic sued the US government over its designation as a supply-chain risk, which would bar its clients from using Anthropic’s tech in any of their own work with the Department of Defense. The suit alleges that the Trump administration illegally blacklisted the company for setting “red lines” on mass domestic surveillance and fully autonomous lethal weapons.
When asked about it, Clark said, “It’s never dull working in AI here at Anthropic — there’s always something going on … The pace of AI progress isn’t slowing itself down for external events, and neither are we.” Clark said the situation hasn’t “directly changed” the planned research agenda but that he felt it “has affirmed” Anthropic’s decision to release more information to the public. “What we’re experiencing with the last few weeks just sort of shows you how much hunger there is for a larger national conversation by the public about this technology,” he said.
The Anthropic Institute launches with about 30 people, including founding members Matt Botvinick, formerly of Google DeepMind; Anton Korinek, a professor on leave from the University of Virginia’s department of economics; and Zoe Hitzig, a researcher who left OpenAI after its decision to introduce ads within ChatGPT. The new think tank combines Anthropic’s societal impacts team, which studies AI’s effects on different areas of society; its frontier red team, which stress-tests AI systems for vulnerabilities and other issues; and its economic research team, which tracks AI’s implications for the economy and the labor market. The Anthropic Institute also plans to “incubate” new teams, such as a team led by Botvinick studying how AI will impact the legal system. Hitzig and Korinek will lead large economic research projects. Clark said he expects the think tank’s headcount to double every year for the foreseeable future.
There’s mounting pressure on high-valuation AI companies like Anthropic, which reportedly plans to IPO this year. Anthropic’s court filings revealed that the company has generated more than $5 billion in all-time commercial revenue and has spent $10 billion to date on model training and inference. The filings also say the company has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” seeking guidance and “in some cases, an understanding of their termination rights.” Anthropic said that, depending on exactly how the government’s prohibition is interpreted, “hundreds of millions of 2026 revenue is at risk” at a minimum; in the most severe case, it would be multiple billions.
Is Anthropic concerned about devoting more resources to long-term research when it’s very likely to lose some portion of its revenue in the short term? When The Verge asked Clark, he said he had “no concerns.”
“People tend to buy trust,” Clark said. “A lot of what we can produce are the sorts of research that help businesses trust us … Long-term, Anthropic has always viewed its investment in safety — and studying and reporting on the safety of its systems — as being not a cost center but a profit center.”
Clark also said he believes that powerful AI (essentially Anthropic’s own term for AGI, or artificial general intelligence) will arrive by the end of this year or early 2027, and that he decided to change roles largely due to the “pace of AI progress.” He added that, looking back at his work last year, he spent more time on policy matters, like SB 53, than on AI R&D and other areas he wanted to give attention to. Anthropic said in a release that the Anthropic Institute is specifically dedicated to answering the “hardest questions posed by powerful AI.”
Of course, as The Verge wrote in December, a lot of tech companies are pro-transparency until it’s bad for business. So what happens if and when the Anthropic Institute’s research teams uncover results that make the company look bad?
Clark said that Anthropic’s cofounders have “similar values” about the importance of public disclosure, especially since the company is technically a public benefit corporation, meaning it has the ability to carry out objectives “not solely for fiduciary gain.” He added that in a conversation he had with CEO Dario Amodei last week, they aligned on the importance of transparency despite PR challenges that could come from it.
But the Anthropic Institute’s research could require significant compute at a time when companies are racing to prioritize commercial products. Clark said that outside the resources dedicated to frontier model pre-training, Anthropic allocates its compute week by week according to “what seems most important.” No precise portion has been set aside for the institute, but he doesn’t anticipate any conflicts.
The Anthropic Institute also plans to study people’s emotional dependence on AI, an intensifying problem that’s gained public awareness over the past year. Clark said that so far, Anthropic’s research teams have studied the types of conversations happening with Claude and measured the technology’s ability to persuade users or behave sycophantically, but they haven’t spent as much time talking to users about their individual experiences. He said the think tank plans to conduct large-scale social science research, including using Anthropic’s AI to conduct interviews with users.
“I think of this as: Social media had a huge effect on society, and it wasn’t just based on what was happening on the social media platforms. It was, ‘How was the use of social media changing people?’” Clark said. “We want to understand, ‘How does the use of AI change people?’”
