The Anthropic vs. Department of Defense clash is really confusing me. It's not that I find it shocking that the Trump administration is doing something that oversteps norms - that's expected. What I find confusing is that "pro-business" tech commenters I have some respect for are siding with the DoD far more forcefully than I would ever expect: specifically, Ben Thompson of Stratechery and Noah Smith of Noahpinion. I don't have a problem with disagreeing with either of them, but the force with which they're making their arguments makes me wonder if I'm missing something.
In short, Anthropic wants to put restrictions on how their models are used, and the DoD doesn't want any restrictions. It seems pretty clear to me that, as a private company, Anthropic should be allowed to put whatever restrictions they want in their contracts. If the government or any other potential customer finds a contract too restrictive, they can negotiate or take their business elsewhere.
I understand on some level that the DoD might not want a software vendor telling them what they can or can't do with software they are paying for. Fine! I can understand why a military would want the freedom to make its own decisions about tools it pays for. The response of threatening to classify the company as a supply chain risk is what crosses a line for me. I cannot in any way wrap my head around an argument that a company is a supply chain risk because it is too restrictive about how its AI can be used. For example, I believe the "supply chain risk" classification means that if Microsoft or any other company has a product that uses Anthropic's models to do something mundane like suggesting changes to your email, that company now faces a much higher burden when selling the software to certain parts of the DoD. Where is the line that leads from "the government can't kill people with this AI without human oversight" to "using this AI model for spell-correct would be a national security risk"??? I don't think this is a straw man or an over-simplification. I know of one specific company that sells software to the government and now has guidance to avoid using Anthropic models in any customer-facing functionality.
Again, the idea that the Trump administration is throwing its weight around and trying to abuse power isn't new. What throws me off about this situation is the responses from people I respect. Thompson and Smith both argue that AI is so important and potentially powerful that the government, not private companies, should be making these decisions. I could maybe understand the argument if the government were pressuring Anthropic to put more guardrails on their models, not fewer! The reality is that Hegseth and Trump don't seem to have any interest in making AI safer. As far as I know, none of their statements about Anthropic suggest they want to make AI more safe - only that they want more control over it. It seems to me that Thompson and Smith are projecting their own views onto the situation, and giving the administration coverage to be thugs to a private company.
I'm also frustrated by the dismissal of the "thuggish" part of this situation. Smith says in a comment:
I agree, the Trump administration is thuggish and lawless.
But the deeper truth -- that nation-states will never surrender their monopoly on the use of force -- is even more important.
Thompson said something similar on Twitter (sorry-not-sorry, I'm dead-naming two organizations today):
I wasn’t making a normative argument. Of course I think this is bad. I was pointing out what will inevitably happen with AI in reality
I'm frustrated that they're so quick to say "yeah, this is bad" in passing, but spend entire articles explaining why it's "inevitable". It's especially frustrating because the reasoning for why this is "inevitable" hinges on arguments that the administration isn't even trying to make. Maybe it wouldn't be inevitable if private companies trying to be cautious had the backing of public support. I know a couple of blog posts aren't going to be the deciding factor in public support for this, but they were certainly enough to make me question whether I was missing something big. I don't think I am. I think sometimes smart people are just wrong.
I deleted all my ChatGPT data and replaced the app on all my devices with Claude. This wasn't a particularly bold move, since Claude is a better model in a lot of ways, but I for one would rather be a small part of the crowd pushing back against government overreach than one making up arguments that help normalize the oversteps.