Trump says stuff about AI

https://www.thetech.buzz/p/trump-s-plan-to-make-america-first-in-ai-defense?_bhlid=a0b667fb3390f05e81307ee1833a8c809a33dcb3

Foundational AI companies will also benefit. Anthropic just partnered with Palantir and Amazon to provide U.S. intelligence and defense agencies with access to its Claude AI models. This collaboration aims to operationalize Claude within Palantir’s platform, hosted on AWS, specifically targeting defense-accredited environments handling highly sensitive national security data.

A debate has been underway in Silicon Valley and D.C. regarding the ethics and implications of fully autonomous weapons. While some defense tech leaders like Shield AI’s Brandon Tseng express opposition, others like Anduril’s Palmer Luckey and investor Joe Lonsdale are open to their use to take on China and Russia. Trump’s election likely clears the way for the sector to go full steam ahead on Defense AI.


I’m with Tseng. @SimonBiggs let’s catch up on this. I am really keen to know where Anthropic stands on this per my email to you yesterday. I’ve been trawling the published stuff and want to know if I’ve missed something.

https://www.businesswire.com/news/home/20241107699415/en/Anthropic-and-Palantir-Partner-to-Bring-Claude-AI-Models-to-AWS-for-U.S.-Government-Intelligence-and-Defense-Operations

Palantir is proud to be the first industry partner to bring Claude models to classified environments.

“We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations. Access to Claude 3 and Claude 3.5 within Palantir AIP on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments,” said Kate Earle Jensen, Head of Sales and Partnerships, Anthropic.

My views on this stuff from a participant's PoV.

My view on this from a (critical) observer's PoV.

Something helpful on the topic:

https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=5HEqdHd9qozS9Piam

Thanks Simon. I’m actually less concerned about the relationship Anthropic has with companies in the business of enabling US military force than I am about the broader question of AI safety and the delegation of lethal decisions to AI actors. I’m not black-and-white on the latter; indeed, I have said as much.

That said, Anthropic’s opacity does concern me when it comes to establishing clearly defined principles, rather than just clearly defined levels of concern, about the deployment of AI in this context.

I’ve said it many times before: I want to know where all the major frontier model developers stand on the multipolar trap we all seem to be racing headlong towards, notwithstanding Anthropic’s safety-first approach to the deployment of AI in life-and-death applications, which I certainly do appreciate.

Clearly this is not an easy question on which to have unwavering principles. And I agree that the bagel approach to “layering safety” is a head-in-the-sand response that is often counterproductive; it’s certainly not helpful when it comes to resolving the problem. Indeed, from a game-theoretic point of view, I don’t think anyone has demonstrated a theoretical solution beyond the Nash equilibrium.
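
To make that game-theoretic point concrete, here is a minimal sketch of the multipolar trap as a two-player “race vs. restrain” game. The strategy names and payoff numbers are purely illustrative assumptions (not drawn from any real analysis); the code just brute-force checks pure-strategy Nash equilibria and shows that mutual racing is the only stable profile, even though mutual restraint is better for both.

```python
from itertools import product

# Hypothetical two-player arms-race payoffs, chosen only to illustrate
# the multipolar-trap structure (prisoner's-dilemma-shaped).
STRATEGIES = ["restrain", "race"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best collective outcome
    ("restrain", "race"):     (0, 4),  # unilateral restraint is exploited
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # mutual racing: worse for both, yet stable
}

def is_nash(row, col):
    """A profile is a pure-strategy Nash equilibrium if neither player
    can gain by unilaterally switching strategies."""
    row_payoff, col_payoff = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= row_payoff for alt in STRATEGIES)
    col_ok = all(payoffs[(row, alt)][1] <= col_payoff for alt in STRATEGIES)
    return row_ok and col_ok

for row, col in product(STRATEGIES, repeat=2):
    if is_nash(row, col):
        print(f"Nash equilibrium: ({row}, {col}), payoffs {payoffs[(row, col)]}")
# Prints only ('race', 'race'): individually rational, collectively worse.
```

That is the trap in miniature: escaping it takes coordination or enforcement from outside the game, which is exactly what I don’t see anyone demonstrating.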

The key challenge is whether we have the luxury of believing we can achieve an equilibrium before we reach it this time around.

What I would like to know is where Anthropic is on this, and on the question of AI safety in the context of what I have already written about Asimov’s three laws. What I’m finding frustrating is an apparent lack of anything that addresses these issues in any substantial way from any of the frontier model developers, or from the US or other governments.

Is there anything public you can point to that would help clarify this?