Opinion | Iran strikes are a wake-up call to regulate military AI
The United States and Israel are using artificial intelligence (AI) in their ongoing war on Iran. Even as the Pentagon blacklisted Anthropic over a dispute about wartime applications, The Washington Post reported that the US military used the company’s AI tool Claude to strike around 1,000 targets in the first 24 hours of the…
Claude assisted in war planning by optimising target selection, analysing intelligence data and generating precise location coordinates from satellite imagery.
The use of Claude is part of the Pentagon’s Maven Smart System. Built by Palantir, the system uses classified data from satellites, surveillance and other intelligence sources to provide real-time targeting options for the war against Iran.
The increasing use of AI shortens the “kill chain”, reducing the time between identifying a target and neutralising it. The result is decision compression: human actors increasingly defer to algorithmic recommendations rather than exercising independent judgment. In the absence of binding agreements on responsible military AI use, these risks will only grow.