Deadly Iran school strike casts shadow over Pentagon’s AI targeting push

By Vern Evans | March 24, 2026

KYIV, Ukraine — On the first day of the U.S.-Iran war, a Tomahawk cruise missile struck Shajareh Tayyebeh elementary school in Minab, southern Iran. At least 168 people were killed — more than 100 of them under the age of 12, according to UN and Iranian officials.

The school building sat fewer than 100 yards from a long-time Islamic Revolutionary Guard Corps naval installation and was previously located within the IRGC compound perimeter until a wall appeared between 2013 and 2016, according to an analysis of satellite imagery by Amnesty International.

By the time the U.S. and Israel launched their first strikes on Feb. 28, the school had been operating for several years. It was active on social media and had its own website, a Reuters investigation found.

So what went wrong?

“Was artificial intelligence, including the use of the Maven Smart System, used to identify the Shajareh Tayyebeh school as a target?” more than 120 House Democrats asked in a March 12 letter to the Pentagon, just days after 46 Senate Democrats sent a similar request demanding clarity on the deadly hit.

The Maven Smart System, a targeting and intelligence platform built by data analytics company Palantir Technologies under a $1.3 billion Pentagon contract, was designed to solve a problem that has grown exponentially in recent years: information overload. Its secret weapon is artificial intelligence.

Maven fuses satellite imagery, drone feeds, radar data and signals intelligence into a single interface, then classifies targets, recommends weapons systems and generates strike packages in near real time, compressing kill-chain reasoning and decision making into the fastest timelines ever seen on the battlefield.

The platform embeds Anthropic’s Claude AI model to semi-autonomously rank targets by strategic importance, drafting automated legal justifications for each strike along the way.

The software generated hundreds of strike coordinates in the first 24 hours of the Iran campaign, enabling the U.S. to hit more than 1,000 targets on the war’s opening day, according to The Washington Post.

After sources briefed on preliminary findings told CNN that U.S. Central Command had generated targeting coordinates from outdated Defense Intelligence Agency data that had not been updated to reflect the school’s presence, one question from lawmakers became central to the inquiries: “If so, did a human verify the accuracy of this target?”

They are still waiting for an official explanation.

Ukrainian drone operators who build and deploy semi-autonomous targeting systems on the front line told Military Times they recognized the likely culprit immediately.

Ihor Matviyuk, the director of Aero Center, a Ukrainian drone company that builds and deploys semi-autonomous drones on the front lines of the war with Russia, said he can imagine exactly how the failure happened.

Although he has no inside knowledge of the Minab strike specifically, he said earlier this month that it bears the hallmarks of a targeting failure — not an AI malfunction.

“It was almost definitely a strike on the [given] coordinates,” Matviyuk told Military Times. “The main problem was not the AI — it was how close the military object was to the school.”

Last week, former military officials speaking to Semafor confirmed Matviyuk’s early assessment: “Humans — not AI — are to blame” for the school strike, they said, pointing to stale human-curated data fed to the Pentagon’s Maven targeting platform.

Matviyuk recognized the pattern because he’s had to decide how much AI to use in his own semi-autonomous weapon systems again and again as drone warfare and software capabilities have rapidly evolved on Ukraine’s battlefield.

“Automatic targeting allows us to capture less than half of the targets, not more,” Matviyuk said. “Because they are all still camouflaged.”

Ukrainian soldiers train with drones at an undisclosed location in the Donetsk Oblast, Ukraine, September 2025. (Diego Herrera Carcedo/Anadolu via Getty Images)

The Defense Department’s own data bears that out. Maven can correctly identify objects at roughly 60% accuracy overall — compared with 84% for human analysts.

But that rate drops below 30% in adverse conditions, such as bad weather or poor visibility, according to Pentagon data published in a 2024 Bloomberg report.

The risk of “collateral damage,” as the strike on the Minab school might be categorized in military terminology, is too high. That is why Aero Center and every other Ukrainian drone company that spoke with Military Times say they always leave the final strike decision to a human operator.

“The direct impact is always carried out by the operator’s command,” Matviyuk said, “to prevent civilians from getting under the blow.”

In 2021, an experimental U.S. Air Force targeting AI scored roughly 25% accuracy in real conditions, despite rating its own confidence at 90%, then-Maj. Gen. Daniel Simpson, the Air Force’s assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, told Defense One.

“It was confidently wrong,” Simpson said, summing up the program’s problems. “And that’s not the algorithm’s fault. It’s because we fed it the wrong training data.”

The situation is not expected to improve. Last month, Defense Secretary Pete Hegseth slashed the Civilian Protection Center of Excellence workforce by approximately 90% and cut CENTCOM’s civilian casualty assessment team from 10 people to one, Politico reported.

Then, with only a skeleton staff left to oversee the guardrails of the military’s biggest AI expansion, Deputy Defense Secretary Steve Feinberg signed a memo earlier this month formalizing AI’s role in military decision making, designating Maven an official program of record and pushing adoption across all U.S. military branches by September, Reuters reported on Friday.

Ukrainian weapon makers like Matviyuk are not shying away from giving AI more autonomy, but they’re using it strategically.

Autonomous targeting is effective for “massive offensive operations, where targets are not camouflaged,” he said, a description that may fit Iran’s fixed military installations, which are far less concealed than most positions on the Ukrainian front.

“We support the idea of using the human element less and less in the drone operator job,” Matviyuk said. “Autonomy, autonomous elements of drones — that’s the stuff we are working on.”

The problem, in his view, was not that the Pentagon used AI. It was that the data behind the target had not been updated since a girls’ school replaced a military headquarters on the same coordinates — and the people whose job it was to verify that data had already been cut from the chain.

AI systems are only as reliable as the people who build, feed and oversee them, Matviyuk emphasized.

When the human link fails, whether through bad data, gutted oversight or compressed timelines, the machine will continue to execute the error with precision.

Speaking at a Center for Strategic and International Studies panel last week, former CENTCOM director of intelligence Lt. Gen. Karen Gibson was unequivocal about where accountability for lethal strikes lies, regardless of a weapon’s autonomy.

“I will always come back to the fundamental principle of human responsibility and accountability,” she said. “A commander somewhere will ultimately be held responsible — not a machine or a software engineer.”
