
The Hidden Dangers of AI in Modern Warfare
Clip: 3/18/2026 | 18m 21s | Video has Closed Captions
Heidy Khlaaf raises the alarm on the U.S. military's use of artificial intelligence.
The U.S. is reportedly deploying artificial intelligence to help fight its war with Iran, even as the Pentagon pushes for less human oversight over the use of this technology. Heidy Khlaaf is sounding the alarm about the safety and reliability of these tools, particularly in facilitating what is called a, quote, "kill chain." Dr. Khlaaf is the chief AI scientist at the AI Now Institute.

PBS and WNET, in collaboration with CNN, launched Amanpour and Company in September 2018. The series features wide-ranging, in-depth conversations with global thought leaders and cultural influencers on issues impacting the world each day, from politics, business, technology and arts, to science and sports.
>> Now, the U.S. is reportedly deploying artificial intelligence to help fight its war with Iran, as the Pentagon pushes for less human oversight over the use of this technology. Our next guest is sounding the alarm around the safety and reliability of these tools, particularly in facilitating what is called a, quote, "kill chain."
Heidy Khlaaf is the chief AI scientist at the AI Now Institute, and she shares her concerns on the growing use of AI systems in the military with Hari Sreenivasan.
>> Thanks so much for joining us. You are someone who helped pioneer the field of AI safety. As an engineer, what does that mean, and what does it look like in practice?
>> So AI safety has a lot of different definitions to different types of people, but I come from the traditional safety engineering discipline, which is about making sure that safety-critical systems, things like airplanes, nuclear plants, our infrastructure, are safe. If they fail, human lives are at risk.
And that's a very different type of discipline than what people think about in terms of AI safety. Over the years, AI safety has really become about, you know, existential risks, or this fear that these AI models will become superintelligent and, you know, then become a risk to society at large. But the difference here is that the risks these AI companies talk about when they talk about AI safety are really hypothetical. They're not concerned with the everyday risks that AI models can pose to human lives.
That's very different from safety engineering, which is my discipline, which thinks about the harms to human lives that could occur from our infrastructure. I often view AI safety as a sort of safety revisionism, or that the term has been co-opted, because we've moved very far away from trying to make sure our systems are accurate and reliable, towards this idea that we're going to build some superintelligent being that's going to solve all of our world problems. And I think it's very important that we always focus on the science and how these systems actually fail, rather than hypothetical sci-fi situations that don't actually help us make these systems reliable at all.
>> There is this life-threatening scenario people are getting familiar with, which is how AI is being used in warfare. How do you see AI contributing to the way that militaries are carrying out their actions?
>> When you're using things like generative AI or large language models for writing an e-mail, these models getting something wrong is very low risk, right? No one dies, nothing changes. But when you move to trying to implement them in safety-critical systems, like in defense, you're literally, you know, determining the lives of people, right? This is very much high stakes.
And, you know, when you're looking at the accuracy of these systems, they shouldn't be near any sort of targeting at all. So, for example, Maven, which is currently being used by the U.S. in Iran, has low accuracy rates. You know, two years ago an investigation came out that showed its accuracy rate is as low as 30% in some situations. And overall, when you're looking at the averages of these models, their accuracy rate is as low as 50%. And, you know, that's really not far from flipping a coin, right? The sort of 50-50 random chance.
And I think that should make us question why these systems are even near targeting at all if they're this inaccurate. And, you know, again, there could be other uses of AI where there aren't life-or-death consequences. But in the case of the military, that very much is what's at stake.
>> So help us kind of explain the differences in how the military uses it. I mean, right now we think of this phrase of autonomous sort of killing machines, and we're ascribing this power to AI, and we're having this kind of debate about whether or not companies should be doing that. But there's, you know, another layer of just, like, intelligence and intelligence gathering. So how's AI involved in that?
>> So first I want to preface with the fact that AI has been used in the military since the 1960s, but it's a very different type of AI than what we're seeing today. Back then, and a few years ago, they were using purpose-built AI models, which were very task-specific. They were trained on specific tasks with specific data for some mission.
That's different than what we're seeing today in the use of generative AI or large language models, where they're being implemented in what we call decision support systems, which are tools that bring together a lot of data, like satellite images, social media feeds, intercepted communications. The model then uses all this information to make military recommendations, including targeting recommendations.
And I think a lot of people are probably confused about this type of term, because we're also hearing a lot about autonomous weapons systems. The difference between decision support systems and autonomous weapons systems is that autonomous weapons systems are allowed to select and engage with targets without oversight from a human being, versus decision support systems that do have this so-called oversight, right, and it's questionable how much oversight there really is. They tend to provide a game-like or chatbot interface that a military operator then uses to approve AI target recommendations.
But overall, AI is being used in every part of what we call the kill chain, so things like intelligence, surveillance, and now we're looking at the selection and then the strike of the targets as well.
>> You're talking about taking something that wasn't designed for the military, the large language models, and we're kind of putting that into the military's needs. How do we measure how accurate those systems are in the type of tasks that we're asking in the middle of war?
>> I mean, that's a very good point. You know, if you have vision models, things that have been trained to detect, they already had low accuracy rates before. You know, we had the Air Force, which had a targeting model that they thought had 90% accuracy, and in practice it actually only had 25% accuracy. So we were already dealing with these issues long before large language models were being implemented within sort of military decision making.
Unfortunately, it is the case, as has been shown by a lot of research, that commercial general models are much less accurate than military purpose-built models. So we have an issue where we're actually going towards models that have reduced accuracy in military contexts.
And they also have security issues, and I think we're not talking about this enough. Because they are built on a commercial supply chain, the supply chain is not vetted as we typically would see with a military system, so there are actually security issues as well. It's not just a safety issue. Backdoors can be built into these models. We have seen operations from Russia and China that put out a lot of different types of, you know, propaganda to try to skew the outputs of large language models. And Anthropic themselves have admitted you only need to change about 250 documents or data points for a model to change its behavior.
So we have multiple issues here. And it's very unfortunate that instead of trying to improve on these task-specific models that we had before, which, again, had their own accuracy issues, we're moving towards something that's much less deterministic, much less predictable, and unfortunately not accurate either.
>> There was a message from the head of U.S. CENTCOM last week, and he said, in part, humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.
So I'm trying to figure out here, if you're saying that these models are inherently not as accurate or reliable as we think, and if these decisions are made so fast, even when a human gets that information in front of them, is there sort of a bias where I might say, this is probably good?
>> Absolutely. There's definitely a bias here, and that's why human-in-the-loop is typically not a very meaningful solution. In our field we have what we call automation bias, which is this idea, based on decades of research, showing that humans often trust the recommendations of algorithms without corroborating with other sources to check whether those recommendations were correct or not, even if they're required to by law in the case of, you know, military decision making. And this is especially the case in military contexts, where operators usually only have a few minutes to make determinations.
With Maven, the military is hoping to reach the point where they can select a military target within a single hour, and they claim a cell of 20 people can replace operations that had 2,000 personnel instead. This creates the very conditions where automation bias would thrive, especially when you have things like Palantir's platform, Maven, that kind of obscure what the AI output really is, or don't make it easy for you to trace or verify that decision. In a lot of ways, a lot of these models have enormous scale, so they're black boxes.
So we're kind of at the point where sometimes you do wonder if the distinction between decision support systems, as I was talking about earlier, and autonomous weapons systems is, you know, superficial in practice. Because if the operators are really defaulting to the recommendations that the AI algorithm is making, it really shows that, you know, the human in the loop is not the solution here, especially when you pair it with the lack of reliability of these systems.
>> What's interesting right now is that there's this back-and-forth between Anthropic and the Pentagon. And the core of the argument seems to be, at least publicly, reduced to the idea that Anthropic is saying, we don't want these models used for autonomous weapons systems. We don't actually think they're accurate enough, and we also don't want them used in mass surveillance of U.S. citizens. My question is, are they reliable enough for the decision support systems that you're mentioning, in this surveillance, in the intelligence gathering, in the first place?
>> I mean, that's a fantastic point. You know, when you consider automation bias, with their lack of accuracy, and the CEO of Anthropic himself admitting these systems are not reliable, then it's very much the case that if they believe their models aren't reliable enough for autonomous weapons systems, they're also not reliable enough for decision support systems. And we should be questioning altogether whether or not these systems can be successfully used in military settings, especially targeting.
>> So there was a horrible, horrible mistake on February 28th, when a missile hit an Iranian girls' school in southern Iran. It killed more than 170 people. The preliminary investigations right now show that the U.S. is responsible, and I wonder, was this an intelligence failure, or was this an artificial intelligence failure, and how will we know?
>> Well, the lack of clarity surrounding the situation, of whether or not AI was used in the school case, actually touches on a very important point that shows how AI models make it really easy to obscure accountability. The use of these systems makes it difficult to distinguish whether these attacks on civilians were deliberate, or intelligence failures, or due to the lack of AI accuracy, as you said. Or it could be a combination of all three.
For example, the AI could have been used to determine this intelligence based on the data given, and then that intelligence was used for targeting. But the black-box, inaccurate nature of AI makes that really difficult to determine.
And a recent investigation into a strike on civilians in Iraq in 2024 actually showed that U.S. Central Command admitted to not knowing whether some strikes were, in fact, AI recommendations or not. And if the Department of War is deliberately not recording when AI-based decisions are being used, then it shows that AI is really being used to muddy the accountability here, especially for decision makers in the chain of command.
>> Wow. Because if a human being was found directly responsible, if they did it intentionally, there would be a consequence. There would be somebody or some chain of command to hold accountable. But you're saying right now all the people in that chain of command could be well-intentioned, not intending to, of course, strike a girls' school, but say, this is the intelligence we were presented, and based on this intelligence, this is the action that I'm supposed to take.
>> Exactly. And I think there's a larger question about the involvement of these companies as well, because they are the ones that are taking on military contracts and fine-tuning their models towards that. So who's ultimately responsible here? Is it the people who provided the intelligence data? Is it the intelligence data that AI could have been used to essentially create? Is it the people on the ground who then approved a recommendation but, for example, weren't given enough time to check whether that recommendation was, in fact, accurate?
This is really the core issue we have with AI and lack of accountability, and it could very much be the case that it was deliberate, but we still wouldn't know that. And, you know, I think it's very concerning that we have U.S. CENTCOM essentially admitting they're not recording that, when it's quite a trivial engineering feature to implement, you know, recording whether an AI recommendation was being made or not, right? This isn't a difficult engineering problem. So I think it should give us pause and make us question how militaries are using AI to also evade accountability, right? And even if they're not trying to evade accountability, what if something goes wrong? Who is responsible here? Especially when you have a lot of the operators themselves not understanding the failures and lack of accuracy of these models, I think that puts a responsibility on them that they're probably not prepared for.
>> I've got to imagine that part of their pitch to the departments of war in any country they might be working in would be: listen, I can help save lives. I can help you prosecute this without putting boots on the ground. I now have intelligence systems that will help you target, that will help you find exactly the right targets, only the military installations, and I can minimize civilian harm. What's wrong with that?
>> Well, I think the angle they're actually selling, you know, in combination with what you just said, is speed, right? They're saying that you don't have to put boots on the ground, because speed gives you an advantage in these types of military operations. And I think it's actually dangerous that speed is being sold to us as strategic here by these companies, because large language models, you know, can just become a cover for indiscriminate targeting when you consider how inaccurate they are, right?
And so you're not only muddying that accountability; you're using AI to legitimize that speed, in combination with their low accuracy rates, and it might just become a high-tech version of carpet bombing. So I think militaries need to be very careful in assessing the claims these AI companies are putting forward.
For example, I actually believe that defense standards are some of the most strict and rigorous standards that there are, right? They require very high reliability rates for a reason, right? Again, lives are at stake. And also, if military equipment fails, or you're overusing your missiles on civilian targets, that's not an advantage for you in warfare. And yet here we are, right, being told by these companies that this is an advantage, and we're signing away these contracts where we no longer have that rigorous defense oversight. These companies are often grading their own homework, right? They're saying, we will implement this new system for you and, because we're the only people who understand this system, we will evaluate it for you. So we're actually moving away from the rigorous independent verification that defense used to carry out through the procurement process, and just believing what these AI companies are saying.
>> You can also see there are these competitive forces that are also affected by speed, right? There was a recent statement from the chief science officer of Anthropic, who said they basically decided to drop their flagship kind of safety pledge. They said: we felt it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, it made sense for us to make unilateral commitments if competitors are blazing ahead.
>> Well, I think, you know, just like many other tech companies that have come before them, OpenAI, for example, or Google, they always end up sort of dropping their safety pledges. And Anthropic themselves are justifying this rollback by claiming their rivals didn't adopt similar measures or enforce restrictions. This sort of implies they're the rightful developer of capabilities that they themselves admit will accelerate the arrival of the very risks they feared.
I think, you know, it shows that these voluntary policies again co-opt these safety terms to give the veneer of safety, but ultimately they were never sufficient to guarantee any meaningful safety guardrails. That is exactly why we're meant to have independent oversight over what these companies are doing. They can just take the term safety and change it to mean whatever they think is suitable at the time.
So, for example, in the case of Anthropic, they overemphasized what they call CBRN, which is AI having capabilities to develop biological and radiological weapons, and their entire safety framework was sort of based on that, when you should be much more concerned about the targeting accuracy if you're putting these models in sort of military decision making. And so I think we need to be careful when they're putting forward this idea of safety.
>> Chief AI scientist at the AI Now Institute, Heidy Khlaaf, thank you for joining us.
>> Thank you for having me.
