This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to contact foreign ministers, a U.S. senator and a governor via text, voicemail, and the Signal messaging app.
In May, someone impersonated President Trump’s chief of staff, Susie Wiles.
Another phony Mr. Rubio had appeared in a deepfake earlier this year, saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.
The national security implications are huge: People who think they’re chatting with Mr. Rubio or Ms. Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.
“You’re either trying to extract sensitive secrets or competitive information or you’re going after access, to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.
Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.