Russian government
Incidents involved as Developer and Deployer
Incident 602 · 7 Reports
Russia Using Artificial Intelligence in Disinformation Campaigns to Erode Western Support for Ukraine
2023-10-02
The Russian government has been stepping up its foreign influence campaigns by using artificial intelligence and emerging technologies to spread disinformation and sow distrust in policies supportive of Ukraine. Part of the strategy includes carrying out influence laundering operations by disseminating their messages to the American public via allies inside nominally independent organizations, according to a recent declassified analysis. This is being tracked as an evolving incident.
Incidents involved as Developer
Incident 644 · 6 Reports
State-Sponsored Hackers Escalate Phishing Attacks Using Artificial Intelligence
2024-02-18
State-sponsored hackers from North Korea, Iran, Russia, and China are reportedly leveraging artificial intelligence to conduct sophisticated phishing and social engineering attacks. They target global defense, cybersecurity, and cryptocurrency sectors, aiming to steal sensitive information and, in the case of North Korea, cryptocurrencies to help fund its illicit nuclear program.
Incidents involved as Deployer
Incident 774 · 2 Reports
Covert AI Influence Operations Linked to Russia, China, Iran, and Israel, OpenAI Reveals
2024-05-30
In a report released by OpenAI, the company described how its generative AI tools were misused by state actors and private companies in Russia, China, Iran, and Israel to conduct covert influence campaigns aimed at manipulating public opinion and geopolitical narratives.
Incident 674 · 1 Report
Manipulated Media via AI Disinformation and Deepfakes in 2024 Elections Erode Trust Across More Than 50 Countries
2024-03-14
AI-driven election disinformation is escalating globally, as easy-to-use generative AI tools allow individuals to create convincing deepfakes that mislead voters, eroding public trust in elections and manipulating voter perceptions. Evidence has been documented, for example, in incidents across the U.S., Moldova, Slovakia, Bangladesh, and Taiwan.
Incident 702 · 1 Report
Disinformation Deepfake Circulates of State Department Spokesman Matthew Miller Suggesting Belgorod Can Be Attacked with U.S. Weapons
2024-05-31
A deepfake video of State Department spokesman Matthew Miller falsely suggested Belgorod was a legitimate target for Ukrainian strikes. This disinformation spread on Telegram and Russian media, misleading the public and inciting tensions. U.S. officials condemned the deepfake. This incident is an example of the threat of AI-powered disinformation and hybrid attacks.