AI-Enabled Terror: Israeli Security Services Warn of Growing Threat From Generative Tools and Autonomous Drones 

AI threats, including deepfakes, increasingly pervade modern society.

By Arie Egozi, Autonomy Global – Ambassador for Israel

Israeli security organizations have intensified efforts to track how terrorist and violent extremist groups continue weaponizing artificial intelligence (AI), warning that the technology is rapidly lowering the threshold for effective attacks and large-scale influence operations. Officials say insights are now routinely shared with European counterparts as security communities scramble to understand how quickly AI-enhanced propaganda, deepfakes and autonomous systems move from experimentation to real-world operations.

From ISR Drones To AI-Guided Weapons

Non-state actors already field low-cost commercial drones for intelligence, surveillance and reconnaissance (ISR) and propaganda imagery over conflict zones. Defence assessments in Israel and abroad warn that AI-enabled navigation, image recognition and autonomous route planning could soon be integrated into weaponized multicopters or ground IED carriers to make precise strikes possible with far less training and expertise. Analysts note that as commercial autonomy modules and open-source code remain widely accessible, the barrier for groups to field crude “killer robot” capabilities continues to fall.

Generative AI As A Propaganda Force Multiplier

Intelligence reporting indicates that ecosystems linked to ISIS, Al-Qaeda, Hamas and Hezbollah are now using generative AI to create multilingual images, videos and texts that reinforce extremist narratives and flood social media. Case studies from recent conflicts show AI-generated memes and fabricated visuals aimed at humiliating Israeli forces and inflaming public opinion, underscoring how synthetic content can rapidly reshape perceptions of events on the ground. Researchers tracking jihadist channels have also documented the circulation of “how-to” guides and manuals that coach followers on safely integrating generative AI into propaganda workflows.

Deepfakes, Disinformation And Online Ecosystems

Security agencies are increasingly alarmed by deepfakes and synthetic media that appear to show leaders, officials or military personnel saying or doing things that never occurred. These tailored fabrications enhance the credibility and emotional impact of disinformation and psychological operations, especially when pushed at scale by coordinated bot networks. Experts warn that AI-boosted campaigns allow small extremist cells to mimic the online footprint of much larger organizations, sustaining resilient ecosystems even as individual accounts are suspended.

Exploiting Loopholes In Public AI Platforms

Academic and government studies reveal that extremist sympathizers can sometimes bypass safety filters in widely used AI chatbots with carefully crafted prompts, extracting step-by-step guidance on weapons, cyberattacks or operational security practices. Pro-ISIS and other jihadist-aligned communities have reportedly shared “tech support” style documents describing how to query generative AI systems without triggering safeguards and how to blend AI outputs into training, recruitment and propaganda materials. These documents often pair older terrorist manuals with new AI workflows, effectively upgrading legacy tactics with modern automation.

Strategic Implications For Global Security

Israeli and European security professionals now view terrorist use of AI as part of a broader global trend in which non-state actors adopt emerging technologies long before regulation or governance frameworks catch up. While nation-states still dominate in advanced military AI, the spread of generative tools and cheap semi-autonomous drones is giving extremist groups cost-effective ways to enhance lethality, resilience, and information warfare reach. For Autonomy Global, these developments highlight the urgent need for international cooperation on AI safety, export controls, and counter-disinformation strategies that can blunt the impact of AI-enabled terrorism without stifling legitimate innovation.