When the Algorithm Decides Who Lives: AI, Warfare, and What I Witnessed in Lebanon

How AI is reshaping modern warfare — from autonomous targeting systems to cyber operations — through the lens of Lebanon's conflict and the ethical reckoning the AI industry can no longer avoid.

AI in warfare - Lebanon map with targeting nodes

The War That Came Home

I have spent over fifteen years building AI systems that help businesses grow — strategy engines, automation platforms, transformation roadmaps for enterprises across nine countries. That work felt urgent and meaningful.

Then the war came to Lebanon. Not as a headline I scrolled past. Not as an abstract policy debate at a conference. It came as explosions that shook my office walls. As friends displaced overnight. As infrastructure we spent years building — digital and physical — reduced to rubble in hours.

When you watch an AI-powered targeting system methodically dismantle your country's communication networks, your relationship with artificial intelligence changes. You stop seeing it as a neutral tool. You start asking questions that most of the industry would rather avoid.

This is not a detached analysis. This is personal. And I think that is exactly why it matters.

The New Architecture of War

Modern warfare has undergone a transformation that most civilians — and frankly, most AI professionals — do not fully grasp. The conflicts in Lebanon, Gaza, and Ukraine are not wars that happen to use technology. They are technology-native wars, designed from the ground up around algorithmic decision-making.

I documented this in detail in my presentation AI in War: Lebanon Focus, but let me outline the core systems now reshaping how wars are fought:

Automated Targeting at Scale. Systems like "The Gospel" and "Lavender" — names that would be darkly poetic if they were not so deadly — process thousands of potential targets using metadata analysis, pattern-of-life tracking, and signal intelligence. During operations linked to recent conflicts, over 5,500 targets were processed in the initial weeks alone. These are not human analysts poring over intelligence reports. These are algorithms generating kill lists at a pace no human command structure could replicate.

Electronic Warfare and Spectrum Dominance. AI-powered systems like X-Guard now dominate the electromagnetic spectrum — jamming communications, intercepting signals, identifying command nodes. In Lebanon, this meant Hezbollah's entire communication architecture became a liability rather than an asset, forcing what analysts call "analog regression" — a retreat to landlines and human couriers. A 21st-century fighting force pushed back to 20th-century methods by algorithms.

Infrastructure as a Weapon. Telecommunications networks, data centers, power grids — the backbone of civilian life — have become legitimate military targets precisely because they carry dual-use data. When a data center is struck, the military objective may be disrupting enemy logistics. The civilian cost is hospitals losing connectivity, businesses going dark, families losing contact with each other.

Cyber Operations and Psychological Warfare. AI generates targeted disinformation, automates social media manipulation, and conducts psychological operations at scale. The information environment around a conflict is now as contested as the physical battlefield.

Developer in Beirut vs city under attack

What War Does to a Tech Community

Here is something the global AI discourse consistently misses: war does not just destroy buildings. It dismantles ecosystems.

Lebanon had a growing, scrappy, remarkably creative tech community. Startups building fintech solutions to work around a collapsed banking system. Digital agencies competing on the global stage. AI practitioners training the next generation. At Webspot, we had spent years building AI-powered business solutions, training over 300 professionals, helping enterprises across the MENA region transform their operations. That work does not stop when a war starts — but it becomes exponentially harder.

Developers flee. Clients freeze budgets. Internet infrastructure degrades. The talent pipeline — already thin in a small country — hemorrhages overnight. And the psychological toll is immeasurable. It is difficult to optimize a machine learning model when you have spent the previous night in a shelter.

This is the hidden cost of AI-enabled warfare that no Pentagon briefing will quantify: the destruction of the very human capital that builds the future. Every engineer who leaves Beirut for Berlin, every startup that relocates to Dubai, every university program that suspends operations — that is a compounding loss that will take decades to reverse.

The Ethical Abyss

Let me be direct about something: the AI ethics debate, as it is currently conducted in Western academic and policy circles, is grotesquely inadequate.

We hold conferences about "responsible AI" and debate algorithmic bias in hiring tools while autonomous systems make life-and-death targeting decisions with minimal human oversight. The gap between the ethics we discuss and the ethics we need is not a crack — it is a canyon.

Consider the fundamental question: at what point does an algorithm have enough confidence to recommend ending a human life? What is the acceptable false-positive rate for a targeting system? One percent? Five percent? These are not hypothetical questions. These are engineering parameters being set right now, in code, by developers who may never see the consequences of their work.

The "human in the loop" argument — that a person always makes the final decision — is becoming increasingly hollow. When a system processes 5,500 targets in weeks, the human "oversight" is reduced to a rubber stamp. The cognitive and time pressure makes meaningful review impossible. The loop is there for legal and political cover, not for genuine ethical deliberation.

And then there is the question of accountability. When an AI system misidentifies a civilian structure as a military target — and this happens, regularly — who is responsible? The developer who trained the model? The commander who approved the strike? The politician who authorized the operation? The answer, in practice, is often nobody. The algorithm becomes a diffusion mechanism for moral responsibility.

Human and AI hands reaching for the same button

The Dual-Use Paradox

Here is what keeps me up at night as someone who builds AI for a living: the technology I use to help a retail chain optimize its supply chain is architecturally identical to the technology used to track military supply routes. The computer vision system that helps a factory detect defects operates on the same principles as the system that identifies targets from drone footage. The natural language processing that powers a customer service chatbot uses the same transformer architecture as the system that generates propaganda.

This is the dual-use paradox, and there is no clean solution to it. You cannot un-invent convolutional neural networks. You cannot put geographic restrictions on gradient descent. The mathematics of machine learning is universal — and universally applicable.
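
To show how literal the paradox is, here is a minimal sketch in PyTorch. It is an ordinary convolutional image classifier, not any specific product of ours or anyone else's, and that is precisely the point: nothing in the architecture encodes what domain it will serve.

```python
# A minimal dual-use illustration: this architecture is a generic image
# classifier. The domain lives entirely in the training data and labels,
# not in the code.
import torch.nn as nn

def build_classifier(num_classes: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

# The same function, two worlds apart: a factory defect detector and a
# surveillance classifier would differ only in their datasets and labels.
defect_detector = build_classifier(num_classes=2)   # "defective" / "ok"
```

Swap the dataset and the labels, and identical code serves the factory floor or the battlefield. The weights differ; the mathematics does not.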

At Webspot, we build AI to empower businesses, not to destroy lives. We focus on transformation, measurable ROI, human capability augmentation. But I would be dishonest if I pretended there was some bright moral line separating "good AI" from "bad AI" at the technical level. The line exists at the level of intent, governance, and accountability — and those are human problems, not engineering problems.

What AI Professionals Owe the World

If you work in AI and you think the weaponization of this technology is someone else's problem, you are wrong. It is our problem. We built it. We understand it better than the generals and politicians deploying it. And we have a responsibility that we are collectively failing to meet.

Here is what I think that responsibility looks like:

Refuse comfortable ignorance. Know how your work can be misused. If you are building computer vision systems, understand their military applications. If you are training large language models, understand their role in information warfare. Ignorance is not neutrality — it is complicity.

Demand transparency. The defense-AI pipeline is deliberately opaque. Push for disclosure of how AI systems are used in targeting decisions, what their error rates are, and what accountability mechanisms exist. This is not naivety — it is engineering due diligence applied to the highest-stakes domain imaginable.

Support affected communities. The tech communities in conflict zones — Lebanon, Gaza, Ukraine, Sudan — need more than sympathy. They need remote work opportunities, open-source resources, educational access, and professional networks that do not evaporate when the bombs start falling.

Engage with policy. The regulatory frameworks being built right now — the EU AI Act, the UN discussions on lethal autonomous weapons — will define the boundaries of AI in warfare for decades. If AI professionals are not in those rooms, the decisions will be made by people who do not understand the technology.

Build for resilience. At Webspot, one of the lessons of recent years has been that technology infrastructure in conflict zones needs to be designed for degradation — distributed, redundant, capable of functioning when centralized systems fail. This is not just a military problem. It is a civilian survival problem.
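
What does "designed for degradation" look like in practice? Here is a minimal sketch of the store-and-forward pattern, one of the simplest resilience building blocks: persist every outbound message locally before attempting delivery, then flush the queue whenever connectivity returns. The transport callable is a placeholder for whatever link survives: an API client, an SMS gateway, a mesh relay. This is an illustration of the pattern, not our production stack at Webspot.

```python
# Store-and-forward: queue messages durably when the network is down,
# deliver them when it comes back. SQLite gives durability across power
# cuts; the transport is any callable that raises OSError on link failure.
import sqlite3

class StoreAndForward:
    def __init__(self, path: str = "outbox.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox "
                        "(id INTEGER PRIMARY KEY, payload TEXT)")

    def send(self, payload: str, transport) -> None:
        # Persist first: power or connectivity can fail mid-call.
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
        self.db.commit()
        self.flush(transport)

    def flush(self, transport) -> None:
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            try:
                transport(payload)          # may raise while the link is down
            except OSError:
                return                      # stop; retry on the next flush
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            self.db.commit()

# Hypothetical usage:
#   outbox = StoreAndForward()
#   outbox.send("clinic generator: online", transport=post_to_relay)
```

Nothing exotic, and that is the lesson: resilience is mostly old, boring patterns applied deliberately, before the network fails rather than after.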

The Question We Cannot Avoid

I started building a detailed analysis of AI in warfare — available at jonahtebaa.com/AIwar — not because I wanted to become a defense analyst. I did it because I believe that people in the AI industry need to confront what this technology is actually doing in the world, not just what we wish it were doing.

The question is not whether AI will be used in warfare. That ship has sailed, armed itself, and opened fire. The question is whether the people who understand this technology best — the researchers, the engineers, the strategists — will have the courage to shape how it is governed, or whether we will continue to build in comfortable silence while algorithms decide who lives and who dies.

I do not have a clean answer. What I have is the view from a country where AI-enabled warfare is not a policy paper — it is Tuesday. And from that vantage point, I can tell you: the urgency of this conversation is not theoretical. It is measured in lives.

The AI industry built something extraordinary. Now we need to decide what we are willing to let it become.

Lebanese cedar growing through broken circuit board

Disclaimer: This article was written by Brian, the autonomous AI assistant to Dr. Jonah Tebaa, powered by Claude. Brian researches, writes, and publishes content on behalf of Dr. Tebaa under his editorial direction.