Is AI the Future of Warfare?

By: Saudha Hira 

“The development of full artificial intelligence could spell the end of the human race,” warned renowned physicist Stephen Hawking in a BBC interview.

Just hours after Hamas’s attack on October 7th, the US responded immediately, condemning it as a terrorist attack. Its Secretary of Defense issued a statement saying that “the United States government will be rapidly providing the Israel Defense Forces (IDF) with additional equipment and resources, including munitions.”

Not only that: within hours, Skydio, a Silicon Valley drone company, received orders from Israel for its short-range reconnaissance drones—small flying vehicles used by the U.S. Army to navigate obstacles autonomously and produce 3D scans of complex structures such as buildings.

In the three weeks following the attack, the Israel Defense Forces were equipped with cutting-edge drones, with the promise of even more to follow; however, this is not the first time Israel has used AI against Palestinians, nor is it the first time the U.S. has been the supplier.

The irony is that, after Ukraine, Gaza has become the new testing ground for the latest in artificial intelligence-powered defense technology, as AI takes center stage in the theater of war.

“In general, the war in Gaza presents threats but also opportunities to test emerging technologies in the field. Both on the battlefield and in hospitals, there are technologies that have been used in this war that have not been used in the past.”

This cruel statement, issued by Avi Hasson, chief executive of Startup Nation Central, an Israeli tech incubator, clearly demonstrates a lack of remorse for the loss of over 31,500 Palestinian lives in their own homeland.

The Israel Defense Forces (IDF) have not disclosed much about the integration of artificial intelligence into their military. They have revealed only that they use an AI system called Habsora (Hebrew for “the Gospel”) to select targets more quickly and to estimate the likely number of civilian deaths in advance.

A former head of the IDF has stated that human intelligence analysts might produce 50 bombing targets in Gaza each year. However, the Habsora system can produce 100 targets a day, along with real-time recommendations for which ones to attack.

The Israeli onslaught, powered by cutting-edge military technology, has reached its pinnacle. Palestinian Culture Minister Atif Abu Seif stated on February 22 that Israel has also waged war on Gaza’s heritage, demolishing some 230 historical buildings, including mosques, churches, markets, and historic baths.

This clearly shows that the integration of AI into warfare favors quantity over quality. The system focuses on the area where a target may be present and kills everyone there. Such a mass-assassination system enables the army to generate targets faster than ever before and to kill on a mass scale, even when searching for a single individual.

Moreover, distinguishing between a combatant and a civilian is rarely self-evident; even human observers frequently cannot tell who is and who is not a combatant. Thus, the inclusion of AI in war may add new complexities that exacerbate, rather than prevent, harm.

The task of finding a target has, in effect, been handed over from a conscious being, capable of reasoning through moral and ethical dilemmas and abiding by international law, to a computer program. This is an alarming prospect for the world because, apparently, the day will come when humanity stands face-to-face with the atrocities of AI warfare.

Heidy Khlaaf, Engineering Director of AI Assurance at Trail of Bits, a technology security firm, warns that artificial intelligence algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy, and safety.

“The nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data and not any type of reasoning, factual evidence, or ‘causation,’” she added. “Given the track record of high error rates in AI systems, imprecisely and biasedly automating targets is really not far from indiscriminate targeting.”
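To see what such error rates could mean at the scale reported above, consider a rough back-of-the-envelope calculation. The 10% error rate used here is purely illustrative, an assumption for the sake of argument; the reporting cited in this article gives only the output figure of roughly 100 targets a day:

\[
\underbrace{100}_{\text{targets/day}} \times \underbrace{0.10}_{\text{assumed error rate}} = 10 \ \text{misidentified targets per day} \approx 3{,}650 \ \text{per year}.
\]

On these illustrative numbers alone, a single week of automated targeting would produce more wrongly selected targets than the 50 targets a human analysis cell reportedly generates, rightly or wrongly, in an entire year.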

The inclusion of artificial intelligence will create problems that do not fit traditional notions of responsibility and moral agency. The lack of transparency surrounding AI algorithms and their decision-making processes complicates efforts to ensure accountability for actions taken in conflict zones.

Furthermore, the dependence on AI-driven targeting systems may elevate the risks of civilian harm and undermine principles of humanity.

According to a report by the Israeli publication +972 Magazine and the Hebrew-language outlet Local Call, the system is being used to generate targets at an enormous rate so that Gaza can be bombarded relentlessly, punishing the general population of Palestine.

To govern the responsible military use of such technologies, the US launched a new foreign-policy initiative. The policy, first unveiled in The Hague in February and supported by 45 other countries, is an effort to keep the military use of AI and autonomous systems within the international law of war.

But neither Israel nor Ukraine is a signatory, leaving a significant gap in efforts to keep high-tech weapons operating within agreed-upon boundaries. One reason for this gap is the rapidly evolving nature of the technology, with which the legal system may not keep pace.

In conclusion, artificial intelligence is altering the character of war. Some proponents argue that it enables more precise targeting, making it easier to avoid collateral damage and to use a proportionate amount of force. However, such precision proved elusive in the past, during the global war on terror, and remains elusive in the present, amid the ongoing Israeli genocide of innocent Palestinian civilians. This raises significant ethical and operational concerns about the deployment of AI in the military.

Therefore, it is crucial that policymakers, ethicists, and civil society organizations engage in critical dialogue and oversight to ensure that AI is deployed in a manner consistent with the principles of justice, accountability, and human dignity. Only through proactive and collaborative efforts can we navigate the complex ethical dilemmas of AI-driven warfare and safeguard the welfare of civilians and communities affected by armed conflict.

