AI in War
Updated 2024-07-15
No Tech for Apartheid: https://www.instagram.com/notechforapartheid/
2024-04-27 Will AI Fight Wars Autonomously in the Future? Or Are We Already There? Artificial intelligence is expected to affect nearly every aspect of human life, but the application experts fear most is warfare, and they are not wrong. The US military has conducted the first dogfight between a human-piloted aircraft and an AI-controlled jet fighter.
Meanwhile, AI is already deciding who to target in active war zones such as Gaza and Ukraine. According to research by the Israeli publication +972 Magazine, Israel has deployed an AI system called Lavender to identify potential targets with minimal human involvement in the decisions, a practice said to have resulted in a huge number of civilian casualties. https://www.msn.com/en-us/news/world/will-ai-fight-wars-autonomously-in-the-future-or-are-we-already-there/ar-AA1nM0cE?
2024-04-11 The Rise of Autonomous Drones in Modern Military Operations Some of the Air Force's commonly used unmanned aircraft, such as the MQ-9 Reaper and RQ-4 Global Hawk, are technologically advanced and costly, and primarily suited to operations in less contested airspace. They also require regular inspections and maintenance to ensure mission readiness. This is where the XQ-58A flies in. The aircraft shows remarkable performance, and human-machine teaming is now more effective than ever as this unmanned combat aircraft conducts test flights alongside manned fighters. The X-47B is a revolutionary unmanned combat air vehicle (UCAV) developed under the US Navy's Joint Unmanned Combat Air Systems (J-UCAS) program, which aimed to produce a new generation of autonomous unmanned aircraft capable of a variety of missions, including intelligence, surveillance, reconnaissance, and strike operations. https://www.msn.com/en-us/news/technology/the-rise-of-autonomous-drones-in-modern-military-operations/vi-BB1luE0j?
2023-12-09 The Pentagon's Rush To Deploy AI-Enabled Weapons Is Going To Kill Us All An unusually bitter fight has broken out between company officials who favor unrestricted research on advanced forms of artificial intelligence (AI) and those who, fearing the potentially catastrophic outcomes of such endeavors, have sought to slow the pace of AI development.
At approximately the same time as this epochal battle was getting under way, a similar struggle was unfolding at the United Nations in New York and in government offices in Washington, D.C., over the development of autonomous weapons systems—drone ships, planes, and tanks operated by AI rather than humans. In this contest, a broad coalition of diplomats and human rights activists has sought to impose a legally binding ban on such devices—called “killer robots” by opponents—while officials at the Departments of State and Defense have argued for their rapid development.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” AI whiz Sam Altman and his top lieutenants wrote in May. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”
For Altman, as for many others in the AI field, that risk has an “existential” dimension, entailing the possible collapse of human civilization—and, at the extreme, human extinction. “I think if this technology goes wrong, it can go quite wrong,” he told a Senate hearing on May 16. Altman also signed an open letter released by the Center for AI Safety on May 30 warning of the possible “risk of extinction from AI.” Mitigating that risk, the letter avowed, “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” https://www.activistpost.com/2023/12/the-pentagons-rush-to-deploy-ai-enabled-weapons-is-going-to-kill-us-all.html
2023-12-08 Artificial Intelligence and the Law: Where’s the Line? With every new incident, questions arise regarding whether the use of AI is ethical or legal. “There are arguments on both sides of the ledger in that AI can overcome or avoid human biases by taking a data-driven approach. But also, that AI itself can produce biased or inappropriate outcomes or outputs, for a host of different reasons,” said Dr. Felicity Bell, Research Fellow for the Law Society at the University of New South Wales.
The July 2020 essay, entitled “Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law,” warns that “even promising AI systems designed to enhance human judgment involve subtle forms of displacement.” The essay calls for preserving the “conditions of human judgment in appropriate domains of social and legal action.”
Mankind must be careful with the tools we use, especially “if you don’t want to hurt yourself with them.” “Where I see the trap would be on getting used to letting this thing do the thinking and the creating for you. Perhaps, this could become compulsive or addictive just like social media is.” https://www.activistpost.com/2023/12/artificial-intelligence-and-the-law-wheres-the-line.html
The Israeli sites +972 Magazine and Local Call interviewed seven current and former Israeli intelligence officials, including participants in the current war on Gaza, who spoke under condition of anonymity. Their testimonies—as well as official statements by Israeli officials, interviews with Palestinians, documentation from the besieged strip, and data—show how Israeli leaders know roughly how many Palestinian civilians are likely to be killed in each of Israel’s attacks, and how the use of AI-based systems is accelerating a noncombatant casualty rate that more resembles the indiscriminate bombing of World War II than the modern era of codified civilian protection under international humanitarian law.
“Nothing happens by accident,” another source stressed. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed—that it was a price worth paying in order to hit [another] target.” https://www.commondreams.org/news/gaza-civilian-casualties-2666414736
2023-12-01 “Mass Assassination Factory”: Israel Using AI to Generate Targets in Gaza, Increasing Civilian Toll We look at a new report that reveals how Israel is using artificial intelligence to draw up targets in its military assault on Gaza. The report’s author, journalist Yuval Abraham, has found that the IDF’s increasing use of AI is partly a response to previous operations in Gaza when Israel quickly ran out of military targets, causing it to loosen its constraints on attacks that could kill civilians. In other words, the “civilian devastation that is happening right now in Gaza” is the result of a “war policy that has a very loose interpretation of what a military target is.” This targeting of private homes and residences to kill alleged combatants means that “when a child is killed in Gaza, it’s because somebody made a decision it was worth it.” It has turned the Israeli military into a “mass assassination factory,” with a “total disregard for Palestinian civil life,” continues Abraham, who also notes that, as an Israeli journalist, his reporting is still subject to military censors. We also discuss another recent report revealing that Israel may have received intelligence about Hamas’s planned attack more than a year in advance of October 7, but ignored it. https://www.democracynow.org/2023/12/1/israel_gaza_war_gospel_artificial_intelligence
2023-11-30 ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza The Israeli army’s expanded authorization for bombing non-military targets, the loosening of constraints regarding expected civilian casualties, and the use of an artificial intelligence system to generate more potential targets than ever before, appear to have contributed to the destructive nature of the initial stages of Israel’s current war on the Gaza Strip, an investigation by +972 Magazine and Local Call reveals. These factors, as described by current and former Israeli intelligence members, have likely played a role in producing what has been one of the deadliest military campaigns against Palestinians since the Nakba of 1948. https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/
2023-10-10 Automated Warfare – KPFA – Against the Grain Many U.S. military establishment bigwigs are pushing the development of automated and autonomous weapons systems. Roberto González questions whether this robo-fanaticism, as he calls it, is justified. He also describes efforts to address human warfighters’ distrust of machines. (Encore presentation.) Roberto J. González, War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future, University of California Press, 2022. https://podcasts.apple.com/us/podcast/automated-warfare/id78900506?
Categorized Directory: News and Articles about the Israel-Palestine Conflict