#Autonomous Weapons
onlyhurtforaminute · 8 months
youtube: BLACK MATTER DEVICE-BLOOD SPLATTER INK BLOT
chloeisntstudying · 1 year
hi friends! it's been a while since i posted, but i'm now on semester 2 of my first year!! i'm diving into some pretty interesting courses this semester, like science and technology in history, history of chinese medicine, as well as AI and new technology law :) i've been having lots of thoughts swirling around my brain after my tutorials, and then i remembered i have a studyblr, so why not post them here?
i had a tutorial on AI and new technology law earlier this morning, and one of the key questions posed to us was this: should we develop lethal autonomous weapons systems that act independently without human intervention, or should humans be kept in the loop in the AI's decision making? this is a pretty complex topic and we had a few lively rounds of debates, so bear with me as i parse out the different sides of the argument before sharing my thoughts.
those who supported the development of AI weapons had these main points:
it would solve the issue of humans' lapses of judgement on the battlefield, which could thus minimise the risk of further casualties
it's more accurate -> fewer unintended casualties
it could lessen the trauma and moral burden that humans feel whenever they're forced to kill someone
it could act as a deterrence to other countries seeking war
it could lead to wars fought entirely by AI without the need for human soldiers
those who did not support the development had these main points:
a machine has no morals and feels no empathy, so it shouldn't have the final say in who to kill (e.g. this clip where a soldier shares how his team spared the life of a little girl, while an AI wouldn't have)
humans should take agency and responsibility especially when it comes to taking of another's life; by pushing responsibility to the AI, we may end up killing more mindlessly than before (lessening the value of life, in a way)
AI isn't absolutely right all the time; if humans aren't at the helm, AI may make the wrong decisions or get hacked
our professor then showed us a clip from 'Slaughterbots', where AI weapons fell into the wrong hands and were used to target innocents, simply because they showed support for a certain political party. the debate then turned to the regulation of AI weapons and whether regulation is even possible.
personally, i feel like the development of AI weapons is an inevitability. even if governments aren't the ones funding such research, private companies would certainly do it for the money. just imagine how high a price such weapons would fetch! if we want to stop the development of such high-risk, high-reward weapons, there'll need to be international cooperation. think about it: even if certain countries agree not to develop such weapons, as long as other countries keep developing them, those in the agreement would simply be on the losing end. in order to protect themselves, everyone would end up developing such weapons - it's a tit-for-tat reaction that has been observed throughout history, like in the making of nuclear weapons back in the cold war. but international cooperation will likely never occur, since everyone will inevitably want to protect their country the best that they can.
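the arms-race logic above can actually be sketched as a toy two-player game (all payoff numbers below are hypothetical, purely for illustration): whatever one country does, the other's best response is to develop, even though mutual restraint would leave both better off.

```python
# hypothetical payoffs: (row player's payoff, column player's payoff)
PAYOFFS = {
    ("abstain", "abstain"): (3, 3),  # mutual restraint: best joint outcome
    ("abstain", "develop"): (0, 4),  # abstaining alone leaves you exposed
    ("develop", "abstain"): (4, 0),  # developing alone gives you an edge
    ("develop", "develop"): (1, 1),  # arms race: costly for everyone
}

def best_response(opponent_choice):
    """pick the choice that maximises our payoff against a fixed opponent."""
    return max(["abstain", "develop"],
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# whatever the other side does, "develop" is the best response,
# so both sides land on (1, 1) even though (3, 3) was available.
print(best_response("abstain"), best_response("develop"))
```

this is just the classic prisoner's-dilemma structure, which is why the comparison to the cold war nuclear build-up fits so well.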
assuming that countries continue to develop AI weapons, should human intervention be factored into AI's decision-making process? i do not think there is a straightforward answer to this, but i do think that by clearly defining the purpose of such weapons, the issues of mass carnage and morality could be mitigated. i think that the scenarios of AI choosing to kill everyone, or people using AI to kill everyone at the push of a button, all stem from their purpose of attacking. to attack is to maximise collateral damage on the other side. but what if the purpose of AI weapons were to defend? shoot only when necessary, no matter whether a machine or a human is behind the decision. wouldn't this reduce the risks of mass carnage that people are so worried about?
of course, i'm aware that this is an idealistic view of the situation; someone has to be the one to attack for us to defend, after all. people would even argue that attacking first is a way of defending themselves, through eliminating the enemy before they can eliminate you. but if we just look at the question of morality in the usage of AI weapons, then this is my answer: program it to defend, not attack, and you'll find yourself having an easier time sleeping at night.
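to make the "defend, not attack" idea concrete, here's a minimal sketch of what such a rule could look like in code (everything here - the names, fields, and conditions - is purely illustrative, not any real weapon system):

```python
from dataclasses import dataclass

@dataclass
class Track:
    """a hypothetical sensor track; all fields are made up for illustration."""
    is_incoming_threat: bool    # e.g. an inbound munition has been detected
    inside_defended_zone: bool  # the threat is over territory we protect
    human_confirmed: bool       # an operator has signed off ("in the loop")

def may_engage(track: Track, defensive_only: bool = True) -> bool:
    # rule 1: a defence-only system never initiates; it only reacts to
    # incoming threats inside the zone it is tasked to protect.
    if defensive_only and not (track.is_incoming_threat
                               and track.inside_defended_zone):
        return False
    # rule 2: keep a human in the loop for the final decision.
    return track.human_confirmed

print(may_engage(Track(True, True, True)))   # engage: threat + sign-off
print(may_engage(Track(False, True, True)))  # refuse: nothing incoming
print(may_engage(Track(True, True, False)))  # refuse: no human sign-off
```

the point of the sketch is that "defend-only" plus "human in the loop" are both just default-deny conditions: the system refuses to fire unless every condition is met.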
as for the regulation of AI weapons, it is obvious that we'd need laws to ensure that these don't fall into the wrong hands. for instance, perhaps only governments should be allowed to utilise such weapons. we'd need to look at who has access to these weapons and why, and whether the number of people who would benefit from a weapon's usage justifies the risks. but even with regulations in place, they'll likely be flouted anyway. just look at gun control, or controls on similar weapons: there will always be black markets, illegal traders, and illegal research and development labs. it is unlikely that we can ever have complete control over the development and distribution of such coveted weapons.
overall, i think that as long as there is development of such weapons, there will always be issues of morality and distribution (and, as i assumed above, development will inevitably continue, given human greed, past patterns, and limited international cooperation). i know this seems to paint a pretty bleak picture of the future of AI weapons, with the bad guys always finding a way, but i guess that's just the reality of life. we just have to try our best to maximise the benefits while limiting the harms. i think it is really up to the dedication of governments and regulatory authorities how hard they want to crack down on these issues -- or whether to crack down on them at all, with other more pressing issues at hand.
if you made it all the way here, thanks for reading my thoughts and rambles. these posts won't really be organised in any way, since they're just a way for me to sort through my thoughts and opinions after classes and clear my brain :)
smalltofedsblog · 10 months
US Military Calls For Better Weapons To Fight Inexpensive Autonomous AI Threats
“MILITARY TIMES” – By Hope Hodge Seck “After years of catching grief for exquisite weapons acquisition programs with creeping requirements leading to lengthy delays and budget overruns, the Pentagon now finds itself with a different sort of headache: how to stop weapons and systems that are dirt…
chefbabyna · 1 year
Text
artificial intelligence and machine learning
Artificial Intelligence (AI) and Machine Learning (ML) are two of the most rapidly advancing technologies of the 21st century. AI refers to the development of computer systems that are able to perform tasks that would typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. ML, on the other hand, is a subfield of AI that…
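As a toy illustration of the "learning from data" idea that distinguishes ML (all numbers below are made up): instead of hand-coding the rule y = 2x, a program estimates it from noisy examples.

```python
# training data: noisy observations of the underlying rule y = 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# closed-form least-squares fit for a single weight w in y ≈ w * x
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(round(w, 2))  # ≈ 2.0: the pattern is learned from data, not programmed
```

The same principle, scaled up to millions of parameters, underlies the modern ML systems the post describes.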
leakshareorg · 2 years
Loitering Explosive Drones: Controlled Machine or Autonomous weapon?
Not only humans but also autonomous machines have long fought in wars. Some machines are controlled by a remote pilot; some can search for and destroy targets by themselves. One such terrifying weapon is being developed by Anduril, the military technology company founded by Oculus creator Palmer Luckey. The company has announced its first weapon system: an adaptation of its Altius…
maegalkarven · 7 months
Like, no one argues with the fact that pre memory loss Durge is evil.
We are simply interested WHY they're evil.
And how much work seemed to go into making them that way.
Further Thoughts on the "Blueprint for an AI Bill of Rights"
So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the "Blueprint for an AI Bill of Rights" (henceforth "BfaAIBoR" or "blueprint") a few weeks after that, I am both very interested in and pretty worried about what direction research into "artificial intelligence" is actually going to take from here.
To be clear, my fundamental problem with the "Blueprint for an AI Bill of Rights" is that while it pays pretty fine lip service to the ideas of community-led oversight, transparency, and abolition of and abstaining from developing certain tools, it begins with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just… ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2017, and considering what I know for a fact got funded within that window? Yeah.
To put a finer point on it, 14 out of 16 uses of the phrase "law enforcement" and 10 out of 11 uses of "national security" in this blueprint are in direct reference to why those entities' or concept structures' needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn't mention the depredations of extant military "AI" at all. Instead, it points to the idea that the Department of Defense (DoD) "has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities." And so with all of that being the case, there are several current "AI" projects in the pipe which a blueprint like this wouldn't cover, even if it ever became policy, and frankly that just fundamentally undercuts much of the real good a project like this could do.
For instance, at present, the DoD's ethical frames are entirely about transparency, explainability, and some lip service around equitability and "deliberate steps to minimize unintended bias in AI…" To understand a bit more of what I mean by this, here's the DoD's "Responsible Artificial Intelligence Strategy…" pdf (which is not natively searchable and I had to OCR myself, so heads-up); and here's the Office of National Intelligence's "ethical principles" for building AI. Note that not once do they consider the moral status of the biases and values they have intentionally baked into their systems.
Read the rest of Further Thoughts on the "Blueprint for the AI Bill of Rights" at A Future Worth Thinking About
10th Meeting - 1st Session Group of Governmental Experts on Lethal Autonomous Weapons Systems 2024.
Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System Geneva, 4-8 March and 26-30 August 2024
Watch the 10th Meeting - 1st Session Group of Governmental Experts on Lethal Autonomous Weapons Systems 2024!
mikeshouts · 5 months
Anduril Roadrunner Twin-Turbojet VTOL Autonomous Air Vehicle: Anti-Air Threats Weapon That Can Return To Base
😱😱😱
killer-fun-podcast · 6 months
There are plenty of reasons to be excited about what AI can do for humanity, but plenty of reasons - namely war - to be wary of the technology as well. UNKNOWN: Killer Robots isn’t QUITE as frightening as its name or background music might imply, but it’s still illuminating. Email us: [email protected] Follow us on Facebook: fb.me/KillerFunPodcast All the Tweets, er, POSTS: http://twitter.com/KillerFunPod Instagram: killerfunpodcast
smalltofedsblog · 1 year
Pentagon Clarifies ‘Confusion’ Around Autonomous Weapons And Artificial Intelligence (AI)
“POLITICO NATIONAL SECURITY DAILY” – By Matt Berg and Alexander Ward, with help from Daniel Lippman: “The Defense Department’s original autonomous weapons policy was so unclear that even people inside the Pentagon had a hard time understanding it. Enacted in 2012, Directive 3000.09 was intended to set the record straight on how the department fields and develops autonomous and semi-autonomous…
gael-garcia · 4 months
PALESTINE FILM INDEX
Palestine Film Index is a growing list of films from and about Palestine and the Palestinian struggle for liberation, made by Palestinians and those in solidarity with them. The index starts with films from the revolutionary period (68 - 82) made by the militant filmmakers of the Palestine Film Unit and their allies, and extends through a multitude of voices to the present day. It is by no means a complete or exhaustive representation of the vast universe that is Palestinian cinema, but only a small, fragmentary list that we hope can nonetheless be used as an instrument of study & solidarity, and as a tool of knowledge against zionist propaganda and towards Palestinian liberation.
The century-long war waged against Palestinians by the zionist project is one fought not only militarily but also culturally. The acts of filmmaking, preservation, and distribution become acts of resistance against this attempted cultural erasure and ethnic cleansing. The power inherent in this form as a weapon against the genocidal project of zionism is evidenced in the ways it has been historically & currently targeted by the occupation forces: from the looting & stealing of the Palestine Cinema Institute archives during the siege of Beirut in 1982, through the long history of targeted assassinations of Palestinian filmmakers, journalists, artists, & writers (from PFU founder Hani Jawharieh, to Ghassan Kanafani, Shireen Abu Akleh, Refaat Alareer, and the over 100 journalists killed in the currently ongoing war on Gaza).
It is in this spirit of the use of film and culture as a way of focusing & transmitting information & knowledge that we hope this list can be used as one in an assortment of educational tools against hasbara (a coordinated and intricate system of zionist propaganda, media manipulation, social engineering, etc.) and all forms of propaganda that are weaponized against the Palestinian people. Zionist media & its collaborators remain one of the most effective fronts of the war, used to manufacture consent through deeply ingrained psychological manipulation of the general public. Critical and autonomous thought must be used as a tool for dismantling these frameworks. In this realm, film can play a vital role in your toolkit/arsenal. Film must be understood as one front of the greater resistance. We hope in some small way we can help to distribute these manifestations of Palestinian life and the struggle towards liberation.
This list began as a small aggregation shared among friends and comrades in 2021 and has since expanded to its current and growing form (it is added to almost every day). We have links through which each film can be viewed, along with descriptions and details such as run time, year, language, etc. We also have a supplemental list of related materials (texts, audio, supplemental video) that is small but growing. We have added contact information for the distributors and filmmakers of each film in order to help people or groups who are interested in using this list to organize public screenings of these films. The makers of this list do not control the rights to these films, and we strongly urge those interested in screening the works to get in touch with the filmmaker or distributors before doing so. This list was made with the best intentions in mind, and in most cases with the permission of the filmmaker or through a publicly available link; but if any film has mistakenly been added without the permission of a filmmaker involved and you would like us to remove it, or conversely if you are a filmmaker not included who would like your film to be added, or for any other thoughts, suggestions, additions, subtractions, complaints or concerns, please contact us at [email protected]. No one involved in this list is doing it as part of any organization, foundation or non-profit, and we are not being paid to do this; it is merely a labor of love and solidarity. From the river to the sea, Palestine
metastable1 · 2 years
[...] Stuart Russell: I think this is a great pair of questions because the technology itself, from the point of view of AI, is entirely feasible. When the Russian ambassador made the remark that these things are 20 or 30 years off in the future, I responded that, with three good grad students and possibly the help of a couple of my robotics colleagues, it will be a term project to build a weapon that could come into the United Nations building and find the Russian ambassador and deliver a package to him.
Lucas Perry: So you think that would take you eight months to do?
Stuart Russell: Less, a term project.
Lucas Perry: Oh, a single term, I thought you said two terms.
Stuart Russell: Six to eight weeks. All the pieces, we have demonstrated quadcopter ability to fly into a building, explore the building while building a map of that building as it goes, face recognition, body tracking. You can buy a Skydio drone, which you basically key to your face and body, and then it follows you around making a movie of you as you surf in the ocean or hang glide or whatever it is you want to do. So in some sense, I almost wonder why it is that at least the publicly known technology is not further advanced than it is because I think we are seeing, I mentioned the Harpy, the Kargu, and there are a few others, there’s a Chinese weapon called the Blowfish, which is a small helicopter with a machine gun mounted on it. So these are real physical things that you can buy, but I’m not aware that they’re able to function as a cohesive tactical unit in large numbers.
Yeah, as a swarm of 10,000. I don't think that we've seen demonstrations of that capability. We've seen demonstrations of 50, 100, I think 250 in one of the recent US demonstrations, but with relatively simple tactical and strategic decision-making, really just showing the capability to deploy them and have them function in formations, for example. But when you look at the tactical and strategic decision-making side, when you look at the progress of AI in video games such as Dota and StarCraft, they are already beating professional human gamers at managing and deploying fleets of hundreds of thousands of units in long, drawn-out struggles. And so you put those two technologies together: the physical platforms, and the tactical and strategic decision-making and communication among the units.
Stuart Russell: It seems to me that if there were a Manhattan style project where you invested the resources, but also you brought in the scientific and technical talent required. I think in certainly in less than two years, you could be deploying exactly the kind of mass swarm weapons that we’re so concerned about. And those kinds of projects could start, or they may already have started, but they could start at any moment and lead to these kinds of really problematic weapon systems very quickly. [...]
rudrjobdesk · 2 years
DRDO's 'unmanned aircraft' impresses on its very first flight; watch the VIDEO
Image Source: DRDO Autonomous Flying Wing Technology Demonstrator. Highlights: the aircraft flew in fully automatic mode; this is a major step towards strategic defence technology; this aircraft of the future also touched down with ease. DRDO News: on Friday, yet another achievement was added to the countless accomplishments of the Defence Research and Development Organisation (DRDO). On Friday, DRDO's Autonomous Flying Wing…
nothorses · 3 months
You guys love suggesting trans women are aligned with males and trans men are aligned with females so badly that it's just gross at this point, transphobia with progressive wording is still transphobia ❤️
Y'all really love your binaries, huh? Genuinely wild that you think the only way a trans person can be valid is by being widely socially recognized as a single binary gender accurate to the single binary gender they identify with.
I for one am a firm believer in the fact that transphobia exists, and as such, trans people are positioned outside of the cis man/cis woman socio-political binary & allowed access to neither unless and until it conditionally supports the system's ability to do them harm, like, for example, by:
aligning trans women with women when enacting misogyny against them, but not when valuing (mostly white) women as pure, valuable, and worthy of protection
aligning trans women with men when fearmongering about "dangerous male predators infiltrating women's bathrooms" (i.e. weaponizing autonomy granted by the patriarchy), but viewing them as "failed men" otherwise, and generally not valuing them as men in the context of determining who is deserving of male privilege
aligning trans men with men when discussing the "horrors" of transition- acne, body hair, balding, bottom growth, "becoming ugly"- but not when valuing men as worthy of male privilege, or when understanding them to be autonomous
aligning trans men with women when enacting misogyny against them (typically revoking autonomy), but not when valuing (mostly white) women as inherently "safe" or The Victim; or otherwise understanding them to be "traitors" to womanhood
Trans people occupy a different social/political position than cis people do. This is not new information.
You shouldn't be participating in gender-related discourse if you genuinely cannot grasp the idea that there might be a third experience.