#slaughterbots
sifytech · 5 months
Text
Robots on the Battlefields of Ukraine and in the Tunnels of Gaza!
Tumblr media
Once you make a killer robot that can hunt and kill people on its own, you can't put that genie back in the box - Nigel. Read More. https://www.sify.com/technology/robots-on-the-battlefields-of-ukraine-and-in-the-tunnels-of-gaza/
0 notes
emiliciouspanda · 2 months
Text
Tumblr media
I’ve been doodlin’ Revenant a lot recently 🤭
10 notes · View notes
end-note-2021 · 8 months
Text
1 note · View note
secattention · 2 years
Link
0 notes
ascendancygallery · 2 years
Photo
Tumblr media
Conspiracy Theorist 33 A.D. (Oil on board 133cm x 57cm)
0 notes
quasi-normalcy · 3 months
Text
Hot Take: If you're in a position to say "Don't invent the torment nexus!" then it's already too late. It's individualist vanity to assume that you're the only one who can see the potential for new inventions in already-existing trends. That said, science fiction novels like "Don't Invent the Torment Nexus" potentially have an important role to play in getting regulation of new technologies going before those technologies become established. Like, I really, REALLY hope we can get an international convention against Slaughterbots up and running.
39 notes · View notes
Text
The Drowning Noodlerator and their Slaughterbot
5 notes · View notes
chloeisntstudying · 1 year
Text
hi friends! it's been a while since i posted, but i'm now on semester 2 of my first year!! i'm diving into some pretty interesting courses this semester, like science and technology in history, history of chinese medicine, as well as AI and new technology law :) i've been having lots of thoughts swirling around my brain after my tutorials, and then i remembered i have a studyblr, so why not post them here?
i had a tutorial on AI and new technology law earlier this morning, and one of the key questions posed to us was this: should we develop lethal autonomous weapons systems that act independently without human intervention, or should humans be kept in the loop in the AI's decision making? this is a pretty complex topic and we had a few lively rounds of debates, so bear with me as i parse out the different sides of the argument before sharing my thoughts.
those who supported the development of AI weapons had these main points:
it would solve the issue of humans' lapses of judgement on the battlefield, which could in turn minimise the risk of further casualties
it's more accurate -> fewer unintended casualties
it could lessen the trauma and moral burden that humans feel whenever they're forced to kill someone
it could act as a deterrent to other countries seeking war
it could lead to wars fought entirely by AI without the need for human soldiers
those who did not support the development had these main points:
a machine does not have morals or feel empathy, so it shouldn't have the final say in who to kill (e.g. the clip where a soldier shares how his team spared the life of a little girl, which an AI wouldn't have done)
humans should retain agency and responsibility, especially when it comes to the taking of another's life; by pushing responsibility onto the AI, we may end up killing more mindlessly than before (lessening the value of life, in a way)
AI isn't absolutely right all the time; if humans aren't at the helm, AI may make the wrong decisions or get hacked
our professor then showed us a clip from 'Slaughterbots', where AI weapons fell into the wrong hands and were used to target innocents simply because they showed support for a certain political party. the debate then turned to the regulation of AI weapons and whether regulation is even possible.
personally, i feel like the development of AI weapons is an inevitability. even if governments aren't the ones funding such research, private companies would certainly do it for the money. just imagine how high a price such weapons would fetch! if we want to stop the development of such high-risk, high-reward weapons, there'll need to be international cooperation. think about it: even if certain countries agree not to develop such weapons, as long as other countries are developing them, those in the agreement would simply be on the losing end. in order to protect themselves, everyone would end up developing such weapons - it's a tit-for-tat reaction that has been observed throughout history, like in the making of nuclear weapons back in the cold war. but international cooperation will likely never occur, since everyone will inevitably want to protect their country the best that they can.
assuming that countries continue to develop AI weapons, should human intervention be factored into the AI's decision-making process? i do not think there is a straightforward answer to this, but i do think that by clearly defining the purpose of such weapons, the issues of mass carnage and morality could be mitigated. i think that the scenarios of AI choosing to kill everyone, or people using AI to kill everyone at the push of a button, all stem from the weapons' purpose of attacking. to attack is to maximise collateral damage on the other side. but what if the purpose of AI weapons is to defend? shoot only when necessary, no matter whether a machine or a human is behind the decision. wouldn't this reduce the risks of mass carnage that people are so worried about?
of course, i'm aware that this is an idealistic view of the situation; someone has to be the one to attack for us to defend, after all. people would even argue that attacking first is a way of defending yourself, by eliminating the enemy before they can eliminate you. but if we just look at the question of morality in the usage of AI weapons, then this is my answer: program it to defend, not attack, and you'll find yourself having an easier time sleeping at night.
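(side note: to make the 'human in the loop' and 'defend, not attack' ideas concrete, here's a toy python sketch i scribbled down. every name, threshold and function in it is made up purely for illustration - it's obviously not how any real weapons system is built.)

# toy sketch only: all names and numbers here are invented for illustration
from dataclasses import dataclass

@dataclass
class Track:
    """a hypothetical detected object reported by some sensor."""
    is_firing_at_us: bool   # the defend-only trigger
    confidence: float       # how sure the classifier is, from 0.0 to 1.0

def human_confirms(track: Track) -> bool:
    # stand-in for a real operator console; here it just asks on the terminal
    answer = input(f"engage? (firing_at_us={track.is_firing_at_us}, "
                   f"confidence={track.confidence:.2f}) [y/N] ")
    return answer.strip().lower() == "y"

def should_engage(track: Track, human_in_the_loop: bool = True) -> bool:
    # defend-only rule: never engage unless we are actually being attacked
    if not track.is_firing_at_us:
        return False
    # ignore low-confidence classifications entirely
    if track.confidence < 0.9:
        return False
    # human-in-the-loop rule: the machine can only propose, a person decides
    if human_in_the_loop:
        return human_confirms(track)
    # fully autonomous mode: the machine decides alone - the branch the whole debate is about
    return True

basically, the whole tutorial debate comes down to whether that human_confirms() call is allowed to exist, and my 'defend, not attack' point is just the first if-statement.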
as for the regulation of AI weapons, it is obvious that we'd need laws to ensure that these don't fall into the wrong hands. for instance, perhaps only the government can utilise such weapons. we'd need to look at who has access to these weapons and why, and whether the number of people who would benefit from their usage is justifiable when weighed against the risks. but even with regulations in place, they'll likely be flouted anyway. just take a look at gun control, or controls on other similar weapons. there's always the black market and illegal traders, illegal research and development labs. it is unlikely that we can ever have complete control over the development and distribution of such coveted weapons.
overall, i think that as long as there is development of such weapons, there will always be issues of morality and distribution (and, as i assumed based on human greed and past patterns, development will inevitably continue with limited international cooperation). i know that this seems to paint a pretty bleak picture of the future of AI weapons, with the bad guys always finding a way, but i guess that's just the reality of life. we just have to try our best to make the best out of it, and try to maximise the benefits while limiting the harms. i think it really comes down to how hard governments and regulatory authorities are willing to crack down on these issues -- or whether they crack down on them at all, with other more pressing issues at hand.
if you made it all the way here, thanks for reading my thoughts and rambles. these posts won't really be organised in any way, since they're just a way for me to sort through my thoughts and opinions after classes and clear my brain :)
2 notes · View notes
tommychook · 2 months
Video
youtube
Slaughterbots - if human: kill()
0 notes
dertaglichedan · 7 months
Text
As the Defense Department is pushing aggressively to modernize its forces using fully autonomous drones and weapons systems, critics fear the start of a new arms race that could dramatically raise the risk of mass destruction, nuclear war and civilian casualties.
The Pentagon and military tech industry are going into overdrive in a massive effort to scale out existing technology under what is known as the Replicator initiative. It envisions a future force in which fully autonomous systems are deployed in flying drones, aircraft, water vessels and defense systems, connected through a computerized mainframe to synchronize and command units.
Arms control advocates fear the worst and worry existing guardrails offer insufficient checks, given the existential risks. Critics call self-operating weapons “killer robots” or “slaughterbots” because they are powered by artificial intelligence (AI) and can technically operate independently to take out targets without human help.
These types of systems have rarely been seen in action, and how they will affect combat is largely unknown, though their impact on the landscape of warfare has been compared to the introduction of tanks in World War I.
***
Imagine it getting hacked. What are the odds???
1 note · View note
sifytech · 1 year
Text
Slaughterbots: The Nuclear Condition
Tumblr media
Welcome back, Sifyites, to Episode 2 of our Slaughterbots podcast. Joining me are Sindhu, Satyen, and Nigel; I'm Prathmesh Kher. Read More. https://www.sify.com/podcast/slaughterbots-the-nuclear-condition/
0 notes
hackernewsrobot · 8 months
Text
Sci-Fi Short Film “Slaughterbots” [video] (2017)
https://www.youtube.com/watch?v=O-2tpwW0kmU
0 notes
eurekadiario · 8 months
Text
An AI arms race could create a world terrified of "swarms of killer robots", warns a founder of Skype
"Perhaps we are creating a world in which no one is safe anymore unless they stay shut in at home, because you could be hunted down by swarms of killer robots."
Tumblr media
© Bulgac / Getty Images
These words of warning come from Jaan Tallinn, one of the engineers involved in the birth of Skype. Tallinn made the remarks in a recent interview with Al Jazeera.
The Estonian programmer is also the founder of the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute, two organisations devoted to studying and mitigating potential risks, and specifically the risks arising from the development of advanced artificial intelligence technologies.
Tallinn's references to swarms of killer robots recall a 2017 short film called Slaughterbots, released by the Future of Life Institute itself as part of a campaign warning of the risks of building AI technologies into weapons.
The film depicts a dystopian future in which the world has been conquered by military drones controlled by an artificial intelligence.
As AI continues to evolve, Tallinn is particularly concerned about the implications of its military uses.
"Putting AI in the military sphere makes humanity's control over this technology ever more untenable, because at that point its development becomes part of an arms race," Tallinn explained in the interview.
"When you're in the middle of an arms race, you don't have much room to think about the implications of the technology you're creating. You can only think about the capabilities it offers and the strategic advantage it can give you."
Moreover, the specialist adds, AI already being part of warfare could make it practically impossible to attribute future attacks or hold anyone accountable for them.
"The natural evolution of fully automated warfare is the emergence of swarms of tiny drones that anyone with money can produce and deploy almost anonymously."
The Future of Life Institute told Business Insider that it stands by the statements its founder made to Al Jazeera.
The fears voiced by Tallinn and his institute have in fact existed for years. The Future of Life Institute was founded almost a decade ago, in 2014, and soon attracted the attention of magnates such as Elon Musk, who donated 10 million dollars to the organisation in 2015.
But the issue has received more attention lately, right after OpenAI launched ChatGPT late last year and other generative AI models reached the general public, sending fears soaring that AI will end up taking jobs away from humans. The concern is now shared by researchers, celebrities, technologists and ordinary citizens alike.
Filmmaker Christopher Nolan has also remarked recently that AI could come to have its Oppenheimer moment. In short, many researchers are questioning the responsibility they might bear in developing a technology whose consequences could be unforeseeable.
Earlier this year, hundreds of experts, among them Musk himself, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers from Google's AI lab DeepMind, and academics and other researchers, signed and circulated an open letter put forward by the Future of Life Institute. The letter called for a six-month pause in the research and development of AI models.
Although Musk signed it, the billionaire CEO of X, SpaceX and Tesla was already hiring staff to launch his own generative AI venture, which goes by the name xAI.
"Advanced AI could represent a profound change in the history of life on Earth, and it should be planned for and managed with commensurate care and resources," that open letter stated.
"Unfortunately, that level of planning is not happening, and recent months have shown AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or control."
0 notes
hi36go · 11 months
Link
Killer Drone
0 notes
zoesquonk · 1 year
Text
Short fic I put on cohost, which I'm thinking to use for longer long-form stuff.
1 note · View note
vgianfrancesco · 1 year
Text
Why is it that battling any bird Pokémon is hell on fucking earth. Like the Pokémon you’ve been able to barely keep alive if you hit them with a tiny bit of electricity have become Slaughterbots and Murder Golems and hit you like a giant fucking truck and are damn near impossible to catch.
0 notes