Text
Tumblr media
Today is one of the last posts I'll probably make on here.
Yes, it is indeed sad, but I am also indeed basically out of cats.
It's not that there aren't more things to talk about.  The list of ways in which AI is affecting our lives is growing faster than I can pump out these cats!  It's been that way from the start, though.  But a line has to be drawn at some point, right?  I can't keep doing this forever.
Actually, when I started this, I really thought that I'd just do, like, 5 or 10.  But then the hard truths just kept coming.  Well, on tumblr, but mostly being beamed directly into my brain.  So I kept making the cats.
But while the leaves on the tree of topics we could cover keep growing, the trunk, and many of the branches, remain steady.  And I think this provides a good base to cover the current state of AI.
Will I post again?  At least one more time, probably.  Maybe a few.  Will there be more cats?  Less clear.  Will there be more hard truths?  Always.
This last post encapsulates everything I, and many experts, believe about AI.  Yes, it's grown incredibly capable.  Yes, it has the potential to change the world in seemingly incredible ways.  When deployed right, it makes us all that much more capable.  When deployed "wrongly," it makes only a few people more powerful, and leaves the rest of us weakened.  I truly believe that AI can be a force for equity and good.  But it won't be that force unless we make it work that way.  And, if AI doesn't work for everybody, then it may as well work for nobody.
For those of you who swung by, thank you for the ride.  It's not often you revive a friend's ~10 year old project.  Thank you, Mittens Humblecat, for the opportunity!
-Ace
5 notes
Text
Tumblr media
I've spoken before about how the Terminator Doomsday Scenario is probably not imminent.  And, that is true!
But when we get to the idea of AI-powered weaponry, there are still real risks.  There are also opportunities.  I mean, it might be better if toys blow up instead of people.  And, they actually probably have the potential to be more precise and "surgical" than a human operator, meaning they could theoretically focus on targeting combatants only (though as we know, AI systems are easily fooled).
But in order for killbots to not be a major threat to society, they have to be controllable.  And there are at least a few barriers to that guarantee.
1. There must be a killswitch that the system cannot override.  That means it can't be purely software, or at least not something that is interpreted through a learned model.  Because it is at least conceptually possible for a neural system to misinterpret a "stop" command.  We just don't have the guarantees.
2. There must be a backup plan if communications to the 'bot fail.  This should be obvious.  But moreover, we already have examples of "rogue" weaponry that kills people at an alarming rate and that we have trouble removing: landmines.
3. There has to be some way for humans to intervene.  This is the trickiest part.  The power of AI-powered weaponry is that it can respond faster than humans can.  The weakness of AI-powered weaponry is that it can respond faster than humans can.  What happens when a machine can kill 5 people before a human can even stop it?  What happens if they're the wrong people?  What happens if that same machine is not, say, firing a gun, but firing missiles?
It is always difficult to regulate weaponry; even nuclear arms treaties have seemed to take a step back rather than forward as of late.  And it is important not to let irrational fear dictate this technology; it is too easy to look at robots that can kill, make the mental leap to Skynet and annihilation, and then say the problem, fundamentally, is AI and it should be banned.  I won't cite a certain Time article here, because it's trash and neither Time nor the article's authors deserve the clicks, but this argument has been posited before.  It's a distraction from a problem that policymakers must discuss in earnest, and with measure.
1 note
Text
Tumblr media
Early update this week due to travel.
There was a recent story that a dental ad was using Tom Hanks's likeness --- an AI-generated image, used without his consent. There is so much data out there about Tom Hanks that it's not hard to do.
But even for those of us who aren't Tom Hanks, is faking someone's likeness really that crazy? It's been shown that computers can take decent stabs at people's faces from their voices. From a few images, you can create lots of fake images of someone. Okay, yeah, they're not all perfect, but they don't all have to be. From limited footage you can animate someone to say what you want. And not just their mouth, either. Have you done 23andme? It's not too hard to guess someone's basic features from their genome. No, you can't create a full person. But what happens when these data sources are pooled?
The fact is, if someone had that data, as technology gets better, they would be able to fake more of you. So much data about people is on the web, sometimes owned by companies, sometimes public. And more and more, data gets leaked.
People have used fake audio to take people's identity and use that to scam people. That's not quite as futuristic or bizarre as holographic Tom Hanks telling you to see a dentist. But the threat is real, and over time, it will probably get realer.
1 note
Text
Tumblr media
What is artificial intelligence?
Over the years, it’s been defined in many ways.  Alan Turing, considered to be the “father” of AI, introduced the Turing Test, and claimed that a machine was artificially intelligent if it could convincingly mimic the speech patterns of a human.  If you were talking to someone over the phone, could a computer convincingly speak to you as a human would, to the point that you wouldn’t even question its humanhood?
More recently, there has been a surge of folks who consider AI to be a program that learns, or has learned, its capabilities from data or experience.
These are both good attempts, but they are also both wrong.  Yes, it sounds weird to call out the father of artificial intelligence as being wrong about artificial intelligence.  But, to quote Isaac Newton, “If I have seen that Alan Turing is wrong, it is by standing upon the shoulders of Alan Turing.”  Or something like that.  With 100 years of additional hindsight, he would probably think he’s wrong too.
The problem with the first, i.e. Alan Turing’s definition, is that there are many things that sound intelligent to the average person but are not: parrots, tape recorders, my ex-partner.  And there are many things that do not sound intelligent or even comprehensible to the average person but at least on some level are: Computer bytecode, secret messages, Dan Quayle.  In fact, computers have even “won” Turing tests by simply appearing convincingly, humanly confused.
The problem with the second definition is that it precludes prescribed knowledge.  If you encode a very clever reasoning capability into software, is it not smart?  Is it not intelligent?  Even throughout nature, many organisms are born with knowledge imprinted in their neural circuitry, typically survival skills for themselves or their species.
So, what is artificial intelligence?
Classically, the accepted definition in the field has been that AI is search.  No, this does not mean robots that can go find buried treasure; rather, this means agents that can logically consider possibilities in order to find a solution to a problem.  They are searching for an answer, and they do that by inductively and/or deductively exploring candidate solutions through trial and error.  When they find one that works, they have simulated its result, and thus can prove to you that they have found the answer.
This definition worked for a long time.  You use this type of technology, maybe even every day.  A GPS navigation app searches for a path from your starting point to your requested destination, and can show you a path from point A to point B that you can see is correct, maybe even optimal.  When you do a text search for a file on your computer, when it finds you the file with the name you’re looking for, you can see that it’s the one you wanted.
This version of AI racked up all sorts of wins for decades.  But, it also pushed up against the limits of tractability and observability.
A perfect example of this is in chess.  Chess is perhaps the most classic application of AI, ever since a man in 1770 dressed up as a robot and pretended to be a chess-playing computer, and if you don’t believe me look up “The Mechanical Turk.”  For the longest time, humans summarily defeated their silicon counterparts in chess and similarly complex board games (such as Shogi and Go).  The problem was that AI players just couldn’t cut it by using search.  These so-called chess engines worked by imagining a move, then imagining all of the moves that their opponent could play in response, then imagining all of the moves that they could play in response to all of those moves…and so on.  So, if a computer wanted to consider one move, it might then have to consider four countermoves by its opponent, and then the 4 x 4 countermoves it would play in response to those, then the 4 x 4 x 4 countermoves its opponent could play…even if we assume that a player only has up to four moves at any given time (and they usually have more), by looking ahead to just move 20 it has to consider over a trillion moves.  Even being clever about pruning obviously bad moves, a computer can’t consider all reasonable strategies and prove to you it’s making the best moves.  Its opponent would die of old age before it finished that search.
So, chess AI started relying on heuristics.  It would look ahead only three or four moves, then decide “is this a good position to be in?”  It did so by applying a heuristic function — based on what it knew about the future of the game, it would take a guess as to how good the game position was, based on a mathematical score it would assign.
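To make that concrete, here is a minimal sketch of depth-limited search with a heuristic evaluation at the frontier.  The "game" and the evaluation function are toy stand-ins of my own, nothing resembling a real chess engine:

```python
# A toy sketch of depth-limited minimax: search a few moves deep, then fall back
# on a heuristic guess about how good the resulting position looks.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None   # out of lookahead: use the heuristic guess
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy "game": players alternate appending a digit 1-3; the maximizer wants a big total.
legal = lambda s: [1, 2, 3] if len(s) < 6 else []
apply_m = lambda s, m: s + [m]
score_guess = lambda s: sum(s)         # the heuristic: "how good does this position look?"

print(minimax([], 4, True, legal, apply_m, score_guess))
```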
And by picking good heuristic functions and searching just deep enough at any given time, IBM’s Deep Blue supercomputer was finally able to beat the reigning world champion, and one of the top chess players of all time, Garry Kasparov, in 1997 (after losing a first match to him in 1996).
These same heuristic guesses are also used in cases of partial observability, by virtue of necessity.  In instances where a computer can’t exhaustively search all possible outcomes because it doesn’t have all information, it has to get by on some amount of guesswork.  A self-driving car, for example, can’t use an exhaustive search in choosing its next maneuver.  It might know all the roads, but it has to take guesses about the position of other cars outside of the range of its cameras.
But coming up with these heuristic functions is difficult, and arguably worse, it’s boooring.
Luckily, a subdiscipline of AI, called “machine learning” or ML, had a potential solution to automate away all of the boring (read: difficult) aspects.  Rather than prescribe mathematical guesses for things computers can’t see, what if computers could learn to make good guesses, from data or experience?  After all, this is what humans do.
Machine learning allows computers to look at datasets, use some data to try to improve their heuristics, use some other data to test the efficacy of those heuristics, and iteratively try to improve them some more.  That “testing” step is key, by the way — machine learning fundamentally is only useful if it produces heuristics that are predictive on some “test” data that is not used for training.  That predictiveness, versus pure data-fitting, is the fundamental difference between statistics and machine learning.  Machine learning is trying to be useful in the future, but it’s not necessarily trying to explain what it’s seen in the past.
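As a toy illustration of that train/test discipline (the data and the "heuristic" here are invented for this sketch, not from any real system), here's the whole loop in a few lines:

```python
# Fit a simple "heuristic" (a line) on one slice of data, then score it only on
# held-out data it never saw during fitting.
data = [(x, 2 * x + 1 + (0.1 if x % 2 else -0.1)) for x in range(20)]  # noisy line
train, test = data[:15], data[15:]                                     # hold out the last 5 points

n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in train) / sum((x - mean_x) ** 2 for x, _ in train)
intercept = mean_y - slope * mean_x

# The key step: judge the learned heuristic on the *test* slice only.
test_error = sum((y - (slope * x + intercept)) ** 2 for x, y in test) / len(test)
print(f"learned y = {slope:.2f}x + {intercept:.2f}, held-out squared error = {test_error:.4f}")
```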
From its advent in the 1940s to the early 2010s, machine learning worked…to a point.  Like, not really super well in most domains.  Like, 80-90% well. Computers were B+ students in many domains.  Yeah, you could use them to reasonably guess property values.  But they had difficulty with some domains.  Images.  Words.  Predicting the behavior of humans.  In the 1960s, Marvin Minsky famously thought that learning models of human vision was a problem that could be solved in two months, but by 2010 computers still couldn’t solve CAPTCHAs (thankfully) or tell you how many cats were in a given picture (the answer is always: not enough).  Until the late 00s, computers weren’t competitive at trivia games such as Jeopardy!, not because they didn’t know a lot of facts, but because they couldn’t understand the language.  And even when they eventually triumphed, they did so through statistical guesswork, matching key phrases with corresponding knowledge in their databanks.  In the mid 00s, Netflix finally was able to predict a movie you might like to watch next — to a roughly 90% success rate.
All of these attempts at creating human-like intelligence were largely very statistically, mathematically based.  But humans probably aren’t constantly running numerical computations in their brains for every decision they make.  What if we tried a different, more bioinspired approach, such as mimicking the human brain?  Neural networks, perhaps one of the most famous types of machine learning model, were invented to do just that.
Now, it’s important to note that these “neural networks,” really kind of sort of look nothing like neural networks of the brain.  They’re, uhh…made of something we call “neurons.”  But these “neurons” are more like simple mathematical switches, like digital logic gates on a computer.  But computers can do a lot, ad so could these artificial neural networks!  And without going too far into the details, each individual neuron can predict more types of phenomena than the most classical statistical models.  So, you could imagine the potential power these neurons would provide when tons of them were linked up together into a massive circuit!
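For the curious, here is roughly what one of those "switches" looks like in code.  The weights are hand-picked for illustration, not learned:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed through a sigmoid: a soft on/off switch.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# With these hand-picked weights, the neuron behaves roughly like an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(neuron([a, b], weights=[4.0, 4.0], bias=-6.0), 2))
```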
That’s right, they provided no power whatsoever, because designing that circuit — what types of switches there are and how they are hooked up to one another — was itself a massive search problem that computers couldn’t solve.  Neural networks sucked.
That is, until 2012.
In 2012, certain parts of computer hardware (specifically the GPU, which was traditionally used for creating graphics) had gotten more powerful, we had learned to create better search algorithms for neural networks, and we had invented new types of neural networks that were easier to search.  And suddenly, this new hardware and these new software approaches could be combined to design very large and very capable neural networks in a matter of days.  With the colloquially titled “AlexNet,” deep learning was proven to be both practical and powerful.
And how did we harness this deep learning power in order to solve humanity’s most pressing problems?
First, we used deep learning to learn to identify cats in pictures.  That was called supervised learning.  Then, we were able to determine if cats in pictures were different enough from one another to be put into separate “classes”; this is commonly known as unsupervised learning.  Then, we were able to teach robot cats how to walk, from their own experience, in what’s referred to as “reinforcement learning.”  (This same type of machine learning has also led computers to finally reign victorious in Shogi and Go, and to extend their dominance in chess.)
And now, in 2023, just ten years later, we have reached the apex of computer intelligence.  We are able to use deep learning to generate images of cats, in what’s known as “generative machine learning,” or “generative AI.”
Clearly, this book is AI’s magnum opus.
All machine learning is on some level guesswork, but generative machine learning, which fundamentally draws outputs based on probabilities, is the “guessiest” of them all.  It provides plausible possibilities, such as images, sentences, or music, based on what a person wants, without any guarantees of correctness.
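As a toy illustration of "drawing outputs based on probabilities" (the words and the numbers here are made up), this is the whole move: sample something plausible, never verify it.

```python
import random

# Made-up distribution for the next word after "the cat sat on the".
next_word_probs = {"mat": 0.5, "sofa": 0.3, "keyboard": 0.15, "theorem": 0.05}
words, weights = zip(*next_word_probs.items())

for _ in range(5):
    print("the cat sat on the", random.choices(words, weights=weights)[0])
```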
Oh dear.  I fear we have lost the plot, in this essay, in the field of AI, and maybe in society.
I claimed that AI is search.  Machine learning-powered AI is search that uses some guesswork, relying on heuristics learned from data.  And generative AI is also search, but it’s a quick search, one in which the computer is so confident about what it has learned that it assumes that the very first guess it makes is correct and declares victory.
Is this intelligent, if there is no verification?
Is this artificial, if it’s so based in real-world data?
At the beginning of this chapter, I was arrogant.  I said, pompously, that Alan Turing, as well as many of my living colleagues, were wrong with their two definitions of AI, and that my third definition was correct.  But maybe I’m no better than a generative AI algorithm, immediately confident that my learned instincts would necessarily produce me the right answer.
Generative AI is the “new hotness” in AI, but it is but one flavor.  Every iteration listed above has come with its own strengths and weaknesses, and with those weaknesses, hidden repercussions.
3 notes
Text
Tumblr media
Algorithmic bias, misinformation, accountability, AI-powered weaponry, technological displacement, data ownership...these issues have been around for decades. We just tend to be complacent until shit hits the fan.
6 notes
Text
Tumblr media
In my opinion, one of the coolest AI research projects was published in 2020, on "Xenobots," automata constructed, Frankenstein-style, out of cells harvested from frog embryos. They didn't have on-board computation or control, but they were algorithmically designed, optimized in shape and cell type (passive or actuating) for forward locomotion. In fact, for those in the know on these types of methods, the Xenobots were actually designed by evolutionary algorithms.
Imagine that! Synthetic evolution creating synthetic organisms.
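For those curious what "designed by evolutionary algorithms" means mechanically, here is a minimal sketch of the mutate/score/select loop.  The 8-cell body encoding and the fitness function are toy stand-ins of my own, nothing like the real Xenobot pipeline:

```python
import random

random.seed(0)
BODY_LEN = 8   # a "body" is 8 cells, each passive (0) or actuating (1)

def fitness(body):
    # Toy objective: actuators help locomotion, but an all-actuator body is penalized.
    return sum(body) - 2 * (sum(body) == BODY_LEN)

def mutate(body):
    i = random.randrange(BODY_LEN)
    return body[:i] + [1 - body[i]] + body[i + 1:]

population = [[random.randint(0, 1) for _ in range(BODY_LEN)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```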
There are a few caveats here. It should be noted that I just lied to you. They're not really organisms, because they can't grow (they have to be created by hand) and they don't genetically reproduce. (But whoa, there is a follow-on that shows some limited physical reproduction capability.) And, they don't live long.
So we're not really "playing god" yet. We haven't designed a genotype, just a phenotype, so we're playing Legos at this point.
But it's worth noting that we can edit genomes! And we're starting to understand the genetic code more and more each day.
This is easily the most speculative cat so far, but there will likely come a day when we can design and grow wholesale artificial organisms. There is nothing inherently optimal about evolution as it exists in nature; evolution is merely a description of a system in which things vary, and those that are more likely to reproduce do, in fact, reproduce. But if we can design organisms faster than natural evolution can, then one day we might create whole new species. In the short-term, that's AI-designed GMOs for better farming. In the long-term, that's AI-designed dragons.
But in the short-term, that's also AI-designed invasive species, and in the long-term, that's also AI-designed virulent diseases.
Just something to keep an eye on, or maybe some day, three eyes on.
3 notes
Text
Tumblr media
The color of the cats here is no accident. Many job interviews now require a component that records your face. And these systems could learn to be biased against women and minorities. It has certainly already happened once before, even without a video component.
13 notes
Text
Tumblr media
AI has the power to create deterrents, foster global collaboration with efficient exchange of ideas and cultural understanding, modify warfare to be more targeted toward opposing militaries, and involve fewer humans...
Or we could just wind up blowing each other up as quickly as possible, in wars caused by as much disinformation as possible.
I guess we'll see!
5 notes
Text
Tumblr media
I'm not really liking how its dunce cap seems to be a biological component of its fleshy head.
...anyway.
A year ago, one of the biggest advancements in AI technology was also one of the dumbest. It's titled "Large Language Models are Zero-Shot Reasoners," by Takeshi Kojima et al. Here, "zero-shot reasoner" means that large language models, in this case GPT, can solve a problem on the first try, without being given even one worked example (a "shot") of that kind of problem, and without any additional training specifically for it. In other words, you ask the model a problem, it thinks, it gives an answer. It doesn't need any examples or feedback about whether it was right or wrong or anything like that.
If this doesn't seem surprising, then that's because you feel that this is the way it should be. And this is the way it is today. ChatGPT usually gets things right on the first try. Or it doesn't. I'd say it gets most easier things mostly "right" more often than not, though. The point is, you don't have to iterate with it even once in most cases.
But ChatGPT uses GPT-4 or GPT-3.5. This paper was looking at the previous model, GPT-3, which was smart for an AI system, but a lot dumber than GPT-4. It wasn't solving hard math problems with regularity. But the authors were using it to do just that. How, you ask?
"Let's think step by step."
Yep, that's it. Those five words before any question led the efficacy of GPT to skyrocket from 18% to a whopping 72%!!!!!!!
WHAT.
The other day, DeepMind researchers released a preprint ("Large Language Models as Optimizers," by Chengrun Yang et al) that showed that further performance boosts can be attained on new models by starting off a math question by saying "Take a deep breath and work on this problem step-by-step," bringing one of their language models (PaLM 2-L-IT) up to 80% success. That's a B-! That's respectable!
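In code, the entire "trick" is string concatenation. This is a sketch of my own (the question is a paraphrase of the kind of arithmetic word problems in these benchmarks), and the actual call to a language model is left out because it depends on whichever provider you use:

```python
def build_prompts(question: str):
    """Return a plain zero-shot prompt and the zero-shot chain-of-thought variant."""
    plain = f"Q: {question}\nA:"
    stepwise = f"Q: {question}\nA: Let's think step by step."
    return plain, stepwise

plain, stepwise = build_prompts(
    "A juggler has 16 balls. Half of the balls are golf balls, and half of the golf "
    "balls are blue. How many blue golf balls are there?"
)
print(stepwise)  # the whole intervention: five extra words appended to the prompt
```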
But here's the thing: algorithms, especially language models, already think step by step. And, let me be the first to tell you:
Computers don't breathe.
I'm sorry you had to find out this way. Take all the time you need to collect yourself, the rest of this post will be here when you're ready.
What is going on here? Large language models that power applications like ChatGPT are trained from data. And embedded in that data are stories about human emotions, human behavior. And that means that somewhere, embedded in the model, are the same priors. Somewhere, the model has an internal state of gung-ho brashness, or nervousness --- or at least an in silico version of that.
What I don't know is why these "hacks" work. I don't know what changes in the model, internally, to make it more effective. If I knew --- if we knew --- we would just change that directly. Maybe, from this research, we'll know how. But this is evidence that buried somewhere deep within each of these models is a hidden internal behavioral bias informed by the writings of humanity. And that can capture all of our potential --- or, all of our risk.
Once burned, twice shy: I always feel we are just a few rounds of prompt engineering away from a second Tay.
1 note
Text
Tumblr media
I remember reading about Sophia soon after "she" was unveiled. It was like watching some sort of surreal vintage Soviet sci-fi --- a humanoid head that wouldn't be out of place in Disney World's Hall of Presidents, grafted onto a circa-2000 Honda ASIMO, as if it were a high-tech Sid reject from Toy Story. A cool bit of animatronic work, sure, but by no means intelligent. It was relying on some sort of ancient expert-systems technology for its dialogue, its control algorithms were as janky as humanoid robots' control algorithms ever were, and I wouldn't trust it to hold a glass of water, let alone something important.
And yet the media loved it.
I was annoyed. It was the highest-profile example of news media parroting a press release as if it were true, with almost no due diligence. Oh haha "hot robot tells SXSW she wants to destroy world," great reporting, CNBC. "Robot tells UN she can run world better than them." I will go on record and say that the CEO of this company is a grade-A grifter, that for n years he would push this non-functional product as if it's a next-gen organism. His activity in creating crappy humanoid robots is not malpractice, but his existence as a human is malpractice.
Putting a chatbot into a crappy robot does not make it any more human, sentient, or interesting to talk about than the crappy robot by itself or the non-sentient chatbot by itself. It's just three pieces of technology stapled together, with no new capabilities. When you convince a country to give a robot citizenship in some desperate play for relevancy on a world stage, you are wasting everyone's time and attention, and pretending to be doing something profoundly philosophical, when, in fact, you quite frankly are not.
More recently I saw a company claim it has a robot CEO. Oh, does it make better decisions now that it's in a robot body?
Any time you see a story like this, please do me a favor and do not click on these links. Do not give them ad revenue. Those reporting on these stories in 2023 do not deserve credit for filling out niches in this techno-toxic ecosystem. Just assume that everything inside is bullshit, and move on.
Ah crap, I should have said that before I put all the links in here.
Incidentally, there is a weird pattern of making all of these human-like robots women. Don't get me wrong, I want women to keep chipping away at that glass ceiling. I just want it to matter when they do.
3 notes
Text
Tumblr media
IP law has always been thorny. It's run by legislators who are very busy with many other responsibilities, and it is thus filled with loopholes. Patent trolls, squatters, and other bad actors highlight the more malicious side of the patent world. But many would argue that patents are also sometimes granted for too long, and that that can inhibit access to technology and processes that can otherwise aid the world --- especially when the patented invention is the only way to solve the problem it's addressing.
But patents also give individuals and small companies runway to develop their inventions into products, and give larger companies an incentive to innovate. If you spend the time, money, and resources to invent something new, you want to make sure somebody isn't simply going to swoop in and copy it. Technical infrastructure isn't always enough if the method is non-obvious to discover but simple to do once you know the trick.
So what happens when AI begins automating the process of invention? After all, aside from the cost of filing, what's to stop an AI system from simply generating tons of ideas and submitting them for application, especially if it can deduce that, with high probability, those ideas will work? And, a step further, what's to stop an AI system from automating litigation, as has already started happening? Well, now you have a system where invention --- and one avenue to prosperity --- is taken out of the hands of the people and put in the hands of the machine and its owner.
One way to address this is to make AI-powered patents unenforceable. But then, as the good cat above says, who owns anything anymore? Right now, AI cannot own the patent, but a human (controller?) of an AI system can be listed. But if that system is left to run fully autonomously...well, who owns that? The person who flipped the switch and turned it on? The person who owns the computers? So far, unless credit can be properly attributed, the answer is...nobody. We need ways to assign credit. And until we do, we'll have systems where everyone can use AI to invent, but the invention is nothing more than a lottery ticket, something that has the power to turn into something valuable, but has no intrinsic value itself.
1 note
Text
Tumblr media
With apologies to South Park. Actually, on second thought, with no apologies to South Park. They're doing just fine.
We're back for what might be a final run here, and we're going to kick it off with X-Risk, or existential risk. This is something that has captured the general public's imagination for the longest of times; you can go back a century and see movies about killer robots taking over the globe.
But nowadays, even those fanciful discussions are much less tangible. People don't talk about a "robot apocalypse" or a "WarGames"-type scenario; people tend to begin and end the conversation at "existential risk." AI is an existential risk, because it will become AGI (artificial general intelligence), and then...
...and then what, exactly? I don't really know! And if you talk to these types of people, they don't really want to fill in the blanks.
Let's break down a few misconceptions first. "AGI" is not some godly supercomputer. It's not Ultron, it's not HAL 9000 --- AGI refers to an AI system that can do "general" reasoning. Whereas narrow AI has focused on solving problems with a very clearly specified input in a very clearly specified format, AGI systems would, conceptually, be able to process arbitrary forms of input without being told the task with high specificity. But that AGI system would then be able to take that "mushy" input and do something useful with it. AGI engines are general reasoners.
You know who else are general reasoners? Humans. And the advent of humans has not led to the apocalypse.
Well, not yet, anyway. And certainly not inherently. We've been around for millions of years.
Some argue that Large Language Models like ChatGPT are getting close to AGI, and that might or might not be true, setting aside that they are confined to textual input and are generally disembodied. It's going to be a long time before computers are able to adequately reason about certain sensory inputs such as touch, or smell.
People often conflate AGI with ASI --- or "Artificial Superintelligence" --- which is AGI that far surpasses human capabilities. That's not nearly as close as AGI, which is not necessarily close in and of itself. AI's biggest possible advantage over humans is its silicon form. If neural networks can help computers reason at a high level, and they can perform numerical calculations way faster than humans, then computers can surpass humans on at least one front --- speed and efficiency. But, with the caveat that I've been wrong before, I'll say that it's doubtful that computers will imminently completely surpass humans in terms of reasoning capabilities. Computers might have more data to draw from, but that data is, typically, from humans, and while it's easy to learn something that you're taught, it's harder to discover new knowledge.
What does this mean for existential risk? Well, I'd argue we're at least two steps away from reaching ASI. But even if we reached ASI, it's not clear how that dooms humans. Yes, there are risks, but how would ASI lead to humans going extinct? And why would it? Science fiction often presumes the existence of some authoritarian, rogue, or evil AI system that can't be stopped. But it's not clear why, in the real world and not Hollywood, such a system would emerge in the first place.
Somebody who believes this stuff, please, I implore you. Draw me the through-line.
1 note
Text
We are taking a second, brief hiatus :). Then it will probably be the home stretch before we're out of hard truths again :(
5 notes
Text
Tumblr media
This blurb is courtesy of @Pin__Terest on Twitter. I could not think of a more apt description. Clearly, there is vast knowledge buried in there from (literal) epochs of experience. But, really, after you've managed to finesse out what you think you want, how can you even be sure it's right?
6 notes
Text
Tumblr media
Honestly, computers kill every day. They kind of have to, right? They run so many aspects of our lives, computer glitches happen, machines malfunction, people get hurt or die. We can call those accidental deaths. We (for some reason) operate under the assumption that computers will make mistakes and malfunction, and say that that is okay, it's all part of the Grand Calculus. I say "for some reason" because humans have weirdly lax standards when it comes to general computing reliability, but at least we treat most of these systems as safety-critical systems, which do have higher standards.

If you go up the stack a bit, you can find lots of places where computers contribute to people's deaths --- but it's hard to say they directly cause them. Algorithmic credit systems with embedded biases and hospital analytics, for example, make decisions that vastly impact a person's well-being. It's really hard to quantify what the impact of these systems is, but still, it's probably understudied. The cynic in me says that maybe this type of research is strategically underfunded, though it's also possible that a mixture of difficulty and the quandary of where to focus one's efforts has left us with poor overall estimates. But I almost feel it should be possible to have a running scoreboard.

Going further up the stack, we start getting into the land of autonomous, AI-powered cyberphysical systems. The thing is, there probably are far fewer deaths to speak of here! For example, a Tesla car on Autopilot accidentally strikes someone darting through the street at night. Even the larger numbers reported are likely a safer rate than human driving. These are accidental deaths. But the weird thing is, people tend to dissociate this from "robots" or "AI" killing, even though that is exactly what happened --- a robot car killed a person. It wasn't a murder, and that's a key difference. It wasn't supposed to do that. But that doesn't mean it wasn't a bad outcome, and it is probably the most common, baddest outcome we've seen so far.
Going further up the stack still, we get to the land of military AI weaponry. People tend to get extremely cagey here, and scared. It's easy to imagine a scenario where weaponry becomes more and more autonomous and more and more decisions are taken out of humans' hands. The biggest issue is not machines overthrowing their human owners, but rather, machines making decisions so quickly that humans don't have a chance to stop them --- even if they're effectively doing the thing that humans asked them to. This could be both accidental killing and purposeful killing.

Famously, in the 1980s, the U.S. and the Soviet Union almost went to nuclear war --- not because either side was actually threatening a strike, but because of a computer glitch. Sunlight reflecting off high-altitude clouds led Soviet early-warning systems to confuse solar radiation with incoming American missiles. A human operator had to make a judgment call, and ultimately decided, no, it doesn't make sense that the Americans would launch such a strike, not at this time, not in this way. It certainly helped that there was always a direct line of communication between the two superpowers at the time, in part to make sure that situations like this did not happen. But what happens when this is not the decision of a human, but of a computer, and that system makes a mistake?

This can also happen in war, in real time; a machine with a gun has to decide whether or not to shoot a target. Is it a threat, or a civilian? What happens when it's a rebel, who could be both? There are serious ethical quandaries here. We can try to build answers to them into AI systems, and if we are really going down this path, we must. At the least, I personally don't think there is anything inherently unethical about AI weapons in war. I'd rather have the machines largely kill each other than have them kill human soldiers. And it's even possible that, like with automobiles, they do outperform humans. People don't like the apparent stochasticity that comes with this, and I get that. But I see nothing signaling that this is inherently worse than war as usual, although I do wonder what a cyberattack would mean in a case like this.
People are uncomfortable when you say that robots can murder people. It was the source of much hullabaloo and debate in San Francisco recently. There was a report of a drone killing an operator in simulation (simulation!!), and people became concerned. These stoke visions of, well, the Terminator. And it worries me that technology creeps into people's lives, sometimes takes those lives away, and if it happens slowly enough, people lose the ability to care. Enough of our lives is already run by algorithms that people do not bat an eye. It feels like the old adage about the robotic frog in the slowly heated oil bath (or I think that was the gist, anyway). Some people will go all the way up the stack, and worry about "existential risk," real-world analogues of Skynet enslaving or exterminating the globe. But I'll argue: before worrying about that, worry about all these other layers that are already getting people killed first.
People who should know better have a history of worrying about the future. What happens when AI eventually kills someone, they ask, when it's clearly already happened? Drones have clearly already done this. I'd love to see their surprised Pikachu reactions to that detail. I'm not telling you to panic, but don't worry about the future --- worry about the present.
6 notes
Photo
Tumblr media
Most companies are in business solely to make money; some are arguably even legally obligated to prioritize returns for their shareholders.  As such, they are aligned with their wallet.  Don’t expect them to make products that are good for their customers if it conflicts with revenue, market share, investors’ goals, partners’ goals, and so on.
At the end of the day, believe it or not, capitalism rears its ugly head once again, and is a cause for many of the other issues listed here.  Is it profitable?  Then if it’s discriminatory, delusional, gatekeeping, thieving, dangerous --- none of that matters until it significantly hits a company’s bottom line or reputation.  This is not unique to AI, but when a new technology has so many thorny issues needing to be solved already, the potential consequences are combinatorial.
Also, yes, you should be concerned that two of these business cats are not wearing pants.
6 notes
Photo
Tumblr media
It’s really dystopian, when you think about it --- people are being paid poverty wages to potentially automate themselves out of a job.  That linked story is just one of many, but suffice it to say, there isn’t enough in the way of open, raw textual data to train AI models for all modalities (such as conversational chat) over the wide variety of topics that people care about.  The things people say on message boards only get you so far, and as we’ve seen, companies are trying to restrict access to those.
But wait, it gets weirder and wilder, because humans are accidentally striking back, using AI systems to automate their own training labor.  The inevitable result is a nightmare cyborg matryoshka of self-corrupting data.  We’ve seen recent reports that training on artificial data can cause model corruption.  Unwittingly, it seems, the snake is biting its own tail, but I for one welcome our new ouroboros overlords.
3 notes