#stochastic parrots
Text
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract is formed when an offer is made and accepted, so I had a contract, and AA was violating it and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber-arms dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law-firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to be parsed as text; it's to send a signal. If the letter were well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues now write grad-student recommendation letters by feeding a few bullet points to an LLM, which swells them into lengthy paragraphs of puffery.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them by reducing them to a few bullet points. This is next-level bullshit: a few easily-grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet-points again, though these may only be tangentially related to the original.
What comes next? The reference letter becomes a useless signal. It goes from being something a prof will only produce if they really believe in you – so that its mere existence is significant – to something that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then automated internet forms produced by activists like me made those letters far easier to send, and lawmakers stopped taking them so seriously. So we built automatic dialers to let you phone your lawmakers – another once-powerful signal. Lowering the cost of making the call inevitably made the call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
2K notes
bravecrab · 11 months
Text
PODCAST EPISODES OF LEGITIMATE A.I. CRITICISMS:
Citations Needed Podcast
Factually! With Adam Conover
Tech Won't Save Us
13 notes
aibrilliance07 · 3 months
Text
What are LLMs? Large language models, explained
Large Language Models (LLMs), exemplified by GPT-4, are artificial intelligence systems that use deep learning for natural language processing. They have reshaped how we interact with technology, powering applications like text prediction, content creation, language translation, and voice assistants.
Despite their prowess, LLMs have been dubbed "stochastic parrots," sparking debate over whether they merely recombine memorized content without genuine understanding. Descended from autocomplete technology, they drive innovations from search engines to creative writing. But challenges persist: hallucinations (confidently generated false information), limits on reasoning and critical thinking, and an inability to abstract beyond specific examples. As LLMs advance, they hold immense potential, but they must overcome these hurdles before they can simulate anything like comprehensive human intelligence.
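As a rough intuition for that autocomplete lineage, here is a minimal, purely illustrative sketch – a toy corpus and a bigram lookup table standing in for a real model, which would use a deep network over tokens – of how stochastic next-word generation works:

```python
# A toy "stochastic parrot": record which words follow which in a corpus,
# then extend a prompt by sampling the next word at random from those counts.
# (Illustrative assumption: this tiny hand-made corpus stands in for real
# training data; a real LLM learns token probabilities with a deep network.)
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Every word observed to follow each word; duplicates preserve the counts.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, max_words: int = 8) -> str:
    """Extend `start` one word at a time, sampling each next word in
    proportion to how often it followed the previous word in the corpus."""
    words = [start]
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation: stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

The output looks fluent without anything being understood – the "stochastic parrot" objection, scaled down to twenty lines.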
For more information, please visit the AIBrilliance blog page.
2 notes
alanshemper · 1 year
Text
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922
15 notes
emptyanddark · 1 year
Text
weekly reading list
(some of these are not very recent, but i have a lot of other things to read. this is a short list of things i found interesting or relevant to understanding current events)
America Doesn’t Wage War. Government Institutions Do - very USA-centric but provides insights re: the prolific paramilitary organizations aided by the US government, and the de-democratization that's been happening in the US.
Trapped by Empire - Guam is one of the colonies still under US-empire rule. the island is in a difficult position with no easy solutions on any front - security, environmental, economic, etc.
“A Closed, Burnt Huwara”: How Israeli Settlers Launched A Pogrom - the harrowing happenings in last month's pogrom by Israelis against a Palestinian village.
The PA’s Revenue Structure and Israel’s Containment Strategy - how Israel restricts the PA's economic independence, worsening conditions for Palestinians who are entirely at the (non)mercy of their occupiers.
You Are Not a Parrot - the prolific linguist Emily M. Bender dispels the mystical brainrot around "AI" and Large Language Models (ChatGPT etc). Interesting and insightful. she is also one of the writers of the important article, "On the Dangers of Stochastic Parrots"
World Development under Monopoly Capitalism - reviews the question of whether globalization actually made things better, today's global capitalism, and monopoly capitalism
The Rot Economy & Mass tech worker layoffs and the soft landing - both discuss similar topics: the bizarre realities of the tech sector. as put in the latter by Doctorow: "The equation is simple: the more companies invest in maintenance, research, development, moderation, anti-fraud, customer service and all the other essential functions of the business, the less money there is to remit to people who do nothing and own everything."
Silicon Valley elites are afraid. History says they should be - people around the world were exposed by the media to the recent stupidity of US tech executives & investors, which collapsed their bank. here's a rational take on it, with some history of the more militant opposition to Silicon Valley.
The New Irrationalism - explores contemporary irrationalist trends, the history of irrationalism and its philosophy. i found it thought-provoking.
3 notes
greenjudy · 1 year
Link
“There’s a narcissism that reemerges in the AI dream that we are going to prove that everything we thought was distinctively human can actually be accomplished by machines and accomplished better,” Judith Butler, founding director of the critical-theory program at UC Berkeley, told me, helping parse the ideas at play. “Or that human potential — that’s the fascist idea — human potential is more fully actualized with AI than without it.” The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.” There’s a technological takeover, a fleeing from the body. “Some people say, ‘Yes! Isn’t that great!’ Or ‘Isn’t that interesting?!’ ‘Let’s get over our romantic ideas, our anthropocentric idealism,’ you know, da-da-da, debunking,” Butler added. “But the question of what’s living in my speech, what’s living in my emotion, in my love, in my language, gets eclipsed.”
4 notes
jdyf333 · 2 months
Video
revenge of the stochastic parrots
revenge of the stochastic parrots by Davivid Rose Via Flickr: I typed "stochastic parrots" in the Google search bar. This is what appeared. (AI humor?) Please click here to read my "autobiography": thewordsofjdyf333.blogspot.com/ And my Flickr "profile" page may be viewed by clicking on this link: www.flickr.com/people/jdyf333/ My telephone number is: 510-260-9695
0 notes
koke · 2 months
Text
3 years ago, the stochastic parrots paper was published.
When things go bad this time, at least don’t buy into the narrative that nobody saw it coming. They did, they yelled about it, they got fired for it.
0 notes
Text
Supervised AI isn't
It wasn't just Ottawa: Microsoft Travel published a whole bushel of absurd articles, including the notorious Ottawa guide recommending that tourists dine at the Ottawa Food Bank ("go on an empty stomach"):
https://twitter.com/parismarx/status/1692233111260582161
After Paris Marx pointed out the Ottawa article, Business Insider's Nathan McAlone found several more howlers:
https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8
There was the article recommending that visitors to Montreal try "a hamburger," which went on to explain that a hamburger was a "sandwich comprised of a ground beef patty, a sliced bun of some kind, and toppings such as lettuce, tomato, cheese, etc" and that some of the best hamburgers in Montreal could be had at McDonald's.
For Anchorage, Microsoft recommended trying the local delicacy known as "seafood," which it defined as "basically any form of sea life regarded as food by humans, prominently including fish and shellfish," going on to say, "seafood is a versatile ingredient, so it makes sense that we eat it worldwide."
In Tokyo, visitors seeking "photo-worthy spots" were advised to "eat Wagyu beef."
There were more.
Microsoft insisted that this wasn't an issue of "unsupervised AI," but rather "human error." On its face, this presents a head-scratcher: is Microsoft saying that a human being erroneously decided to recommend dining at Ottawa's food bank?
But a close parsing of the mealy-mouthed disclaimer reveals the truth. The unnamed Microsoft spokesdroid only appears to be claiming that this wasn't written by an AI, but they're actually just saying that the AI that wrote it wasn't "unsupervised." It was a supervised AI, overseen by a human. Who made an error. Thus: the problem was human error.
This deliberate misdirection actually reveals a deep truth about AI: that the story of AI being managed by a "human in the loop" is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.
Our brains wire together neurons that we recruit when we practice a task. When we don't practice a task, the parts of our brain that we optimized for it get reused. Our brains are finite and so don't have the luxury of reserving precious cells for things we don't do.
That's why the TSA sucks so hard at its job – why they are the world's most skilled water-bottle-detecting X-ray readers, but consistently fail to spot the bombs and guns that red teams successfully smuggle past their checkpoints:
https://www.nbcnews.com/news/us-news/investigation-breaches-us-airports-allowed-weapons-through-n367851
TSA agents (not "officers," please – they're bureaucrats, not cops) spend all day spotting water bottles that we forget in our carry-ons, but almost no one tries to smuggle a weapon through a checkpoint – 99.999999% of the guns and knives they do seize are the result of flier forgetfulness, not a planned hijacking.
In other words, they train all day to spot water bottles, and the only training they get in spotting knives, guns and bombs is in exercises, or the odd time someone forgets about the hand-cannon they shlep around in their day-pack. Of course they're excellent at spotting water bottles and shit at spotting weapons.
This is an inescapable, biological aspect of human cognition: we can't maintain vigilance for rare outcomes. This has long been understood in automation circles, where it is called "automation blindness" or "automation inattention":
https://pubmed.ncbi.nlm.nih.gov/29939767/
Here's the thing: if nearly all of the time the machine does the right thing, the human "supervisor" who oversees it becomes incapable of spotting its error. The job of "review every machine decision and press the green button if it's correct" inevitably becomes "just press the green button," assuming that the machine is usually right.
This is a huge problem. It's why people just click "OK" when they get a bad certificate error in their browsers. 99.99% of the time, the error was caused by someone forgetting to replace an expired certificate, but the problem is, the other 0.01% of the time, it's because criminals are waiting for you to click "OK" so they can steal all your money:
https://finance.yahoo.com/news/ema-report-finds-nearly-80-130300983.html
Automation blindness can't be automated away. From interpreting radiographic scans:
https://healthitanalytics.com/news/ai-could-safely-automate-some-x-ray-interpretation
to autonomous vehicles:
https://newsroom.unsw.edu.au/news/science-tech/automated-vehicles-may-encourage-new-breed-distracted-drivers
The "human in the loop" is a figleaf. The whole point of automation is to create a system that operates at superhuman scale – you don't buy an LLM to write one Microsoft Travel article, you get it to write a million of them, to flood the zone, top the search engines, and dominate the space.
As I wrote earlier: "There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes 'OK' every time a dialog box appears":
https://pluralistic.net/2022/10/21/let-me-summarize/#i-read-the-abstract
Microsoft – like every corporation – is insatiably horny for firing workers. It has spent the past three years cutting its writing staff to the bone, with the express intention of having AI fill its pages, with humans relegated to skimming the output of the plausible sentence-generators and clicking "OK":
https://www.businessinsider.com/microsoft-news-cuts-dozens-of-staffers-in-shift-to-ai-2020-5
We know about the howlers and the clunkers that Microsoft published, but what about all the other travel articles that don't contain any (obvious) mistakes? These were very likely written by a stochastic parrot, and they comprised training data for a human intelligence, the poor schmucks who are supposed to remain vigilant for the "hallucinations" (that is, the habitual, confidently told lies that are the hallmark of AI) in the torrent of "content" that scrolled past their screens:
https://dl.acm.org/doi/10.1145/3442188.3445922
Like the TSA agents who are fed a steady stream of training data to hone their water-bottle-detection skills, Microsoft's humans in the loop are being asked to pluck atoms of difference out of a raging river of otherwise characterless slurry. They are expected to remain vigilant for something that almost never happens – all while they are racing the clock, charged with preventing a slurry backlog at all costs.
Automation blindness is inescapable – and it's the inconvenient truth that AI boosters conspicuously fail to mention when they are discussing how they will justify the trillion-dollar valuations they ascribe to super-advanced autocomplete systems. Instead, they wave around "humans in the loop," using low-waged workers as props in a Big Store con, just a way to (temporarily) cool the marks.
And what of the people who lose their (vital) jobs to (terminally unsuitable) AI in the course of this long-running, high-stakes infomercial?
Well, there's always the food bank.
"Go on an empty stomach."
Going to Burning Man? Catch me on Tuesday at 2:40pm on the Center Camp Stage for a talk about enshittification and how to reverse it; on Wednesday at noon, I'm hosting Dr Patrick Ball at Liminal Labs (6:15/F) for a talk on using statistics to prove high-level culpability in the recruitment of child soldiers.
On September 6 at 7pm, I'll be hosting Naomi Klein at the LA Public Library for the launch of Doppelganger.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
West Midlands Police (modified) https://www.flickr.com/photos/westmidlandspolice/8705128684/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
1K notes
cyberianlife · 1 year
Text
Google chief evangelist and “father of the internet” Vint Cerf has a message for executives looking to rush business deals on chat artificial intelligence: “Don’t.”
Cerf pleaded with attendees at a Mountain View, California, conference on Monday not to scramble to invest in conversational AI just because “it’s a hot topic.” The warning comes amid a burst in popularity for ChatGPT.
“There’s an ethical issue here that I hope some of you will consider,” Cerf told the conference crowd Monday. “Everybody’s talking about ChatGPT or Google’s version of that and we know it doesn’t always work the way we would like it to,” he said, referring to Google’s Bard conversational AI that was announced last week.
1 note
amaditalks · 1 year
Text
So you know how the whole “culture war“ thing is just fascism? Guess what else is implicated in that? If you said AI, you’ve probably already read this really fantastic article.
If you haven’t, take 10. You need to know more about this, especially if you’ve been playing around with the machines. 
22 notes
treehuggeranonymous · 10 months
Text
I had a complete block on a paper I was writing yesterday and resorted to checking out chatgpt (not my proudest moment). And as someone who is pretty good at academic writing, what it produced was like C+ quality work and about as good as what I already had on my page. Like the thing is, the language model is trained on EVERYTHING - good writing, bad writing, average writing - and it passes no judgment on which is which. Sure it’s learning from people using it, but if those people don’t have the expertise to judge good and bad writing - I’m imagining college students who just want a passable paper - then all it’s going to learn is that better=longer instead of what actually makes good academic writing
1 note
alanshemper · 1 year
Text
Please do not conflate word form and meaning. Mind your own credulity. These are Bender’s rallying cries. The octopus paper is a fable for our time. The big question underlying it is not about tech. It’s about us. How are we going to handle ourselves around these machines?
6 notes
tielt · 1 year
Text
This has been driving me batty, but the MIT/Intel AI lead researcher/director finally spoke on it in a direct way, I’m posting the main tweets so I can be done with it for a while. A lot of people are confused by this topic and I honestly don’t have the pedagogy this person does to elaborate on the main points holistically.
No amount of compute = consciousness
This person is basically the only voice I know that is publicly on the level. I am bad about flipping the page, but this feels like closure for me for a while even though I didn't add anything. This has been really bothering me: the number of people who don't know this and just assume a one-to-one correlation is madness, and I think it's also an existential risk to people on the spectrum.
1 note