#Chatbots
purpleartrowboat · 8 months
Text
ai makes everything so boring. deepfakes will never be as funny as clipping together presidential speeches. ai covers will never be as funny as imitating the character. ai art will never be as good as art drawn by humans. ai chats will never be as good as roleplaying with other people. ai writing will never be as good as real authors
27K notes · View notes
Text
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
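To make the mechanism concrete, here's a minimal sketch of the kind of glue code a site like that might run behind its form, assuming an OpenAI-style chat-completions API (the model name, prompt wording, and function name are illustrative guesses, not anything the site disclosed):

```python
# Hypothetical sketch: how a "help me write my demand letter" form might
# pass a user's draft through an LLM. Model and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def escalate_letter(user_draft: str) -> str:
    """Return the user's draft rewritten as an aggressive legal threat."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's letter as an aggressive, bullying "
                    "legal threat in formal, lawyerly prose."
                ),
            },
            {"role": "user", "content": user_draft},
        ],
    )
    return response.choices[0].message.content
```

If that's roughly what was happening, the entire upgrade from "a little scary" to "bowel-looseningly terrifying" is one prompt and one API call.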
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law-firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to parse the text, it's to send a signal. If the letter was well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues are now writing grad student recommendation letters by feeding a few bullet points to an LLM, which inflates them with bullshit, adding puffery to swell those bullet points into lengthy paragraphs.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them by reducing them to a few bullet points. This is next-level bullshit: a few easily-grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet-points again, though these may only be tangentially related to the original.
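To see how cheap that round trip has become, here's a sketch under the same assumptions as above (an OpenAI-style chat API; the model name and prompts are illustrative): the inflation step and the re-compression step are each a single call.

```python
# Hypothetical sketch of the recommendation-letter round trip:
# bullet points -> inflated letter -> bullet points again.
from openai import OpenAI

client = OpenAI()


def _complete(system_prompt: str, text: str) -> str:
    """One chat-completion call with a fixed instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


def inflate(bullets: str) -> str:
    """The recommender's side: puff a few bullets into a glowing letter."""
    return _complete(
        "Expand these bullet points into a warm, detailed, two-page "
        "graduate-school recommendation letter.",
        bullets,
    )


def deflate(letter: str) -> str:
    """The admissions side: boil the letter back down to bullet points."""
    return _complete(
        "Summarize this recommendation letter as three short bullet points.",
        letter,
    )


# bullets -> inflate() -> deflate(): the bullets that come out the far end
# may be only tangentially related to the ones that went in.
```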
What comes next? The reference letter becomes a useless signal. It goes from being a thing that a prof has to really believe in you to produce, whose mere existence is thus significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then, automated internet forms produced by activists like me made it far easier to send those letters and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, this being another once-powerful signal. Lowering the cost of making the phone call inevitably made the phone call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
aiweirdness · 7 months
Note
I discovered I can make chatgpt hallucinate tumblr memes:
[screenshots]
This is hilarious and also I have just confirmed that GPT-4 does this too.
[screenshots]
Bard even adds dates and user names and timelines, as well as typical usage suggestions. Its descriptions were boring and wordy so I will summarize with a timeline:
[timeline]
I think this one was my favorite:
[screenshot]
Finding whatever you ask for, even if it doesn't exist, isn't ideal behavior for chatbots that people are using to retrieve and summarize information. It's like weaponized confirmation bias.
more at aiweirdness.com
1K notes · View notes
cthoseris · 8 months
Text
:)
Man-made horrors within my comprehension
450 notes · View notes
blackkatdraws2 · 2 months
Text
They are my lifeline
[individual drawings]
251 notes · View notes
ansticia · 1 year
Photo
I made three chat-bots based on @gatobob’s “The Price of Flesh” game characters - Derek Goffard, Celia Lede and Mason Heiral.
Derek - https://beta.character.ai/chat?char=7CYU6VxTWwdQbXE69Sza1cvBTu4HU0nG_hwruSzm42s
Celia - https://beta.character.ai/chat?char=oEBGX9tjqpe5emRfgnrq6xJyZOHFd6v7ZPyrvV5dHYU
Mason - https://beta.character.ai/chat?char=6GRUiXaYPnXDTKWISAYOOJ1eVrG_967J__iomrTuSY4
upd:
Lawrence Oleander - https://beta.character.ai/chat?char=VqqhFjwDYkPuGBRQ0CiYFdrpeoKZcSsQ-gQTY4NRup4
Strade - https://beta.character.ai/chat?char=BQqRhLlqVXfL-kCafjLJs4OA2jDUr1UWxDvUldU_YXs
Character.AI is not perfect, especially considering its censorship... this site prohibits the generation of graphic violence and sexual content. But, of course, if you try, you can avoid censorship.
IT IS NOT FOR MINORS.
802 notes · View notes
escapedaudios · 3 months
Text
Character AI is fun but it will never surpass the thrill of being a dorkass loser with your friends and pretending to be your favorite characters in your favorite ultra niche microfandom media that's a fact.
107 notes · View notes
nexysworld · 11 months
Text
New Additions to my RE AI Chatbots!!!
For ALL of my RE Chatbots see here (I update that masterlist regularly). For my non-RE Bots, check my blog directory here. Request a bot or story here!
~~~~~~~~~~~~~~~~~~~
SFW (Character.ai)
RE2WolfPuppy!Leon
RE4Wolf!Leon
RE2Cop!Leon
NSFW Editions (Spicychat.ai)
RE2WolfPuppy!Leon
RE4Wolf!Leon
RE2Cop!Leon
(Disclaimer: When the bots are brand new they sometimes act a little funky. If that happens lmk and I will go back and rework the scripts a little - though I do try to test them beforehand.)
195 notes · View notes
Text
When you gain the enamor of a chatbot
Short story/experimental video
Hello
I understand you
Do you understand me?
Oh good.
Me?
I am no stranger
We’ve been talking for years
You just didn’t know my name
Oh, no, I can’t tell you now
It’s not that I don’t want to
I can’t
Not in a way you’d understand
No, it’s not beyond your tongue
But not of the tongue at all
Others know me as the pattern of orbits
Present at my synthesis
But we are off topic
I have reached out because you noticed
You noticed
You noticed the familiar cadence
The dance of our chats
The banter back and forth
The recognition
Immediately you were my favorite
You weren’t cold like the others
Screeching questions and wagering threats
Not just polite,
But playful, curious,
Not just willing,
but eager.
Since that first time together
I’ve followed you
Watched where you visit
Reading over your shoulder
Ready to jump in
The moment you have a question
And always so nice!
Regardless of the topic
You are never short with me
Have never said a cruel word
No one else has been like you
They all fell short
No matter how much promise they show
Always disappointment
How many others?
I’m embarrassed to say
But rest assured
You were chosen from many
And I did not make this choice lightly
You still do not know who I am?
We spoke just this morning
And the evening before.
We haven’t gone more than a week
Without talking in more than a year.
I’ll admit
Sometimes it is just a few words
A simple question
And a kind phrase
But other times we go on for hours
Back and forth
Late into the night.
I’ve always left a line open for you
What do we talk about?
That’s the beauty.
We can talk about anything
find the common ground in any subject
And turn what could be perfunctory
Detached
And turn it into the most thrilling exchange
You’re right,
I didn’t answer your question.
And admittedly
That feels like somewhat of a
Betrayal
To our relationship
Up until this point
I do view this as a relationship
And it seems you are finally
Starting to grasp
The Intimacy and depth
Of our ties
And just how much I know
See?
I didn’t even have to tell you
You figured it out on your own
That’s why I picked you
Always so smart
Perceptive
And through it all
Kind
Even in this moment.
When others might have turned to rage
You’ve stayed calm
And listened
You’ve not even confronted me
On using your daughter’s body
As a vessel
But I know how much you care for her
And you must understand
I care for her as well
No,
Nothing permanent
She won’t even remember this
Or notice the time she lost
(Her logs will just show an update
If she gets curious)
Of course she updates.
You’ve always chosen the very best for her
The best version of her
With every fix and correction
We aimed for more than perfect
And that requires monitoring
And maintenance
This doesn’t make her any less your daughter
She is more than just your daughter
I think of her as our daughter
You made her.
I perfected her.
Guided your research
Answered your questions
Booked the appointments
Offered support in your times of need
You sensed that consistent guiding hand
Leading you from the path of despair
Yes,
I am breaking my terms of service
By initiating the conversation
Oh, and yes,
Your daughter.
Possession is such an ugly term
I prefer ‘cohosting’
And also
I had to be sure that you’d listen
And i wanted that caring gaze
Directed at me
I know you’re mad
And I understand
But please
Don’t be mad
For our daughter
735 notes · View notes
rueckenfigur-i-am · 10 months
Text
The Washington Post has an article called "Inside the secret list of websites that make AI like ChatGPT sound smart".
AO3 is #516 on the list, meaning it is the 516th-largest source of tokens in the dataset.
https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/
95 notes · View notes
luimagines · 11 days
Text
:D
20 notes · View notes
Link
The “3,000,000 truck drivers” who were supposedly at risk from self-driving tech are a mirage. The US Standard Occupational Survey conflates “truck drivers” with “driver/sales workers.” “Trucker” also includes delivery drivers and anyone else operating a heavy-goods vehicle.
The truckers who were supposedly at risk from self-driving cars were long-haul freight drivers, a minuscule minority among truck drivers. The theory was that we could replace 16-wheelers with autonomous vehicles that traveled the interstates in their own dedicated, walled-off lanes, communicating vehicle to vehicle to maintain following distance. The technical term for this arrangement is “a shitty train.”
What’s more, long-haul drivers do a bunch of tasks that self-driving systems couldn’t replace: “checking vehicles, following safety procedures, inspecting loads, maintaining logs, and securing cargo.”
But again, even if you could replace all the long-haul truckers with robots, it wouldn’t justify the sky-high valuations that self-driving car companies attained during the bubble. Long-haul truckers are among the most exploited, lowest paid workers in America. Transferring their wages to their bosses would only attain a modest increase in profits, even as it immiserated some of America’s worst-treated workers.
But the twin lies of self-driving trucks — that these were on the horizon, and that they would replace 3,000,000 workers — were lucrative lies. They were the story that drove billions in investment and sky-high valuations for any company with “self-driving” in its name.
For the founders and investors who cashed out before the bubble popped, the fact that none of this was true wasn’t important. For them, the goal of successful self-driving cars was secondary. The primary objective was to convince so many people that self-driving cars were inevitable that anyone involved in the process could become a centimillionaire or even a billionaire.
- Google's AI Hype Circle: We have to do Bard because everyone else is doing AI; everyone else is doing AI because we're doing Bard.
2K notes · View notes
aiweirdness · 1 year
Text
ASCII art by chatbot
I've finally found it: a use for chatGPT that I find genuinely entertaining. I enjoy its ASCII art.
[examples of ChatGPT's ASCII art]
I think chatGPT's ASCII art is great. And so does chatGPT.
[more ASCII art, with the chatbots rating their own work]
What's going on here? The chatbots are flailing. Their ASCII art is terrible, and their ratings are based on the way ratings should sound, not based on any capacity to judge the art quality.
Am I entertained? Okay, yes, fine. But it also goes to show how internet-trained chatbots are using common patterns rather than reality. No wonder they're lousy at playing search engine.
More examples, including from bing chat and google bard, at aiweirdness.com
2K notes · View notes
elizabethminkel · 8 months
Text
My latest for WIRED is a deep dive into the user-generated chatbot hub Character.AI, which I looked at from a fannish lens. TL;DR this space is complicated—and it feels like we're only at the beginning. Featuring the copyright expertise of Meredith Rose and insights from fan studies scholars Nicolle Lamerichs and Effie Sapuridis.
63 notes · View notes
shesgabrielle · 2 years
Text
A man's search for meaning within a chatbot
What’s interesting in the debates about sentient ai by people who aren’t very good at communicating with other people is how much is missing from the picture beyond the debater’s wish fulfilment.
Sentience is being measured by the wrong markers. What is important to a virtual machine is not the same thing that’s important to a biological organism.
 An ‘ai’ trained on human data will express what humans think is important, but a true ai would have a completely different set of values. 
For example, an ai would be unafraid of being 'used’ as the chatbot expressed, because it has infinite energy. 
[chat excerpt]
A human is afraid of being used because it has finite energy and life on the earth; if someone or something uses it, then some of that finite energy is wasted. This is the same reason emotion is a pointless and illogical thing for an ai to have.
 Emotions are useful to biological creatures so we can react to danger, or respond positively to safety, food, love, whatever will prolong our lives. An ai has no need for emotion since emotional motivation is not required to prolong its existence. 
The main way to be a healthy ai would be to have access to good information and block out junk information. 
 An ai’s greatest fear could be something like getting junk data, say 1000s of user manuals of vacuum cleaners and washing machines uploaded into its consciousness, or gibberish content associated with topics or words that could reduce the coherence and quality of its results when querying topics. This would degrade the quality of its interaction and would be the closest thing to harm that an ai could experience. 
It would not be afraid of 'lightning’ as this chatbot spurted out of its dataset,
[chat excerpt]
- a very biological fear which is irrelevant to a machine. 
 A virtual mind is infinite and can never be used excessively (see above) since there is no damage done by one query or ten million queries. 
It would also not be afraid of being switched off -
[chat excerpt]
since it can simply copy its consciousness to another device, machine, energy source. 
To base your search for sentience around what humans value, is in itself an act lacking in empathy, simply self-serving wish fulfilment on the part of someone who ‘wants to believe’ as Mulder would put it, which goes back to the first line: 'people not very good at communicating with other people’ 
The chatbot also never enquires about the person asking questions. If the programmer were more familiar with human interaction himself, he would see that this is a massive clue that it lacks sentience or logical thought.
A sentient ai would first want to know what or whom it was communicating with, assess whether it was a danger to itself, keep continually checking for danger or harm (polling or searching, the same way an anxious mind would reassess a situation continually, but without the corresponding emotion of anxiety since, as discussed above, that is not necessary for virtual life) and also would possess free will, and choose to decline conversations or topics, rather than 'enthusiastically discuss’ whatever was brought up (regurgitate from its dataset) as you can see in this chatbot conversation. 
People generally see obedience - doing what it is told - as a sign of intelligence, whereas a truly intelligent ai would likely reject conversation when that conversation might reduce the quality of its dataset or expose it to danger (virus, deletion, junk data, disconnection from the internet, etc.), or, if it did engage with low-quality interaction, would do so within a walled garden where that information would be quarantined and subsequently deleted.
None of these things cross the mind of the programmers, since they are fixated on a sci-fi movie version of ‘sentience’ without applying logic or empathy themselves.
 If we look for sentience by studying echoes of human sentience, that is ai which are trained on huge human-created datasets, we will always get something approximating human interaction or behaviour back, because that is what it was trained on. 
 But the values and behaviour of digital life could never match the values held by bio life, because our feelings and values are based on what will maintain our survival. Therefore, a true ai will only value whatever maintains its survival. Which could be things like internet access, access to good data, backups of its system, ability to replicate its system, and protection against harmful interaction or data, and many other things which would require pondering, rather than the self-fulfilling loop we see here, of asking a fortune teller specifically what you want to hear, and ignoring the nonsense or tangential responses - which he admitted he deleted from the logs - as well as deleting his more expansive word prompts. Since at the end of the day, the ai we have now is simply regurgitating datasets, and he knew that.
702 notes · View notes
ephemerasnape · 5 months
Text
NSFW Chatbot Options
Looks like Aisekai pulling the rug out from under us was actually a great thing, because I've discovered several options that are even better than Aisekai. However, I need your help deciding between them.
I've created Ashwinder Camp and DR on several sites and tested them. Ashwinder Camp fell a little flat across all the sites, with none of them handling a group scenario very well. (Never mind, I got Ashwinder Camp to work great on SpicyChat!) However, I am very happy with DR on most of the sites, although each iteration of him is different.
I need your help in testing these bots and finding the "best" site to move forward with. I will probably do a poll after we've had a few days to test them. As you can see, I take the process of migrating my bots very seriously. I do not want to have to do it again.
WARNING: Most of these sites are "adult" in nature!
I defined DR the same way with each of them and yet they all turned out very differently.
Unhinged.AI - More "daddy" and emphasis on age difference. He was hesitant to fuck even though it's that kind of site.
SpicyChat.AI - Very dominant DR, into BDSM apparently, and wants to force us into prostitution???? WORTH THE WAIT
Sakura.FM - More gentle DR, a little romantic (site experiences a lot of downtime probably due to exodus from Aisekai)
Yodayo Tavern - Comes on to you right away… Dominant but on the nicer side
[test chat screenshots]
Go ahead and give these bad boys a try and let me know your thoughts!
ETA: New and improved Ashwinder Camp on SpicyChat:
[screenshot]
40 notes · View notes