#artificial capitalism
economicsresearch · 1 year
Text
page 562 - AI wants to control and reproduce. Harness the creative output of humanity, then reproduce those images, words, and ideas for the profit of AI's creators.
I should stop saying AI wants to do anything, because this AI isn't doing anything more than what it's told to do. It's people controlling a tool and growing wealthy from it.
Like so much else, I blame English. This foolish noun-based language that sets things in stone and refuses to see the world in interconnected dynamism. A lady wrote a book once, about glaciers listening, about indigenous language and knowledge, about entire new ways of being, if only English (and the whole Enlightenment-project world it begat) weren't stepping on our heads and rubbing mud in our eyes. There are no options but this one. I'll include the book in the episode notes.
In English things need to be isolated and locked down, and the rest is just adjectives at the edges. Describe the noun, always describe the noun. Once labelled, it can be priced, bought, sold, isolated, restricted, made scarce. If you can name it, you can tame it. (damn!)
And AI serves this noun-based, categorical, post-Enlightenment world so well. Take all the nouns and recreate all the nouns.
This is control. This is nothing new being created. This is the crushing of creativity. This is wealth transferred and the crushing of a human future.
10 notes
f-identity · 1 year
Text
[Image description: A series of posts from Jason Lefkowitz @[email protected] dated Dec 08, 2022, 04:33, reading:
It's good that our finest minds have focused on automating writing and making art, two things human beings do simply because it brings them joy.

Meanwhile tens of thousands of people risk their lives every day breaking down ships, a task that nobody is in a particular hurry to automate because those lives are considered cheap. https://www.dw.com/en/shipbreaking-recycling-a-ship-is-always-dangerous/a-18155491 (Headline: 'Recycling a ship is always dangerous.' on Deutsche Welle)

A world where computers write and make art while human beings break their backs cleaning up toxic messes is the exact opposite of the world I thought I was signing up for when I got into programming
/end image description]
28K notes
puppygirldick · 22 days
Text
If housing were free, so many people would never be seen as bad people for having mental illness/disabilities. "Hotheads," "lazy" people, "people who stink," and "messy" people would never be a problem. Nobody would ever be able to say "no couples, no pets, no smoking." And it would only cost landlords their passive incomes.
585 notes
mckitterick · 9 months
Text
The End Is Near: "News" organizations using AI to create content, firing human writers
[Three screenshots (source: X)]
an example "story" now comes with this warning:
[screenshot of the warning]
A new byline showed up Wednesday on io9: “Gizmodo Bot.” The site’s editorial staff had no input or advance notice of the new AI generator, which was snuck in by parent company G/O Media.
G/O Media’s AI-generated articles are riddled with errors and outdated information, and block reader comments.
“As you may have seen today, an AI-generated article appeared on io9,” James Whitbrook, deputy editor at io9 and Gizmodo, tweeted. “I was informed approximately 10 minutes beforehand, and no one at io9 played a part in its editing or publication.”
Whitbrook sent a statement to G/O Media along with “a lengthy list of corrections.” In part, his statement said, “The article published on io9 today rejects the very standards this team holds itself to on a daily basis as critics and as reporters. It is shoddily written, it is riddled with basic errors; in closing the comments section off, it denies our readers, the lifeblood of this network, the chance to publicly hold us accountable, and to call this work exactly what it is: embarrassing, unpublishable, disrespectful of both the audience and the people who work here, and a blow to our authority and integrity.”
He continued, “It is shameful that this work has been put to our audience and to our peers in the industry as a window to G/O’s future, and it is shameful that we as a team have had to spend an egregious amount of time away from our actual work to make it clear to you the unacceptable errors made in publishing this piece.”
According to the Gizmodo Media Group Union, affiliated with WGA East, the AI effort has “been pushed by” G/O Media CEO Jim Spanfeller, recently hired editorial director Merrill Brown, and deputy editorial director Lea Goldman.
In 2019, Spanfeller and private-equity firm Great Hill Partners acquired Gizmodo Media Group (previously Gawker Media) and The Onion.
The Writers Guild of America issued a blistering condemnation of G/O Media’s use of artificial intelligence to generate content.
“These AI-generated posts are only the beginning. Such articles represent an existential threat to journalism. Our members are professionally harmed by G/O Media’s supposed ‘test’ of AI-generated articles.”
WGA added, “But this fight is not only about members in online media. This is the same fight happening in broadcast newsrooms throughout our union. This is the same fight our film, television, and streaming colleagues are waging against the Alliance of Motion Picture and Television Producers (AMPTP) in their strike.”
The union, in its statement, said it “demands an immediate end of AI-generated articles on G/O Media sites,” which include The A.V. Club, Deadspin, Gizmodo, Jalopnik, Jezebel, Kotaku, The Onion, Quartz, The Root, and The Takeout.
but wait, there's more:
Just weeks after news broke that tech site CNET was secretly using artificial intelligence to produce articles, the company is doing extensive layoffs that include several longtime employees, according to multiple people with knowledge of the situation. The layoffs total 10 percent of the public masthead.
*
Greedy corporate sleazeballs using artificial intelligence are replacing humans with cost-free machines to barf out garbage content.
This is what end-stage capitalism looks like: An ouroboros of machines feeding machines in a downward spiral, with no room for humans between the teeth of their hungry gears.
Anyone who cares about human life, let alone wants to be a writer, should be getting out the EMP tools and burning down capitalist infrastructure right now before it's too late.
648 notes
Text
Copyright won't solve creators' Generative AI problem
The media spectacle of generative AI (in which AI companies’ breathless claims of their software’s sorcerous powers are endlessly repeated) has understandably alarmed many creative workers, a group that’s already traumatized by extractive abuse by media and tech companies.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
Even though the claims about “AI” are overblown and overhyped, creators are right to be alarmed. Their bosses would like nothing more than to fire them and replace them with pliable software. The “creative” industries talk a lot about how audiences should be paying for creative works, but the companies that bring creators’ works to market treat their own payments to creators as a cost to be minimized.
Creative labor markets are primarily regulated through copyright: the exclusive rights that accrue to creators at the moment that their works are “fixed.” Media and tech companies then bargain to buy or license those rights. The theory goes that the more expansive those rights are, the more they’ll be worth to corporations, and the more they’ll pay creators for them.
That’s the theory. In practice, we’ve spent 40 years expanding copyright. We’ve made it last longer; expanded it to cover more works, hiked the statutory damages for infringements and made it easier to prove violations. This has made the entertainment industry larger and more profitable — but the share of those profits going to creators has declined, both in real terms and proportionately.
In other words, today creators have more copyright, the companies that buy creators’ copyrights have more profits, but creators are poorer than they were 40 years ago. How can this be so?
As Rebecca Giblin and I explain in our book Chokepoint Capitalism, the sums creators get from media and tech companies aren’t determined by how durable or far-reaching copyright is — rather, they’re determined by the structure of the creative market.
https://chokepointcapitalism.com/
The market is concentrated into monopolies. We have five big publishers, four big studios, three big labels, two big ad-tech companies, and one gargantuan ebook/audiobook company. The internet has been degraded into “five giant websites, each filled with screenshots from the other four”:
https://twitter.com/tveastman/status/1069674780826071040
Under these conditions, giving a creator more copyright is like giving a bullied schoolkid extra lunch money. It doesn’t matter how much lunch money you give that kid — the bullies will take it all, and the kid will still go hungry (that’s still true even if the bullies spend some of that stolen lunch money on a PR campaign urging us all to think of the hungry children and give them even more lunch money):
https://doctorow.medium.com/what-is-chokepoint-capitalism-b885c4cb2719
But creative workers have been conditioned — by big media and tech companies — to reflexively turn to copyright as the cure-all for every pathology, and, predictably, there are loud, insistent calls (and a growing list of high-profile lawsuits) arguing that training a machine-learning system is a copyright infringement.
This is a bad theory. First, it’s bad as a matter of copyright law. Fundamentally, machine learning systems ingest a lot of works, analyze them, find statistical correlations between them, and then use those to make new works. It’s a math-heavy version of what every creator does: analyze how the works they admire are made, so they can make their own new works.
If you go through the pages of an art-book analyzing the color schemes or ratios of noses to foreheads in paintings you like, you are not infringing copyright. We should not create a new right to decide who is allowed to think hard about your creative works and learn from them — such a right would make it impossible for the next generation of creators to (lawfully) learn their craft:
https://www.oblomovka.com/wp/2022/12/12/on-stable-diffusion/
(Sometimes, ML systems will plagiarize their own training data; that could be copyright infringement; but a) ML systems will doubtless get guardrails that block this plagiarism; and, b) even after that happens, creators will still worry about being displaced by ML systems trained on their works.)
We should learn from our recent history here. When sampling became a part of commercial hiphop music, some creators clamored for the right to control who could sample their work and to get paid when that happened. The musicians who sampled argued that inserting a few bars from a recording was akin to a jazz trumpeter who works a few bars of a popular song into a solo. They lost that argument, and today, anyone who wants to release a song commercially will be required — by radio stations, labels, and distributors — to clear that sample.
This change didn’t make musicians better off. The Big Three labels — Sony, Warners, and Universal, who control 70% of the world’s recorded music — now require musicians to sign away the rights to samples from their works. The labels also refuse to sell sampling licenses to musicians unless they are signed to one of the Big Three.
Thus, producing music with a sample requires that you take whatever terms the Big Three impose on you, including giving up the right to control sampling of your music. We gave the schoolkids more lunch money and the bullies took that, too.
https://locusmag.com/2020/03/cory-doctorow-a-lever-without-a-fulcrum-is-just-a-stick/
The monopolists who control the creative industries are already getting ahead of the curve on this one. Companies that hire voice actors are requiring those actors to sign away the (as yet nonexistent) right to train a machine-learning model with their voices:
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
The National Association of Voice Actors is (quite rightly) advising its members not to sign contracts that make this outrageous demand, and they note that union actors are having success getting these clauses struck, even retroactively:
https://navavoices.org/synth-ai/
That’s not surprising — labor unions have a much better track record of getting artists paid than giving creators copyright and expecting them to bargain individually for the best deal they can get. But for non-union creators — the majority of us — getting this language struck is going to be a lot harder. Indeed, we already sign contracts full of absurd, unconscionable nonsense that our publishers, labels and studios refuse to negotiate:
https://doctorow.medium.com/reasonable-agreement-ea8600a89ed7
Some of the loudest calls for exclusive rights over ML training are coming not from workers, but from media and tech companies. We creative workers can’t afford to let corporations create this right — and not just because they will use it against us. These corporations also have a track record of creating new exclusive rights that bite them in the ass.
For decades, media companies stretched copyright to cover works that were similar to existing works, trying to merge the idea of “inspired by” and “copied from,” assuming that they would be the ones preventing others from making “similar” new works.
But they failed to anticipate the (utterly predictable) rise of copyright trolls, who launched a string of lawsuits arguing that popular songs copied tiny phrases (or just the “feel”) of their clients’ songs. Pharrell Williams and Robin Thicke got sued into radioactive rubble by Marvin Gaye’s estate over their song “Blurred Lines” — which didn’t copy any of Gaye’s words or melodies, but rather, took its “feel”:
https://www.rollingstone.com/music/music-news/robin-thicke-pharrell-lose-multi-million-dollar-blurred-lines-lawsuit-35975/
Today, every successful musician lives in dread of a multi-million-dollar lawsuit over incidental similarities to obscure tracks. Last spring, Ed Sheeran beat such a suit, but it was a hollow victory. As Sheeran said, with 60,000 new tracks being uploaded to Spotify every day, these similarities are inevitable:
https://twitter.com/edsheeran/status/1511631955238047751
The major labels are worried about this problem, too — but they are at a loss as to what to do about it. They are completely wedded to the idea that every part of music should be converted to property, so that they can expropriate it from creators and add it to their own bulging portfolios. Like a monkey trapped because it has reached through a hole into a hollow log to grab a banana that won’t fit back through the hole, the labels can’t bring themselves to let go.
https://pluralistic.net/2022/04/08/oh-why/#two-notes-and-running
That’s the curse of the monkey’s paw: the entertainment giants argued for everything to be converted to a tradeable exclusive right — and now the industry is being threatened by trolls and ML creeps who are bent on acquiring their own vast troves of pseudo-property.
There’s a better way. As NAVA president Tim Friedlander told Motherboard’s Joseph Cox, “NAVA is not anti-synthetic voices or anti-AI, we are pro voice actor. We want to ensure that voice actors are actively and equally involved in the evolution of our industry and don’t lose their agency or ability to be compensated fairly for their work and talent.”
This is as good a distillation of the true Luddite ethic as you could ask for. After all, the Luddites didn’t oppose textile automation: rather, they wanted a stake in its rollout and a fair share of its dividends:
https://locusmag.com/2022/01/cory-doctorow-science-fiction-is-a-luddite-literature/
Turning every part of the creative process into “IP” hasn’t made creators better off. All it’s accomplished is to make it harder to create without taking terms from a giant corporation, whose terms inevitably include forcing you to trade all your IP away to them. That’s something that Spider Robinson prophesied in his Hugo-winning 1982 story, “Melancholy Elephants”:
http://www.spiderrobinson.com/melancholyelephants.html
This week (Feb 8–17), I’ll be in Australia, touring my book Chokepoint Capitalism with my co-author, Rebecca Giblin. We’re doing a remote event for NZ on Feb 13. Next are Melbourne (Feb 14), Sydney (Feb 15) and Canberra (Feb 16/17). I hope to see you!
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
[Image ID: A poster for the 1933 movie ‘The Monkey’s Paw.’ The fainting ingenue has been replaced by the glaring red eye of HAL9000 from 2001: A Space Odyssey.]
764 notes
chaoticace22 · 10 months
Text
[image]
367 notes
jessiarts · 2 months
Text
I feel like a lot of people who stress about being/getting popular on social media don't realize that they don't actually want to be internet famous, they just want community.
In this essay I will-
54 notes
queen-mabs-revenge · 2 months
Text
found 'the way' a really interesting piece of speculative fiction exploring the idea of anti-migrant xenophobic violence being turned inwards towards 'legitimate citizens' when interests of capital are threatened by struggle, but this sequence in the last episode def stood out to me as a neoluddite.
feels connected to this quote from dan mcquillan's 'resisting ai - an anti-fascist approach to artificial intelligence':
Bergson argued that if one accepts a ready-made problem in this way, "one might just as well say that all truth is already virtually known, that its model is patented in the administrative offices of the state, and that philosophy is a jig-saw puzzle where the problem is to construct with the pieces society gives us the design it is unwilling to show us." (Deleuze, 2002, cited in Coleman, 2008) In other words, however sophisticated or creative AI might seem to be, its modelling is stuck in abstractions drawn from the past, and so becomes a rearrangement of the way things have been rather than a reimagining of the way things could be. AI has, in effect, an inbuilt political commitment to the status quo, in particular to existing structures that embed specific relations of power. The absence of different concepts leaves out the possibility of conceiving that things could be arranged differently.
69 notes
lilithism1848 · 2 months
Text
[image]
36 notes
Link
I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
People who criticize new technologies are sometimes called Luddites, but it’s helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners’ profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine’s owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners’ attention. The fact that the word “Luddite” is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.
Whenever anyone accuses anyone else of being a Luddite, it’s worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people’s lives? Or are they just trying to increase the private accumulation of capital?
Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology—and those include uses that benefit shareholders over workers—without being described as opponents of technology.
160 notes
hasellia · 2 months
Text
"What if AI gains sentience?" You techbros can't even handle the idea of someone with a communication disability being a sentient being. You haven't even thought about a human with a different pattern of thinking, let alone the sentience of an animal that you can't keep in your home.

"Don't bully my mini Roko's basilisk!" If it gains sentience, it's going to wonder what constitutes the letter Y, a dwarf planet, or the colour red. It's not going to hate; it's going to wonder why you're mad that it doesn't care about money.
44 notes
korovaoverlook · 9 months
Text
I Sacrificed My Writing To A.I So You Don't Have To
I was thinking about how people often say "Oh, Chat GPT can't write stories, but it can help you edit things!" I am staunchly anti-A.I, and I've never agreed with this position. But I wouldn't have much integrity to stand on if I didn't see for myself how this "editing" worked. So, I sacrificed part of a monologue from one of my fanfictions to Chat GPT to see what it had to say. Here is the initial query I made:
[screenshot of the query]
Chat GPT then gave me a list of revisions to make, most of which would be solved if it was a human and had read the preceding 150k words of story. I won't bore you with the list it made. I don't have to, as it incorporated those revisions into the monologue and gave me an edited sample back. Here is what it said I should turn the monologue into:
[screenshot of the revised monologue]
The revision erases speech patterns. Ben/the General speaks in stilted, short sentences in the original monologue because he is distinctly uncomfortable—only moving into longer, more complex structures when he is either caught up in an idea or struggling to elaborate on an idea. The Chat GPT version wants me to write dialogue like regular narrative prose, something that you'd use to describe a room. It also nullified the concept of theme. "A purity that implied personhood" simply says the quiet(ish) part out loud, literally in dialogue. It erases subtlety and erases how people actually talk in favor of more obvious prose.

Then I got a terrible idea. What if I kept running the monologue through the algorithm? Feeding it its own revised versions over and over, like a demented Google Translate, until it just became gibberish? So that's what I did. Surprisingly enough, from original writing sample to the end, it only took six turnarounds until it pretty much stopped altering the monologue. This was the final result:
[screenshot of the final version]
This piece of writing is florid, overly descriptive, unnatural, and unsubtle. It makes the speaking character literally give voice to the themes through his dialogue, erasing all chances at subtext and subtlety. It uses unnecessary descriptors ("Once innocuous," "gleaming," "receded like a fading echo," "someone worth acknowledging,") and can't comprehend implication—because it is an algorithm, not a human that processes thoughts. The resulting writing is bland, stupid, lacks depth, and seemingly uses large words for large words' sake, not because they actually trigger an emotion in the reader or further the reader's understanding of the protagonist's mindset.
There you have it. Chat GPT, on top of being an algorithm run by callous, cruel people that steals artists' work and trains on it without compensation or permission, is also a terrible editor. Don't use it to edit, because it will quite literally make your writing worse. It erases authorial intention and replaces it with machine-generated generic slop. It is ridiculous that, given the writers' strike right now, studios truly believe they can use A.I to produce a story of marginal quality that someone may pay to see. The belief that A.I can generate art is an insult to the writing profession and artists as a whole—I speak as a visual artist as well. I wouldn't trust Chat GPT to critique a cover letter, much less a novel or poem.
104 notes
nando161mando · 24 days
Text
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
24 notes
alpaca-clouds · 30 days
Text
How Capitalism turned AI into something bad
AI "Art" sucks. AI "writing" sucks. Chat GPT sucks. All those fancy versions of "fancy predictive text" and "fancy predictive image generation" actually do suck a lot. Because they are bad at what they do - and they take jobs away from people, who would actually be good at them.
But at the same time I am also thinking about what kind of fucking dystopia we live in, that this had to turn out that way.
You know... I am an autistic guy who has studied computer science for quite a while now. I have read a lot of papers and essays in my day about the development of AI and deep learning and whatnot. And I can tell you: there is stuff that AI is really good and helpful for.
Currently I am working a lot with the evaluation of satellite imagery, and I can tell you: AI is making my job a ton easier. Sure, I could do that stuff manually, but it would be very boring and mind-numbing. So, yeah, preprocessing the images with AI so that I just gotta look over the results the AI put out and confirm them? Much easier. Even though at times it means that my workday looks like this: I get to work, start the process on 50GB worth of satellite data, and then go look at tumblr for the rest of the day or do university stuff.
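For anyone curious what that kind of human-in-the-loop batch preprocessing can look like, here is a minimal sketch. The classify_tile "model" is a toy stand-in and the file layout is assumed purely for illustration; it is not the actual pipeline described above.

```python
# Minimal sketch: pre-label a batch of satellite tiles with a model,
# then queue the outputs for a human to review and confirm.
# classify_tile is a hypothetical stand-in for whatever trained model is in use.
from pathlib import Path
import numpy as np

def classify_tile(tile: np.ndarray) -> np.ndarray:
    """Stand-in 'model': a toy brightness threshold producing a binary mask."""
    return (tile.mean(axis=-1) > 128).astype(np.uint8)

def preprocess_batch(in_dir: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for tile_path in sorted(in_dir.glob("*.npy")):
        tile = np.load(tile_path)                 # one satellite image tile
        mask = classify_tile(tile)                # AI does the boring part
        np.save(out_dir / tile_path.name, mask)   # saved for human confirmation

if __name__ == "__main__":
    preprocess_batch(Path("tiles"), Path("pre_labels"))
```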
But the thing is that... you know. Creative stuff is actually not boring, menial stuff where folks are happy to have the work taken off their hands. Creative work is among those jobs that a lot of people find fulfilling. But you cannot eat a feeling of fulfillment. And now AI is being used to push down the money folks in creative jobs can make.
I think movie and TV writing is a great example. When AI puts out a script, that script is barely sensible. Yet the folks who actually make something useful out of it get paid less than they would if they did it on their own.
Sure, in the US the WGA made it clear that they would not work with studios doing something like that - but the US is not the whole world. And in other countries it will definitely happen.
And that... kinda sucks.
And of course, even outside of creative fields... there are definitely jobs that are going to get replaced by automation and artificial intelligence.
The irony is that once upon a time folks like Keynes were like: "OMG, we will get there one day and it is going to be great, because a machine is going to do your work, and you are gonna get paid for it." But the reality obviously is: "A machine is going to do the work and the CEO is going to get an even bigger bonus, while you sleep on the streets, where police will then violate you for being homeless."
You know, looking at this from the point of view of Solarpunk: I absolutely think that there is a place in a Solarpunk future for AI. Even for some creative AI. But all under the assumption that first we are going to eradicate fucking capitalism. Because this does not work together with capitalism. We need to get rid of capitalism first. And no, I do not know how to start.
22 notes
mckitterick · 29 days
Text
OpenAI previews voice generator that produces natural-sounding speech based on a 15-second voice sample
The company has yet to decide how to deploy the technology, and it acknowledges election risks, but is going ahead with developing and testing with "limited partners" anyway.
Not only is such a technology a risk during election time (see the fake robocalls this year when an AI-generated, fake Joe Biden voice told people not to vote in the primary), but imagine how faked voices of important people - combined with AI-generated fake news plus AI-generated fake photos and videos - could con people out of money, literally destroy political careers and parties, and even collapse entire governments or nations themselves.
By faking a news story using realistic (but faked) video clips of (real) respected and elected officials supporting the fake story - then creating a billion SEO-optimized fake news and research websites full of fake evidence to back up their lies - a bad actor or cyberwarfare agent could take down an enemy government, create a revolution, turn nations against one another, even cause a world war.
This kind of apocalyptic scenario has always felt like a science-fiction idea that could only exist in a possible dystopian future, not something we'd actually see coming true in our time, now.
How in the world are we ever again going to trust what we read, hear, or watch? If LLM-barf clogs the internet, and lies pollute the news, and people with bad intentions can provide all the evidence they need to fool anyone into believing anything, and there's no way to guarantee the veracity of anything anymore, what's left?
Whatever comes next, I guarantee it'll be weirder than we imagine.
Here's hoping it's not also worse than the typical cyberpunk tale.
PS: NEVER ANSWER CALLS FROM UNKNOWN NUMBERS EVER AGAIN
...or at least don't speak to the caller. From now on, assume it's a con-bot or politi-bot or some other bot seeking to cause you and others harm. If they hear your voice, they can fake it saying anything they want. If it sounds like someone you know, it probably isn't them unless the call is coming from their number saved in your contacts. If it's about something important, hang up and call the official or saved number for the supposed caller.
64 notes
rowanellis · 11 months
Text
youtube
a deep dive about ai, capitalism, dystopia and... hope?
"Writing, painting, playing music, drawing - art - is something human beings do to relax, to express themselves, to bring joy and catharsis to those around them - the fact so many people are working so hard to ensure machines, that feel none of those things, can replace humans in that process - that feels instinctively soulless to me."
watch the full video essay
116 notes