#im become techbro
1o1percentmilk · 3 months
Text
contemplating making youtube tutorials for security topics? AUGHH NO GET AWAY FROM ME THE THIRD YEAR CS MAJORS BECOMING YOUTUBERS STEREOTYPE IS TOO REAL
10 notes · View notes
puppyeared · 3 months
Text
i have to say it's a strange experience taking classes on branding and marketing while being vehemently anticapitalist and scornful of the economic system
36 notes · View notes
irlcryingcatmeme · 15 hours
Text
silicon valley hbo in detroit become human au...... where everything is the same but jared is an office worker android sent by hooli, except not really, bc he deviated when richard declined belson's offer, and joining pied piper was the first thing he did after deviating
0 notes
onemillionfurries · 11 months
Text
my biggest pet peeve with web 2.0 dying is that people really think that the cryptobro web 3.0 is the inevitable next step. like it's written in stone that the internet will take the form of NFTs and cryptocurrency and the "metaverse" in the future.
like. come on. when was the last time you actually saw anything NFT related in the year of our lord 2023? has ANYONE other than the cryptobros and their bots on twitter been talking about it?? what about the "metaverse"??? im pretty sure Meta itself even dropped their vr "metaverse" crap.
also even if the "mainstream" internet DOES go in that direction, that doesn't mean we need to follow? Despite the total takeover of web 2.0 and social media, personal sites and forums DO still exist. there are still IRC chats and MUCKs and MUDs. We will still have websites and web browsers in a web 3.0-dominated internet. even if those become niche.
plus, i think the whole reason web 2.0 and social media took off in the first place was because it was convenient. rather than building your own website from scratch, you instead had your own profile that you could easily upload photos to, set your mood on, and make small posts from. people saw it as a way to easily keep up with friends, so they flocked to it.
what the fuck does web 3.0 offer to the average person? needing to put on a clunky, expensive set of goggles to virtually attend a work meeting? artificial scarcity assigned to shitty monkey jpegs?? other than techbros and billionaires creating artificial hype around this shit to get people flocking there, what actually has people staying?
143 notes · View notes
catboxghost · 5 years
Text
i think the biggest problem in the tech industry, and maybe even the wider world, comes down to the fact that computer science majors don’t know anything except computer science and then think that qualifies them as a complete human being
if you’re in computer science or considering computer science PLEASE consider taking some art classes or ethics courses or you’ll end up like the dudes trying to capitalize off of school shootings for money
1 note · View note
Text
Facebook thrives on criticism of "disinformation"
The mainstream critique of Facebook is surprisingly compatible with Facebook’s own narrative about its products. FB critics say that the company’s machine learning and data-gathering slides disinformation past users’ critical faculties, poisoning their minds.
Meanwhile, Facebook itself tells advertisers that it can use data and machine learning to slide past users’ critical faculties, convincing them to buy stuff.
In other words, the mainline of Facebook critics starts from the presumption that FB is a really good product and that advertisers are definitely getting their money’s worth when they shower billions on the company.
Which is weird, because these same critics (rightfully) point out that Facebook lies all the time, about everything. It would be bizarre if the only time FB was telling the truth was when it was boasting about how valuable its ad-tech is.
Facebook has a conflicted relationship with this critique. I’m sure they’d rather not be characterized as a brainwashing system that turns good people into monsters, but not when the choice is between “brainwashers” and “con-artists selling garbage to credulous ad execs.”
As FB investor and board member Peter Thiel puts it: “I’d rather be seen as evil than incompetent.” In other words, the important word in “evil genius” is “genius,” not “evil.”
https://twitter.com/doctorow/status/1440312271511568393
The accord of tech critics and techbros gives rise to a curious hybrid, aptly named by Maria Farrell: the Prodigal Techbro.
A prodigal techbro is a self-styled wizard of machine-learning/surveillance mind control who has seen the error of his ways.
https://crookedtimber.org/2020/09/23/story-ate-the-world-im-biting-back/
This high-tech sorcerer doesn’t disclaim his magical powers — rather, he pledges to use them for good, to fight the evil sorcerers who invented a mind-control ray to sell your nephew a fidget-spinner, then let Robert Mercer hijack it to turn your uncle into a Qanon racist.
There’s a great name for this critique, criticism that takes its subjects’ claims to genius at face value: criti-hype, coined by Lee Vinsel, describing a discourse that turns critics into “the professional concern trolls of technoculture.”
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
The thing is, Facebook really is terrible — but not because it uses machine learning to brainwash boomers into iodine-guzzling Qnuts. And likewise, there really is a problem with conspiratorial, racist, science-denying, epistemologically chaotic thinking.
Addressing that problem requires that we understand the direction of the causal arrow — that we understand whether Facebook is the cause or the effect of the crisis, and what role it plays.
“Facebook wizards turned boomers into orcs” is a comforting tale, in that it implies that we need merely to fix Facebook and the orcs will turn back into our cuddly grandparents and get their shots. The reality is a lot gnarlier and, sadly, less comforting.
There’s been a lot written about Facebook’s sell-job to advertisers, but less about the concern over “disinformation.” In a new, excellent longread for Harper's, Joe Bernstein makes the connection between the two:
https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/
Fundamentally: if we question whether Facebook ads work, we should also question whether the disinformation campaigns that run amok on the platform are any more effective.
Bernstein starts by reminding us of the ad industry’s one indisputable claim to persuasive powers: ad salespeople are really good at convincing ad buyers that ads work.
Think of department store magnate John Wanamaker’s lament that “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Whoever convinced him that he was only wasting half his ad spend was a true virtuoso of the con.
As Tim Hwang documents brilliantly in his 2020 pamphlet “Subprime Attention Crisis,” ad-tech is even griftier than the traditional ad industry. Ad-tech companies charge advertisers for ads that are never served, or never rendered, or never seen.
https://pluralistic.net/2020/10/05/florida-man/#wannamakers-ghost
They rig ad auctions, fake their reach numbers, fake their conversions (they also lie to publishers about how much they’ve taken in for serving ads on their pages and shortchange them by millions).
Bernstein cites Hwang’s work, and says, essentially, shouldn’t this apply to “disinformation?”
If ads don’t work well, then maybe political ads don’t work well. And if regular ads are a swamp of fraudulently inflated reach numbers, wouldn’t that be true of political ads?
Bernstein talks about the history of ads as a political tool, starting with Eisenhower’s 1952 “Answers America” campaign, designed and executed at great expense by Madison Avenue giant Ted Bates.
Hannah Arendt, whom no one can accuse of being soft on the consequences of propaganda, was skeptical of this kind of enterprise: “The psychological premise of human manipulability has become one of the chief wares that are sold on the market of common and learned opinion.”
The ad industry ran an ambitious campaign to give scientific credibility to its products. As Jacques Ellul wrote in 1962, propagandists were engaged in “the increasing attempt to control its use, measure its results, define its effects.”
Appropriating the jargon of behavioral scientists let ad execs “assert audiences, like workers in a Taylorized workplace, need not be persuaded through reason, but could be trained through repetition to adopt the new consumption habits desired by the sellers” (Zoe Sherman).
These “scientific ads” had their own criti-hype attackers, like Vance “Hidden Persuaders” Packard, who admitted that “researchers were sometimes prone to oversell themselves — or in a sense to exploit the exploiters.”
Packard cites Yale’s John Dollard, a scientific ad consultant, who accused his colleagues of promising advertisers “a mild form of omnipotence,” which was “well received.”
Today’s scientific persuaders aren’t in a much better place than Dollard or Packard. Despite all the talk of political disinformation’s reach, a 2017 study found “sharing articles from fake news domains was a rare activity” affecting <10% of users.
https://www.science.org/doi/10.1126/sciadv.aau4586
So, how harmful is this? One study estimates “if one fake news article were about as persuasive as one TV campaign ad, the fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point.”
https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211
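To get a feel for the scale of that estimate, here is a back-of-envelope sketch in Python. Both input numbers are illustrative assumptions chosen for the arithmetic, not figures quoted from the study linked above.

```python
# Back-of-envelope sketch of the fake-news vote-share estimate.
# Both inputs are illustrative assumptions, not figures from the study.

articles_seen_per_voter = 1.0    # assumed avg. fake articles seen and remembered
shift_per_ad_exposure_pp = 0.02  # assumed vote-share shift per TV ad exposure,
                                 # in percentage points

# If one fake article persuades about as well as one TV campaign ad:
total_shift_pp = articles_seen_per_voter * shift_per_ad_exposure_pp
print(f"estimated vote-share shift: {total_shift_pp:.2f} percentage points")
# -> 0.02, i.e. "hundredths of a percentage point"
```

Even if you multiply the assumed exposure several times over, the result stays well below the margins that decide most races.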
Now, all that said, American politics certainly feel and act differently today than in years previous. The key question: “is social media creating new types of people, or simply revealing long-obscured types of people to a segment of the public unaccustomed to seeing them?”
After all, American politics has always had its “paranoid style,” and the American right has always had a sizable tendency towards unhinged conspiratorialism, from the John Birch Society to Goldwater Republicans.
Social media may not be making more of these yahoos, but rather, making them visible to the wider world, and to each other, allowing them to make common cause and mobilize their adherents (say, to carry tiki torches through Charlottesville in Nazi cosplay).
If that’s true, then elite calls to “fight disinformation” are unlikely to do much, except possibly inflame things. If “disinformation” is really people finding each other (not infecting each other), labelling their posts as “disinformation” won’t change their minds.
Worse, plans like the Biden admin’s National Strategy for Countering Domestic Terrorism lump 1/6 insurrectionists in with anti-pipeline activists, racial justice campaigners, and animal rights groups.
Whatever new powers we hand over to fight disinformation will be felt most by people without deep-pocketed backers who’ll foot the bill for crack lawyers.
Here’s the key to Bernstein’s argument: “One reason to grant Silicon Valley’s assumptions about our mechanistic persuadability is that it prevents us from thinking too hard about the role we play in taking up and believing the things we want to believe. It turns a huge question about the nature of democracy in the digital age — what if the people believe crazy things, and now everyone knows it? — into a technocratic negotiation between tech companies, media companies, think tanks, and universities.”
I want to “Yes, and” that.
My 2020 book How To Destroy Surveillance Capitalism doesn’t dismiss the idea that conspiratorialism is on the rise, nor that tech companies are playing a key role in that rise — but without engaging in criti-hype.
https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59
In my book, I propose that conspiratorialism isn’t a crisis of what people believe so much as how they arrive at their beliefs — it’s an “epistemological crisis.”
We live in a complex society plagued by high-stakes questions none of us can answer on our own.
Do vaccines work? Is Oxycontin addictive? Should I wear a mask? Can we fight covid by sanitizing surfaces? Will distance ed make my kid an ignoramus? Should I fly in a 737 Max?
Even if you have the background to answer one of these questions, no one can answer all of them.
Instead, we have a process: neutral expert agencies use truth-seeking procedures to sort out competing claims, showing their work, recusing themselves when they have conflicts, and revising their conclusions in light of new evidence.
It’s pretty clear that this process is breaking down. As companies (led by the tech industry) merge with one another to form monopolies, they hijack their regulators and turn truth-seeking into an auction, where shareholder preferences trump evidence.
This perversion of truth has consequences — take the FDA’s willingness to accept the expensively manufactured evidence of Oxycontin’s safety, a corrupt act that kickstarted the opioid epidemic, which has killed 800,000 Americans to date.
If the best argument for vaccine safety and efficacy is “We used the same process and experts as pronounced judgement on Oxy” then it’s not unreasonable to be skeptical — especially if you’re still coping with the trauma of lost loved ones.
As Anna Merlan writes in her excellent Republic of Lies, conspiratorialism feeds on distrust and trauma, and we’ve got plenty of legitimate reasons to experience both.
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
Tech was an early adopter of monopolistic tactics — the Apple ][+ went on sale the same year Ronald Reagan hit the campaign trail, and the industry’s growth tracked perfectly with the dismantling of antitrust enforcement over the past 40 years.
What’s more, while tech may not persuade people, it is indisputably good at finding them. If you’re an advertiser looking for people who recently looked at fridge reviews, tech finds them for you. If you’re a boomer looking for your old high school chums, it’ll do that too.
Seen in that light, “online radicalization” stops looking like the result of mind control, instead showing itself to be a kind of homecoming — finding the people who share your interests, a common online experience we can all relate to.
I found out about Bernstein’s article from the Techdirt podcast, where he had a fascinating discussion with host Mike Masnick.
https://www.techdirt.com/articles/20210928/12593747652/techdirt-podcast-episode-299-misinformation-about-disinformation.shtml
Towards the end of that discussion, they talked about FB’s Project Amplify, in which the company tweaked its news algorithm to uprank positive stories about Facebook, including stories its own PR department wrote.
https://pluralistic.net/2021/09/22/kropotkin-graeber/#zuckerveganism
Project Amplify is part of a larger, aggressive image-control effort by the company, which has included shuttering internal transparency portals, providing bad data to researchers, and suing independent auditors who tracked its promises.
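For readers wondering what “upranking” means mechanically, here is a minimal, purely hypothetical sketch of a feed scorer that applies a flat boost to favored stories; it illustrates the concept only and reflects nothing about Facebook’s actual ranking code.

```python
# Toy illustration of "upranking" in a feed scorer.
# Entirely hypothetical; not based on any real platform's code.

def score(base_relevance: float, boosted: bool, boost: float = 2.0) -> float:
    """Rank score for a story; boosted stories get a flat multiplier."""
    return base_relevance * boost if boosted else base_relevance

stories = [
    ("critical investigative piece", 0.9, False),
    ("PR-written story praising the platform", 0.5, True),
]

for title, rel, boosted in sorted(stories, key=lambda s: score(s[1], s[2]), reverse=True):
    print(f"{score(rel, boosted):.2f}  {title}")
# The boosted PR story (0.5 * 2.0 = 1.00) now outranks the 0.90 critical piece.
```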
I’d always assumed that this truth-suppression and wanton fraud was about hiding how bad the platform’s disinformation problem was.
But listening to Masnick and Bernstein, I suddenly realized there was another explanation.
Maybe Facebook’s aggressive suppression of accurate assessments of disinformation on its platform is driven by a desire to hide the fact that the expensive (and profitable) political advertising it depends on is pretty useless.
Image: Anthony Quintano (modified) https://commons.wikimedia.org/wiki/File:Mark_Zuckerberg_F8_2018_Keynote_(41793470192).jpg
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY: https://creativecommons.org/licenses/by/3.0/deed.en
61 notes · View notes