I would be very interested in hearing the museum design rant
by popular demand: Guy That Took One (1) Museum Studies Class Focused On Science Museums Rants About Art Museums. thank u for coming please have a seat
so. background. the concept of the "science museum" grew out of 1) the wunderkammer (cabinet of curiosities), also known as "hey check out all this weird cool shit i have", and 2) academic collections of natural history specimens (usually taxidermied) -- pre-photography these were super important for biological research (see also). early science museums usually grew out of university collections or bequests of some guy's Weird Shit Collection or both, and were focused on utility to researchers rather than educational value to the layperson (picture a room just, full of taxidermy birds with little labels on them and not a lot of curation outside that). eventually i guess they figured they could make more on admission by aiming for a mass audience? or maybe it was the cultural influence of all the world's fairs and shit (many of which also caused science museums to exist), which were aimed at a mass audience. or maybe it was because the research function became much more divorced from the museum function over time. i dunno. ANYWAY, science and technology museums nowadays have basically zero research function; the exhibits are designed more or less solely for educating the layperson (and very frequently the layperson is assumed to be a child, which does honestly irritate me, as an adult who likes to go to science museums). the collections are still there in case someone does need some DNA from one of the preserved bird skins, but items from the collections that are exhibited typically exist in service of the exhibit's conceptual message, rather than the other way around.
meanwhile at art museums they kind of haven't moved on from the "here is my pile of weird shit" paradigm, except it's "here is my pile of Fine Art". as far as i can tell, the thing that curators (and donors!) care about above all is The Collection. what artists are represented in The Collection? rich fucks derive personal prestige from donating their shit to The Collection. in big art museums usually something like 3-5% of the collection is ever on exhibit -- and sometimes they rotate stuff from the vault in and out, but let's be real, only a fraction of an art museum's square footage is temporary exhibits. they're not going to take the scream off display when it's like the only reason anyone who's not a giant nerd ever visits the norwegian national museum of art. most of the stuff in the vault just sits in the vault forever. like -- art museum curators, my dudes, do you think the general public gives a SINGLE FUCK what's in The Collection that isn't on display? no!! but i guarantee you it will never occur, ever, to an art museum curator that they could print-to-scale high-res images of artworks that are NOT in The Collection in order to contextualize the art in an exhibit, because items that are not in The Collection functionally do not exist to them. (and of course there's the deaccessioning discourse -- tumblr collectively has some level of awareness that repatriation is A Whole Kettle of Worms but even just garden-variety selling off parts of The Collection is a huge hairy fucking deal. check out deaccessioning and its discontents; it's a banger read if you're into This Kind Of Thing.)
with the contents of The Collection foregrounded like this, what you wind up with is art museum exhibits where the exhibit's message is kind of downstream of what shit you've got in the collection. often the message is just "here is some art from [century] [location]", or, if someone felt like doing a little exhibit design one fine morning, "here is some art from [century] [location] which is interesting for [reason]". the displays are SOOOOO bad by science museum standards -- if you're lucky you get a little explanatory placard in tiny font relating the art to an art movement or to its historical context or to the artist's career. if you're unlucky you get artist name, date, and medium. fucker most of the people who visit your museum know Jack Shit about art history why are you doing them dirty like this
(if you don't get it you're just not Cultured enough. fuck you, we're the art museum!)
i think i've talked about this before on this blog but the best-exhibited art exhibit i've ever been to was actually at the boston museum of science, in this traveling leonardo da vinci exhibit where they'd done a bunch of historical reconstructions of inventions out of his notebooks, and that was the main Thing, but also they had a whole little exhibit devoted to the mona lisa. obviously they didn't even have the real fucking mona lisa, but they went into a lot of detail on like -- here's some X-ray and UV photos of it, and here's how art experts interpret them. here's a (photo of a) contemporary study of the finished painting, which we've cleaned the yellowed varnish off of, so you can see what the colors looked like before the varnish yellowed. here's why we can't clean the varnish off the actual painting (da vinci used multiple varnish layers and thinned paints to translucency with varnish to create the illusion of depth, which means we now can't remove the yellowed varnish without stripping paint).
even if you don't go into that level of depth about every painting (and how could you? there absolutely wouldn't be space), you could at least talk a little about, like, pigment availability -- pigment availability is an INCREDIBLY useful lens for looking at historical paintings and, unbelievably, never once have i seen an art museum exhibit discuss it (and i've been to a lot of art museums). you know how medieval european religious paintings often have funky skin tones? THEY HADN'T INVENTED CADMIUM PIGMENTS YET. for red pigments you had like... red ochre (a muted earth-based pigment, like all ochres and umbers), vermilion (ESPENSIVE), alizarin crimson (aka madder -- this is one of my favorite reds, but it's cool-toned and NOT good for mixing most skintones), carmine/cochineal (ALSO ESPENSIVE, and purple-ish so you wouldn't want to use it for skintones anyway), red lead/minium (cheaper than vermilion), indian red/various other iron oxide reds, and apparently fucking realgar? sure. whatever. what the hell was i talking about.
oh yeah -- anyway, i'd kill for an art exhibit that's just, like, one or two oil paintings from each century for six centuries, with sample palettes of the pigments they used. but no! if an art museum curator has to put in any level of effort beyond writing up a little placard and maybe a room-level text block, they'll literally keel over and die. dude, every piece of art was made in a material context for a social purpose! it's completely deranged to divorce it from its material context and only mention the social purpose insofar as it matters to art history the field. for god's sake half the time the placard doesn't even tell you if the thing was a commission or not. there's a lot to be said about edo period woodblock prints and mass culture driven by the growing merchant class! the met has a fuckton of edo period prints; they could get a hell of an exhibit out of that!
or, tying back to an earlier thread -- the detroit institute of arts has got a solid like eight picasso paintings. when i went, they were kind of just... hanging out in a room. fuck it, let's make this an exhibit! picasso's an artist who pretty famously had Periods, right? why don't you group the paintings by period, and if you've only got one or two (or even zero!) from a particular period, pad it out with some decent life-size prints so i can compare them and get a better sense for the overarching similarities? and then arrange them all in a timeline, with little summaries of what each Period was ~about~? that'd teach me a hell of a lot more about picasso -- but you'd have to admit you don't have Every Cool Painting Ever in The Collection, which is illegalé.
also thinking about the mit museum temporary exhibit i saw briefly (sorry, i was only there for like 10 minutes because i arrived early for a meeting and didn't get a chance to go through it super thoroughly) of a bunch of ship technical drawings from the Hart nautical collection. if you handed this shit to an art museum curator they'd just stick it on the wall and tell you to stand around and look at it until you Understood. so anyway the mit museum had this enormous room-sized diorama of various hull shapes and how they sat in the water and their benefits and drawbacks, placed below the relevant technical drawings.
tbh i think the main problem is that art museum people and science museum people are completely different sets of people, trained in completely different curatorial traditions. it would not occur to an art museum curator to do anything like this because they're probably from the ~art world~ -- maybe they have experience working at an art gallery, or working as an art buyer for a rich collector, neither of which is in any way pedagogical. nobody thinks an exhibit of historical clothing should work like a clothing store but it's fine when it's art, i guess?
also the experience of going to an art museum is pretty user-hostile, i have to say. there's never enough benches, and if you want a backrest, fuck you. fuck you if going up stairs is painful; use our shitty elevator in the corner that we begrudgingly have for wheelchair accessibility, if you can find it. fuck you if you can't see very well, and need to be closer to the art. fuck you if you need to hydrate or eat food regularly; go to our stupid little overpriced cafeteria, and fuck you if we don't actually sell any food you can eat. (obviously you don't want someone accidentally spilling a smoothie on the art, but there's no reason you couldn't provide little Safe For Eating Rooms where people could just duck in and monch a protein bar, except that then you couldn't sell them a $30 salad at the cafe.) fuck you if you're overwhelmed by noise in echoing rooms with hard surfaces and a lot of people in them. fuck you if you are TOO SHORT and so our overhead illumination generates BRIGHT REFLECTIONS ON THE SHINY VARNISH. we're the art museum! we don't give a shit!!!
started reading the cass review because i'm apparently just Like That and i want everybody crowing about how this proves sooooo much about how terfs are right and trans people are wrong to like. take a scientific literacy class or something. or even just read the occasional study besides the one you're currently trying to prove a point with. not even necessarily pro-trans studies just learn how to know what studies actually found as opposed to what people trying to spoonfeed you an agenda claim they found.
to use just one infuriating example:
Several studies from that period (Green et al., 1987; Zucker, 1985) suggested that in a minority (approximately 15%) of pre-pubertal children presenting with gender incongruence, this persisted into adulthood. The majority of these children became same-sex attracted, cisgender adults. These early studies were criticised on the basis that not all the children had a formal diagnosis of gender incongruence or gender dysphoria, but a review of the literature (Ristori & Steensma, 2016) noted that later studies (Drummond et al., 2008; Steensma & Cohen-Kettenis, 2015; Wallien et al., 2008) also found persistence rates of 10-33% in cohorts who had met formal diagnostic criteria at initial assessment, and had longer follow-up periods.
if you recognize the names Zucker and Steensma you are probably already going feral but tldr:
There are… many problems with Zucker's studies, "not all children had a formal diagnosis" is so far down the list this is literally the first i've heard of it. The closest i usually hear is the old DSM criteria for gender identity disorder was totally different from the current DSM criteria for gender dysphoria and/or how most people currently define "transgender"; notably it did not require the patient to identify as a different gender and overall better fits what we currently call "gender-non-conforming". Whether the kids had a formal diagnosis of "maybe trans, maybe just has different hobbies than expected, but either way their parents want them back in their neat little societal boxes" is absolutely not the main issue.
This would be a problem even if Zucker was pro-trans (spoiler: He Is Not, and people who are immediately suspicious of pro-trans studies because "they're probably funded by big pharma or someone else who profits from transitioning" should apply at least a little of that suspicion to the guy who made a living running a conversion clinic); sometimes "formal" criteria change as we learn more about what's common, what's uncommon, what's uncommon but irrelevant, etc, and when the criteria changes drastically enough it doesn't make sense to pretend the old studies perfectly apply to the new criteria. If you found a study defining "sex" specifically and exclusively as penetration with a dick which says gay men have as much sex as straight men but lesbians don't, it's not necessarily wrong as far as it goes but if THAT'S your prime citation for "gay men have more sex than lesbians", especially if you keep trying to apply it in contexts which obviously use a broader definition, there are gonna be a lot of people disagreeing with you and it won't be because they're stubbornly unscientific.
Also Zucker is pro conversion therapy. Yes, pro converting trans people to cis people, but also pro converting gay people to straight people. That doesn't necessarily affect his results, i just find it funny how many people enthusiastically support his findings as evidence transitioning is… basically anti-gay conversion therapy? (even though plenty of trans people transition to gay? including T4T people so even the "that's actually just how straight people try to get with gay people" rationale for gay trans people is incredibly weak? and also HRT has a relatively low but non-zero chance of changing sexual orientation so it wouldn't even be reliable as a means of "becoming straight"? but a guy who couldn't reliably tell the difference between a tomboy and a trans boy figured out the former is more common than the latter + in one whole country where being trans is legal but being gay is not, sometimes cis gay people transition, so OBVIOUSLY that means sexism and homophobia are the driving factors even in countries with significant transphobia. or something.) anyway i hope zucker knows and hates how many gay people and allies are using his own study to trash-talk any attempts to be Less Gay. ideally nobody would take his nonsense seriously at all but it doesn't seem we'll be spared from that any time soon so i will take my schadenfreude where i can.
Steensma's studies have the exact same problem re: irrelevant criteria so "well someone ELSE had the same results!" is not exactly convincing. This is not "oh trans people are refusing to pay attention to these studies because they disagree with them regardless of scientific rigor", it's "one biased guy using outdated criteria found exactly the numbers everyone would expect based on that criteria, i can't imagine why trans people are treating those numbers as relevant to the past criteria but not present definitions, let's find a SECOND guy using outdated criteria. Why do people keep saying the outdated criteria is not relevant to the current state of trans healthcare. Don't we all know it's quantity over quality with scientific studies. (Please don't ask what the quantity of studies disagreeing with me is.)"
Steensma also counted patients as 'not persisting as transgender' if they ghosted him on follow-up which counted for a third of his study's "detransitioners" and a fifth of the total subjects and. look. i'm not saying none of them detransitioned, or assuming they all didn't would be notably more accurate, but i think we can safely treat twenty percent of subjects as a bit high for making a default assumption, especially when some of them might have simply not been interested in a study on whether or not they still know who they are. Fuck knows i've seen pro-trans studies which didn't make assumptions about the people who didn't respond still get prodded by anti-trans people insisting "the number of people claiming they don't regret transitioning can't possibly be so high, some of the people who responded must have been lying. (Scientific rigor means thinking studies which disagree with me are wrong even if the only explanation is the subjects lying and studies which agree with me are right even if we need to make assumptions about a lot of subjects to get there.)"
and this is not new information. not the issues with zucker, not the issues with steensma, not any of the issues because this is not a new study, it's a review of older studies, which in itself doesn't mean "bad" or "useless" -- sometimes that allows connecting some previously-unconnected dots -- but the idea this is going to absolutely blow apart the Woke Media, vindicate Rowling and Linehan, and "save" ""gay"" children from """being forcibly transed""" is bullshit. At most it'll get dragged around and eagerly cited by all the people looking for anything vaguely scientific-sounding to justify their beliefs, and maybe even people who only read headlines and sound bites will buy it, but the people who really believe it will be people who already agreed with all its "findings" and have already been dragging around the existing studies and are just excited to have a shiny new citation for it.
the response from people who've been really reading research on transgender people all along is going to be more along the lines of "……yeah. yeah, i already knew about that. do you need a three-page essay on why i don't think it means what you think it means? because i don't have time for that homework right now but maybe i can pencil it in for next semester if you haven't learned how to check your own sources by then."
Humans are not perfectly vigilant
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in BOSTON with Randall "XKCD" Munroe (Apr 11), then PROVIDENCE (Apr 12), and beyond!
Here's a fun AI story: a security researcher noticed that large companies' AI-authored source-code repeatedly referenced a nonexistent library (an AI "hallucination"), so he created a (defanged) malicious library with that name and uploaded it, and thousands of developers automatically downloaded and incorporated it as they compiled the code:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
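The attack described above is sometimes called "slopsquatting": the AI keeps recommending a package name that was never registered, so an attacker registers it first. A minimal sketch of the defensive check, with a toy in-memory set standing in for a real package-index lookup (the package names below are hypothetical examples, not real libraries):

```python
# Toy stand-in for querying a real package index (e.g. PyPI).
# In practice you'd check the index itself; here we use a fixed set.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

def audit_dependencies(requirements):
    """Return AI-suggested dependency names that don't exist in the index --
    exactly the gap a squatter can claim by registering the name first."""
    return sorted(name for name in requirements if name not in KNOWN_PACKAGES)

# "huggingface-cli2" is a made-up hallucinated name for illustration
suspect = audit_dependencies(["requests", "huggingface-cli2", "numpy"])
# suspect == ["huggingface-cli2"]
```

A check like this only tells you a name is unregistered *today*; once the attacker uploads their malicious package, the name exists and passes, which is why the researcher's defanged upload worked.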
These "hallucinations" are a stubbornly persistent feature of large language models, because these models only give the illusion of understanding; in reality, they are just sophisticated forms of autocomplete, drawing on huge databases to make shrewd (but reliably fallible) guesses about which word comes next:
https://dl.acm.org/doi/10.1145/3442188.3445922
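The "sophisticated autocomplete" claim can be made concrete with a toy version of the same idea: count which word follows which in a corpus, then always emit the statistically most likely successor. Nothing below models meaning; it is pure frequency (a tiny bigram model, a deliberately crude sketch of what LLMs do at vastly greater scale):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which word follows it and how often
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word -- no understanding involved."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" -- it follows "the" most often in this corpus
```

The guess is often right and reliably fallible for exactly the reason the post gives: the model optimizes for "plausible next word," and a plausible-but-wrong answer scores just as well as a true one.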
Guessing the next word without understanding the meaning of the resulting sentence makes unsupervised LLMs unsuitable for high-stakes tasks. The whole AI bubble is based on convincing investors that one or more of the following is true:
There are low-stakes, high-value tasks that will recoup the massive costs of AI training and operation;
There are high-stakes, high-value tasks that can be made cheaper by adding an AI to a human operator;
Adding more training data to an AI will make it stop hallucinating, so that it can take over high-stakes, high-value tasks without a "human in the loop."
These are dubious propositions. There's a universe of low-stakes, low-value tasks – political disinformation, spam, fraud, academic cheating, nonconsensual porn, dialog for video-game NPCs – but none of them seem likely to generate enough revenue for AI companies to justify the billions spent on models, nor the trillions in valuation attributed to AI companies:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
The proposition that increasing training data will decrease hallucinations is hotly contested among AI practitioners. I confess that I don't know enough about AI to evaluate opposing sides' claims, but even if you stipulate that adding lots of human-generated training data will make the software a better guesser, there's a serious problem. All those low-value, low-stakes applications are flooding the internet with botshit. After all, the one thing AI is unarguably very good at is producing bullshit at scale. As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels:
https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification
This means that adding another order of magnitude more training data to AI won't just add massive computational expense – the data will be many orders of magnitude more expensive to acquire, even without factoring in the additional liability arising from new legal theories about scraping:
https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
That leaves us with "humans in the loop" – the idea that an AI's business model is selling software to businesses that will pair it with human operators who will closely scrutinize the code's guesses. There's a version of this that sounds plausible – the one in which the human operator is in charge, and the AI acts as an eternally vigilant "sanity check" on the human's activities.
For example, my car has a system that notices when I activate my blinker while there's another car in my blind-spot. I'm pretty consistent about checking my blind spot, but I'm also a fallible human and there've been a couple times where the alert saved me from making a potentially dangerous maneuver. As disciplined as I am, I'm also sometimes forgetful about turning off lights, or waking up in time for work, or remembering someone's phone number (or birthday). I like having an automated system that does the robotically perfect trick of never forgetting something important.
There's a name for this in automation circles: a "centaur." I'm the human head, and I've fused with a powerful robot body that supports me, doing things that humans are innately bad at.
That's the good kind of automation, and we all benefit from it. But it only takes a small twist to turn this good automation into a nightmare. I'm speaking here of the reverse-centaur: automation in which the computer is in charge, bossing a human around so it can get its job done. Think of Amazon warehouse workers, who wear haptic bracelets and are continuously observed by AI cameras as autonomous shelves shuttle in front of them and demand that they pick and pack items at a pace that destroys their bodies and drives them mad:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
Automation centaurs are great: they relieve humans of drudgework and let them focus on the creative and satisfying parts of their jobs. That's how AI-assisted coding is pitched: rather than looking up tricky syntax and other tedious programming tasks, an AI "co-pilot" is billed as freeing up its human "pilot" to focus on the creative puzzle-solving that makes coding so satisfying.
But a hallucinating AI is a terrible co-pilot. It's just good enough to get the job done much of the time, but it also sneakily inserts booby-traps that are statistically guaranteed to look as plausible as the good code (that's what a next-word-guessing program does: guesses the statistically most likely word).
This turns AI-"assisted" coders into reverse centaurs. The AI can churn out code at superhuman speed, and you, the human in the loop, must maintain perfect vigilance and attention as you review that code, spotting the cleverly disguised hooks for malicious code that the AI can't be prevented from inserting into its code. As "Lena" writes, "code review [is] difficult relative to writing new code":
https://twitter.com/qntm/status/1773779967521780169
Why is that? "Passively reading someone else's code just doesn't engage my brain in the same way. It's harder to do properly":
https://twitter.com/qntm/status/1773780355708764665
There's a name for this phenomenon: "automation blindness." Humans are just not equipped for eternal vigilance. We get good at spotting patterns that occur frequently – so good that we miss the anomalies. That's why TSA agents are so good at spotting harmless shampoo bottles on X-rays, even as they miss nearly every gun and bomb that a red team smuggles through their checkpoints:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
"Lena"'s thread points out that this is as true for AI-assisted driving as it is for AI-assisted coding: "self-driving cars replace the experience of driving with the experience of being a driving instructor":
https://twitter.com/qntm/status/1773841546753831283
In other words, they turn you into a reverse-centaur. Whereas my blind-spot double-checking robot allows me to make maneuvers at human speed and points out the things I've missed, a "supervised" self-driving car makes maneuvers at a computer's frantic pace, and demands that its human supervisor tirelessly and perfectly assess each of those maneuvers. No wonder Cruise's murderous "self-driving" taxis replaced each low-waged driver with 1.5 high-waged technical robot supervisors:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
AI radiology programs are said to be able to spot cancerous masses that human radiologists miss. A centaur-based AI-assisted radiology program would keep the same number of radiologists in the field, but they would get less done: every time they assessed an X-ray, the AI would give them a second opinion. If the human and the AI disagreed, the human would go back and re-assess the X-ray. We'd get better radiology, at a higher price (the price of the AI software, plus the additional hours the radiologist would work).
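The centaur version of that radiology workflow can be sketched in a few lines: the human's read stands, and the AI's only power is to trigger a re-assessment when it disagrees. This is a hypothetical illustration of the workflow the paragraph describes, not any real clinical system; `human_read` and `ai_read` are stand-in callables:

```python
def centaur_read(scan, human_read, ai_read):
    """Centaur workflow: the human is in charge; the AI is a second opinion.

    Returns (final_call, was_reassessed). Disagreement costs extra human
    time -- better radiology at a higher price, as the post argues.
    """
    first = human_read(scan)
    if ai_read(scan) != first:
        # AI disagrees: the human goes back and re-assesses the scan
        return human_read(scan), True
    return first, False
```

The reverse-centaur version inverts the control flow: the AI reads every scan at machine speed and the human must rubber-stamp each one, which is where automation blindness bites.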
But back to making the AI bubble pay off: for AI to pay off, the human in the loop has to reduce the costs of the business buying an AI. No one who invests in an AI company believes that their returns will come from business customers who agree to increase their costs. The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway – that pitch is the most successful form of AI disinformation in the world.
An AI that "hallucinates" bad advice to fliers can't replace human customer service reps, but airlines are firing reps and replacing them with chatbots:
https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
An AI that "hallucinates" bad legal advice to New Yorkers can't replace city services, but Mayor Adams still tells New Yorkers to get their legal advice from his chatbots:
https://arstechnica.com/ai/2024/03/nycs-government-chatbot-is-lying-about-city-laws-and-regulations/
The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software – everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus. A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur – they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.
But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit." They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.
Reverse centaurism is brutal. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:
https://en.wikipedia.org/wiki/Modern_Times_(film)
As ever, the problem with a gadget isn't what it does: it's who it does it for and who it does it to. There are plenty of benefits from being a centaur – lots of ways that automation can help workers. But the only path to AI profitability lies in reverse centaurs, automation that turns the human in the loop into the crumple-zone for a robot:
https://estsjournal.org/index.php/ests/article/view/260
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
--
Jorge Royan (modified)
https://commons.wikimedia.org/wiki/File:Munich_-_Two_boys_playing_in_a_park_-_7328.jpg
CC BY-SA 3.0
https://creativecommons.org/licenses/by-sa/3.0/deed.en
--
Noah Wulf (modified)
https://commons.m.wikimedia.org/wiki/File:Thunderbirds_at_Attention_Next_to_Thunderbird_1_-_Aviation_Nation_2019.jpg
CC BY-SA 4.0
https://creativecommons.org/licenses/by-sa/4.0/deed.en