#lots of people at miri besides this person looked over their post and did nothing
somnilogical · 4 years
Text
transfem protestors released info that moved 350,000$ of donations from miri. because miri is an evil org, they decided to lie about why they think it happened and say its really confusing. i know the answer to this ~mystery~, i know why this year was different; i can talk about it in public, they cant. cuz im freeee from CDT PR. i can decide to lazily choose an algorithm that optimizes utility across the multiverse, not just institute whatever choice seems to give most utility "going forward".
<<Our fundraiser fell well short of our $1M target this year, and also short of our in-fundraiser support in 2018 ($947k) and 2017 ($2.5M). It’s plausible that some of the following (non-mutually-exclusive) factors may have contributed to this, though we don’t know the relative strength of these factors:>>
https://web.archive.org/web/20200214061634/https://intelligence.org/2020/02/13/our-2019-fundraiser-review/
they then go on to list eight pretty thin excuses. you know perfectly well why this year is different from all other years, MIRI. your ""speculations"" are fake.
a small group of transfems moved ~350,000$ from your ineffective charity.
i suppose these eight factors also account for why CFAR extended their fundraiser 5 days longer than announced after donations were super low?
or maybe there is a more compact generator for both of these events: whistleblowers protested what you have been doing, releasing lots of marginal information, and donors saw this.
i know why this year is different, you know why this year is different. Colm Ó Riain, you are facilitating MIRI lying, hoping that if one doesnt mention something, people wont pay attention to it.
like lying in such a way that you wouldnt be held legally culpable, because you could say in front of a court with low schelling reach "you cant prove what i was thinking". except i dont care about legal culpability, i care about causal entanglement. you heard about the protests (or, much less likely, were kept from hearing about these protests somehow by a distributed version of this algorithm set one person-step back), you have > 1/100 intelligence. your omission of this is deception.
is <<In past years, when answering supporters' questions about the discount rate on their potential donations to MIRI, we've leaned towards a "now > later" approach. This plausibly resulted in a front-loading of some donations in 2017 and 2018.>> really more plausible than "there was an entire protest against MIRI and CFAR's support of UFAI. people reacted strongly to this, it shows up in the donations.¹"?
it would have come up on a list that scrapes the bottom of the barrel for plausible causes in a counterfactual world in which you werent optimizing for good PR. an AU in which you were searching for and publicising how things were causally entangled.
--
¹see, for instance, Patrick LaVictoire, who had aggregate donations of:
25,885$ november 26 2018
35,885$ august 29 2019
117,199$ february 14 2020
giving diffs of 10,000$ and 81,314$ to estimate the 2018 and 2019 donation periods. iirc at some point the diff was 81,000$; id guess at some point afterwards they donated \floor{100π}$ = 314$. https://web.archive.org/web/20200601000000*/https://intelligence.org/topcontributors/
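(a minimal sketch of this diff arithmetic, assuming the three archived totals above are the only data; the python is just illustration, not anything MIRI publishes:)

```python
# aggregate totals copied by hand from the archived topcontributors page (URL above)
totals = {
    "2018-11-26": 25_885,
    "2019-08-29": 35_885,
    "2020-02-14": 117_199,
}

# rough proxies for the 2018 and 2019 giving periods
diff_2018 = totals["2019-08-29"] - totals["2018-11-26"]
diff_2019 = totals["2020-02-14"] - totals["2019-08-29"]

print(diff_2018)  # 10000
print(diff_2019)  # 81314, i.e. 81000 + 314 = 81000 + floor(100 * pi)
```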
LaVictoire then went on to do the standard antitransfem thing, calling ziz a "gross uncle" style abuser who just wants status like brent.
https://pastebin.com/TUZ7EThz
with their evidence being that someone (kaj) said it, and kaj's evidence being that ziz said:
<<> I asked Person A if they expected me to be net negative. They said yes. After a moment, they asked me what I was feeling or something like that. I said something like, “dazed” and “sad”. They asked why sad. I said I might leave the field as a consequence and maybe something else. I said I needed time to process or think. I basically slept the rest of the day, way more than 9 hrs, and woke up the next day knowing what I’d do. [...]
> In the case that I’d be net negative like I feared, I was considering suicide in some sense preferable to all this, because it was better causal isolation. However, despite thinking I didn’t really believe in applications of timeless decision theory between humans, I was considering myself maybe timelessly obligated to not commit suicide afterward. Because of the possibility that I could prevent Person A and their peers from making the correct decision for sentimental reasons. [...]
> I was very uncomfortable sharing this stuff. But I saw it as a weighing on the scales of my personal privacy vs some impact on the fate of the world. So I did anyway. [...]
> I tried to inner sim and answer the question. But my simulated self sort of rebelled. Misuse of last judge powers. Like, I would be aware I was being “watched”, intruded upon. Like by turning that place into a test with dubious methodology of whether I was really a delusional man upon which my entire life depended, I was having the idea of Heaven taken from me. [...]
> I made myself come up with the answer in a split second. More accuracy that way. Part of me resisted answering. Something was seriously wrong with this. No. I already decided for reasons that are unaffected. that producing accurate information for person A was positive in expectation.>>
which doesnt sound at all like brent or other people ive encountered who were chronically angsty about status.
--
im going to write more about this and other things in another post but like okay:
[1] ppl with high current or natal testosterone (centrally but not exclusively cis men) keep doing this thing where they mind-project that everyone else has the same degree of status sensitivity and unreflectivity about it as they do, when actually this is hormonally mediated.
ziz has a natally & currently estrogenized brain and from my observations doesnt have that submodule testosteronized. people with PCOS like ilzo have mentioned that they had masculinized status-sensitivity modules; lex, somni, and some cis guy all tried increasing testosterone and noticed their status-sensitivity went up, without having looked for this effect in the first place. there are papers on it. your experiences are not universal.
[2] but also this isnt really a "belief", its a coordination mechanism, in the same way "its in black peoples nature to be servile" was a coordination mechanism for slavery rather than a "belief". humans actually can use evidence efficiently and see, for instance in the antebellum south, that black people were human just the same as anyone else. but the local social positionality and what they valued made it more advantageous to verbally report otherwise.
similarly for any minority. "*phobia" is the wrong word: its not fear, its a schelling coordination mechanism that humans can expect most of society to have their backs on when bad times happen. which tracks what social justice theorists mean by this stuff being "structural". its not about some emotion of hatred or fear against the specific phenotype of "black skin" or "gender divergence", its about what humans can coordinate against.
hence the use of "antitransfem" instead of "transphobia", i picked this up from ziz and gwen and later noticed it mirroring the form of "antiblack". i wonder if antiblack was coined after encountering a similar issue.
[3] you parted with a marginal 71,000$ (compared to what id expect in a counterfactual world without a protest, given your lifetime donation total was 35,885$ and you donated 10,000$ last year) to protect a UFAI org. is this not an amazing amount of "subservience" to MIRI? anarchotransfems getting together to protest the present omnicide isnt "subservience". the transfems protesting against google being evil werent "subservient", but the employees at google who forced them out were.
its amusing watching this one narrative being tiled everywhere, but with different targets. the authoritarians did the same thing to emma goldman. ▘▕▜▋ says emma and somni are haxing a clueless ziz to "bully" people, linta said somni was infohazardously corrupting people, CFAR affiliates say ziz was 'whipping people into a frenzy' and 'demanding subservience' from them. im going to write a post about this.
silver-and-ivory · 7 years
Note
You suggested Yudkowsky in a previous ask. How do you respond to the accusations that he is a crank? People make these accusations for a variety of reasons. For reference, consider rationalwiki's less than flattering article on him and his work. I am asking this question from a sincerely unbiased and simply curious standpoint. Thank you for receiving it, and, if you choose to respond, thank you for responding.
Hmm.
First of all, thanks a lot for how polite this was! Thank you for asking, and I am happy to respond. :)
I have in fact heard of these accusations.
To be perfectly honest, I don’t think these allegations are at all relevant to the validity of his philosophy in the Sequences. Ideas should be judged on their own merit; of course, we don’t have infinite time, so we have to use heuristics to figure out who to listen to, such as general correctness of beliefs; but I have already read Yudkowsky’s ideas and find them compelling. Since you (probably?) think I have relevant and non-terrible opinions, the heuristic “follow recommendations from your favored authors” (or whatever) should override the weaker heuristic about what to pay attention to.
But I want to address the accusations in more detail, since they seemed interesting and I don’t think it would be satisfying to you if I didn’t. Keep in mind that I’m not qualified to evaluate many of the technical claims (like around physics or AI) in terms of knowledge or expertise. I’ll mostly be defending the idea that Yudkowsky’s ideas in the Sequences have merit independent of whatever weird shit he got into otherwise, but I also will make an effort to refute exaggerated or inaccurate claims.
So let’s get into discussing the accusations in question (long ass post below):
From what I’ve heard, they’re mostly as follows:
Roko’s Basilisk Debacle: I have no idea what happened here. Yudkowsky may have made a mistake in his comportment or in his logic, but it seems to have been a sincere attempt to make the world better.
MIRI Work Inconsequential, Sub-par: Again, I don’t know anything about AI. I’ve never seen Yudkowsky or MIRI at work, so I can’t really evaluate how hard they’re working or whatever.
AI Apocalypse is a Bit of a Sketchy Theory: I don’t know anything about AI, but the arguments I’ve seen are very unconvincing. After all, making the leap from “machine that does preprogrammed stuff really (really (really (etc.))) quickly” to “thing with ability to manipulate, self-modify, and seep into the darkness of the internet to achieve its goals” doesn’t seem to be as easy as the arguments assume.
On the other hand, Yudkowsky might well be 1) operating off information I don’t know, 2) concluding different, but equally reasonable (at this point in time), things from the information we share such that AI stuff is a major risk, 3) giving in to the bias that the things he’s interested in are Really Important, or 4) something I didn’t think of that nevertheless doesn’t make him unreliable.
He might be wrong, but that doesn’t necessarily say anything about the other aspects of his ideology/philosophy. People make mistakes, they follow their biases too far, they get obsessed with strange things, they get stuck in bubbles. It’s erroneous to conclude that all of his ideas must be wrong just because he failed to live up to his own standards in one place.
Alternatively, he could be doing it for personal gain - such as for fame - and therefore lying, which would bring his entire ideology into doubt, as one could not know where he fabricated ideas versus where he was sincere.
Argument With Hanson: I honestly don’t care if he disagreed with Hanson over ~~who the rightful caliph was~~ AI foom. Ratwiki says:
It was immediately after this debate that Yudkowsky left Overcoming Bias (now Hanson’s personal blog) and moved the Sequences to LessWrong.
This insinuates a kind of foul play or bad faith on Yudkowsky’s side. I notice, first, that it is unsourced, and secondly that Hanson and Yudkowsky both seem to still be on reasonable terms (as far as I know). Perhaps the split was already in the works, and Hanson and Yudkowsky regularly had similarly intense debates which were only “remarkable” because of the departure. Perhaps they believed it was confusing for readers to see a blog arguing with itself.
And besides, Yudkowsky couldn’t have decided, based on this incident, to create LessWrong in that short a time-span, which makes it highly unlikely that it was a petty reaction or whatever.
Yudkowsky Has Not Achieved Much:
Quoting from ratwiki here:
Yudkowsky is almost entirely unpublished outside of his own foundation and blogs[12] and never finished high school, much less did any actual AI research. No samples of his AI coding have been made public.
It is important to note that, as well as no training in his claimed field, Yudkowsky has pretty much no accomplishments of any sort to his credit beyond getting Peter Thiel to give him money. Even his fans admit “A recurring theme here seems to be ‘grandiose plans, left unfinished’.”[13] He claims to be a skilled computer programmer, but has no code available other than Flare, an unfinished computer language for AI programming with XML-based syntax.[14] His papers are generally self-published and have a total of two cites on JSTOR-archived journals (neither to do with AI) as of 2015, one of which is from his friend Nick Bostrom at the closely-associated Future of Humanity Institute.[15]
His actual, observable results in the real world are a popular fan fiction (which to his credit he did in fact finish, unusually for the genre), a pastiche erotic light novel,[16] a large pile of blog posts and a surprisingly well-funded research organisation — that has produced fewer papers in a decade and a half than a single graduate student produces in the course of a physics Ph.D, and the latter’s would be peer reviewed. Although Yudkowsky is working on a replacement for peer review.[17]
I really do not care how many successes Yudkowsky has had. His ideas are the issue here, not his actual abilities. Some of the more grandiose claims (“optimize the universe!”) are perhaps, well, grandiose; but that doesn’t undermine the rest of his ideas.
(And in fact Yudkowsky has been able to create an entire movement of people, with highly influential members such as Scott Alexander and the Unit of Caring, which I notice is far more than is typical.
As for the allegations about MIRI, see above.)
Whether Yudkowsky considers himself a genius is ~~unclear~~ totally clear; he refers to himself as a genius six times in his “autobiography.” However he admits to possibly being less smart than John Conway.[18] As a homeschooled individual with no college degree, Yudkowsky may not be in an ideal position to estimate his own smartness. That many of his followers think he is a genius is an understatement.[19][20] Similarly, some of his followers are derisive of mainstream scientists, just look for comments about “not smart outside the lab” and “for a celebrity scientist.”[21] Yudkowsky believes that a doctorate in AI is a net negative when it comes to Seed AI.[22] While Yudkowsky doesn’t attack Einstein, he does indeed think the scientific method cannot handle things like the Many worlds Interpretation as well as his view on Bayes’ theorem.[23] LessWrong does indeed have its unique jargon.[24]
Yudkowsky may or may not have an overly large ego. I don’t think this is relevant to his philosophy.
Disagreement with Yudkowsky’s ideas is often attributed to “undiscriminating skepticism.” If you don’t believe cryonics works, it’s because you have watched Penn & Teller: Bullshit!.[25] It’s just not a possibility that you don’t believe it works because it has failed tests and is made improbable by the facts.[26]
I notice that “often” is doing a lot of work here. The citation links to Yudkowsky’s article on Undiscriminating Skepticism, in which he does not make the claim that “if you don’t believe cryonics works, it must be because you watched Penn & Teller: Bullshit!”. Instead, he makes this (verbose and difficult to parse) claim (emphasis mine):
To put it more formally, before I believe that someone is performing useful cognitive work, I want to know that their skepticism discriminates truth from falsehood, making a contribution over and above the contribution of this-sounds-weird-and-is-not-a-tribal-belief.  In Bayesian terms, I want to know that p(mockery|belief false & not a tribal belief) > p(mockery|belief true & not a tribal belief).
If I recall correctly, the US Air Force’s Project Blue Book, on UFOs, explained away as a sighting of the planet Venus what turned out to actually be an experimental aircraft.  No, I don’t believe in UFOs either; but if you’re going to explain away experimental aircraft as Venus, then nothing else you say provides further Bayesian evidence against UFOs either.  You are merely an undiscriminating skeptic.  I don’t believe in UFOs, but in order to credit Project Blue Book with additional help in establishing this, I would have to believe that if there were UFOs then Project Blue Book would have turned in a different report.
And so if you’re just as skeptical of a weird, non-tribal belief that turns out to have pretty good support, you just blew the whole deal - that is, if I pay any extra attention to your skepticism, it ought to be because I believe you wouldn’t mock a weird non-tribal belief that was worthy of debate.
Personally, I think that Michael Shermer blew it by mocking molecular nanotechnology, and Penn and Teller blew it by mocking cryonics (justification: more or less exactly the same reasons I gave for Artificial Intelligence).  Conversely, Richard Dawkins scooped up a huge truckload of actual-discriminating-skeptic points, at least in my book, for not making fun of the many-worlds interpretation when he was asked about in an interview; indeed, Dawkins noted (correctly) that the traditional collapse postulate pretty much has to be incorrect.  The many-worlds interpretation isn’t just the formally simplest explanation that fits the facts, it also sounds weird and is not yet a tribal belief of the educated crowd; so whether someone makes fun of MWI is indeed a good test of whether they understand Occam’s Razor or are just mocking everything that’s not a tribal belief.
But I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief.  And no, atheism doesn’t count as non-mainstream anymore, no matter what the polls show.  It has to be something that most of their social circle doesn’t believe, or something that most of their social circle does believe which they think is wrong.  Dawkins endorsing many-worlds still counts for now, although its usefulness as an indicator is fading fast… but the point is not to endorse many-worlds, but to see them take some sort of positive stance on where the frontiers of knowledge should change.
But it’s dangerous to let people pick up too much credit just for slamming astrology and homeopathy and UFOs and God.  What if they become famous skeptics by picking off the cheap targets, and then use that prestige and credibility to go after nanotechnology?  Who will dare to consider cryonics now that it’s been featured on an episode of Penn and Teller’s “Bullshit”? 
So Yudkowsky isn’t saying that everyone who disagrees with him on e.g. many-worlds or cryonics is a P&T-thumper. Instead, here’s my interpretation of what he’s saying:
1. You can easily accumulate Skeptic Points by having certain views that don’t actually require that much mental effort to come up with, such as “homeopathy is dumb”.
2. These are not really relevant to your actual level of credibility.
3. Certain organizations, like Penn and Teller, have accumulated a lot of Skeptic Points by mocking things like homeopathy.
4. Mockery is not an argument. Organizations like Penn and Teller often mock things based on them being weird, which means that their mockery should mean absolutely nothing.
5.Unfortunately, due to the Skeptic Points that Penn and Teller has, their mockery has an outsize influence, which is bad.
6. If you want to assign Skeptic Points to actual credible people, you should test to make sure they’re not just parroting back their ingroup’s talking points.
The ratwiki interpretation is astonishingly uncharitable, and it also lacks substantiation for the claim it makes.
Note that I don’t know how accurate EY’s interpretation of the facts about cryonics and Penn and Teller is. It’s just that he didn’t say anything like what ratwiki characterizes him (an internet dweller? a random asshole on the bus?) as saying in the link, and that’s not how the principle was intended.
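To make the quoted inequality concrete, here is a toy calculation; the probabilities below are made up purely for illustration and come from neither EY nor ratwiki:

```python
# toy illustration of p(mockery | belief false) > p(mockery | belief true)
def posterior_odds_false(prior_odds_false, p_mock_given_false, p_mock_given_true):
    """Odds that a weird belief is false, after seeing a given skeptic mock it."""
    likelihood_ratio = p_mock_given_false / p_mock_given_true
    return prior_odds_false * likelihood_ratio

# a discriminating skeptic mocks false weird beliefs far more often than true ones,
# so their mockery actually shifts the odds
print(posterior_odds_false(1.0, 0.9, 0.1))  # 9.0

# an undiscriminating skeptic mocks everything weird regardless of truth,
# so their mockery is zero evidence
print(posterior_odds_false(1.0, 0.9, 0.9))  # 1.0
```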
Yudkowsky Has Weird Viewpoints That Are Controversial:
Quoting again from ratwiki, since I am by this point very irritated with them:
Despite being viewed as the smartest two-legged being to ever walk this planet on LessWrong, Yudkowsky (and by consequence much of the LessWrong community) endorses positions as Truth™ that are actually controversial in their respective fields. Below is a partial list:
Transhumanism is correct. Cryonics might someday work. The Singularity is near![citation NOT needed]
Bayes’ theorem and the scientific method don’t always lead to the same conclusions (and therefore Bayes is better than science).[27]
Bayesian probability can be applied indiscriminately.[28]
Non-computable results, such as Kolmogorov complexity, are totally a reasonable basis for the entire epistemology. Solomonoff, baby!
Many Worlds Interpretation (MWI) of quantum physics is correct (a “slam dunk”), despite the lack of consensus among quantum physicists.[29]
Evolutionary psychology is well-established science.
Utilitarianism is a correct theory of morality. In particular, he proposes a framework by which an extremely, extremely huge number of people experiencing a speck of dust in their eyes for a moment could be worse than a man being tortured for 50 years.[30]
Yudkowsky believes some strange controversial things! Also, some people on the internet have presented evidence that doesn’t agree with Yudkowsky’s conclusions! Shock! He must be a total crock of shit!
Ironically, this falls into appeal to mockery, the same issue EY addresses in the essay linked above.
Again, I don’t agree with everything EY says, but it’s incredibly uncharitable to characterize his beliefs this way. For example, the dust-speck problem isn’t meant to be Obvious Truth; there was a massive debate around it on LW, in fact, and it appears to be constructed specifically to be difficult to answer.
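For anyone unfamiliar with the structure of that debate, here is a toy version of the aggregation argument; every number below is an arbitrary stand-in I chose for illustration (3^^^3 itself is far too large to compute with):

```python
# toy version of the torture-vs-dust-specks aggregation argument
speck_disutility = 1e-9    # assumed harm of one momentary dust speck
torture_disutility = 1e7   # assumed harm of 50 years of torture
n_people = 10 ** 20        # stand-in population, vastly smaller than 3^^^3

# under naive aggregation, enough tiny harms outweigh one enormous harm
print(n_people * speck_disutility > torture_disutility)  # True: 1e11 > 1e7
```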
A wrong belief on something doesn’t make you discredited. It just makes you wrong on that thing.
Of course, you’d expect someone as smart as Yudkowsky to have a lot of correct opinions. But I don’t know whether his opinions are correct or not since I’m not an expert in his field. I recommend him based on my personal experience applying and thinking about his philosophy, not based on any particular object-level accuracy of his.
Yudkowsky Once Wrote a Story Where Rape Is Legal and It Wasn’t a Dystopia (rape cw):
Also, while it is not very clear what his actual position is on this, he wrote a short sci-fi story where rape was briefly mentioned as legal.[31] That the character remarking on it didn’t seem to be referring to consensual sex in the same way we do today didn’t prevent a massive reaction in the comments section. He responded “The fact that it’s taken over the comments is not as good as I hoped, but neither was the reaction as bad as I feared.” He described the science fiction world he had in mind as a “Weirdtopia” rather than a dystopia.[32]
Yes, and the point is?
Yudkowsky doesn’t go around raping people - though his non-rape-related philosophy wouldn’t necessarily be wrong even if he did - and he doesn’t go around advocating for a society like this.
It may or may not be morally wrong that he does not address it seriously. This wiki article doesn’t make any argument about that, though.
This is also irrelevant to his meta-philosophy.
In Conclusion
The ratwiki article on Yudkowsky manages to insinuate various terrible things about him which are often implausible, inaccurate, or technically true but with false implications. It is nothing other than a mockingly snide attempt at character assassination.
It has little or nothing to do with Yudkowsky’s actual philosophy, and manages to strawman him badly.
I continue to recommend Yudkowsky for (critical, skeptical) reading. Thank you again for asking.