master-gatherer · 10 months
Text
It's fascinating how in reaction to the crab day idea half my dashboard is like "Tumblr is awful do not give them one red cent let the motherfucker burn" and the other half is like "let's save the rec center 🙂"
20 notes · View notes
somnilogical · 4 years
Text
modular "ethics":
a wrong and two rights make a right
<<I've been known to cause outrage by suggesting that people who really care about something shouldn't have romantic relationships. Think what would happen if I dared to suggest that those people should also seriously consider getting castrated. That would be crazy! And who am I to suggest that basically everyone claiming to be doing good is faking it? Then people would feel bad about themselves. We can't have that!>>
https://squirrelinhell.blogspot.com/2018/02/men-have-women-are.html
previously i talked about an infohazard about altruism that seemed to fuck with grognor. it feels useful to pass by the dead and look at their lives and choices.
i dont think that castrating yourself is a good intervention for doing stuff you care about; its patchwork constraints on an unaligned optimizer. if you arent altruistically aligned from core values, castrating yourself wont make you more aligned.
the "altruists" having babies thing is actual insane and pasek is right about that. pretty much all of society will try and gaslight you about this the way sometimes people are gaslit about "i need to have sex with lots of attractive fems to keep up my moral so i can do super good stuff afterwards.". like if people want to do good for the world it will flow out as a continuous expression of value not some brent dill kind of deal that institutions like CFAR accepted until there was too much social pressure for them to maintain this facade.
the entire premise that morality is this modular thing, that you can help set the utility function of an FAI while being a terrible person, is wrong. yet organizations like CFAR keep thinking it will work out for them:
<<We believe that Brent is fundamentally oriented towards helping people grow to be the best versions of themselves. In this way he is aligned with CFAR’s goals and strategy and should be seen as an ally.
  In particular, Brent is quite good at breaking out of standard social frames and making use of unconventional techniques and strategies. This includes things that have Chesterton’s fences attached, such as drug use, weird storytelling, etc. A lot of his aesthetic is dark, and this sometimes makes him come across as evil or machiavellian.
  Brent also embodies a rare kind of agency and sense of heroic responsibility. This has caused him to take the lead in certain events and be an important community hub and driver. The flip side of this is that because Brent is deeply insecure, he has to constantly fight urges to seize power and protect himself. It often takes costly signalling for him to trust that someone is an ally, and even then it’s shaky.
  Brent is a controversial figure, and disliked by many. This has led to him being attacked by many and held to a higher standard than most. In these ways his feelings of insecurity are justified. He also has had a hard life, including a traumatic childhood. Much of the reason people don’t like him comes from a kind of intuition or aesthetic feeling, rather than his actions per se.
  Brent’s attraction to women (in the opinion of the council) sometimes interferes with his good judgement. Brent knows that his judgement is sometimes flawed, and has often sought the help of others to check his actions. Whether or not this kind of social binding is successful is not obvious.>>
https://pastebin.com/fzwYfDNq
<<AnnaSalamon 2/6/09, 5:54 AM
Aleksei, I don’t know what you think about the current existential risks situation, but that situation changed me in the direction of your comment. I used to think that to have a good impact on the world, you had to be an intrinsically good person. I used to think that the day to day manner in which I treated the people around me, the details of my motives and self-knowledge, etc. just naturally served as an indicator for the positive impact I did or didn’t have on global goodness.
(It was a dumb thing to think, maintained by an elaborate network of rationalizations that I thought of as virtuous, much the way many people think of their political “beliefs”/clothes as virtuous. My beliefs were also maintained by not bothering to take an actually careful look either at global catastrophic risks or even at the details of e.g. global poverty. But my impression is that it’s fairly common to just suppose that our intuitive moral self-evaluations (or others’ evaluations of how good of people we are) map tolerably well onto actual good consequences.)
Anyhow: now, it looks to me as though most of those “good people”, living intrinsically worthwhile lives, aren’t contributing squat to global goodness compared to what they could contribute if they spent even a small fraction of their time/money on a serious attempt to shut up and multiply. The network of moral intuitions I grew up in is… not exactly worthless; it does help with intrinsically worthwhile lives, and, more to the point, with the details of how to actually build the kinds of reasonable human relationships that you need for parts of the “shut up and multiply”-motivated efforts to work… but, for most people, it’s basically not very connected to how much good they do or don’t do in the world. If you like, this is good news: for a ridiculously small sum of effort (e.g., a $500 donation to SIAI; the earning power of seven ten-thousandths of your life if you earn the US minimum wage), you can do more expected-good than perhaps 99.9% of Earth’s population. (You may be able to do still more expected-good by taking that time and thinking carefully about what most impacts global goodness and whether anyone’s doing it.)>>
https://www.greaterwrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8
like opposing this isnt self-denying moral asceticism or a signalling game of how good you can look (credibly signalling virtue is actually a good thing; i wish more people did it, for instance by demonstrating that they win in a way that wouldnt work if they werent aligned, their power seeded from their alignment). its like... the alternative, where people do things that make no sense for an altruist to do and then say that when they go to their day jobs they are super duper altruistic, they swear; compartmentalizing in this way doesnt actually work.
people who want to obscure what altruism looks like will claim that this is moving around a social schelling point for who is to be ostracized, and that altruism as a characteristic of a brain isnt a cluster-in-reality you can talk about, because any such talk will be coopted by malicious actors as a laser to unjustly zap people with. these people are wrong.
both EA and CFAR are premised on some sort of CDT modular morality working. really they are only pretending to do CDT optimization, because like with brent, at each timestep they pretend to think "how can we optimize utility moving forward?" (i suspect they were just straight up mindcontrolled by brent, finding ways to serve their master because he used force and the people at CFAR were bad at decision theory) instead of seeking to be agents such that when brents plans to predate on people ran through his model of them, he would see it as more trouble than it was worth and wouldnt do it in the first place.
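to make the decision theory concrete, here is a minimal toy model (all payoffs invented for illustration; none of this is from CFAR's actual deliberations) of the difference between re-deciding "what maximizes utility moving forward?" at every timestep and having a policy fixed in advance that a predator can read out of you before he acts:

```python
# toy model: forward-looking re-optimization vs. a fixed policy.
# all numbers are made up for illustration.

GAIN = 5          # what the predator gains from each act of predation
HARM = 8          # harm to the community per act
EVICT_DRAMA = 20  # community's one-time cost of expelling an "insightful" member
EVICT_LOSS = 20   # predator's one-time cost of being expelled

def cdt_community(steps):
    """at each timestep, ask only 'what maximizes utility from here on?'.
    past harm is sunk, and evicting costs more than tolerating one more
    step, so the forward-looking answer is 'tolerate' every single time."""
    tolerated = 0
    for _ in range(steps):
        if EVICT_DRAMA > HARM:  # marginal comparison; the running total is forgotten
            tolerated += 1      # he is "an ally", keep him
        else:
            break
    return tolerated * GAIN     # predator's total winnings

def committed_community():
    """policy fixed before the game: the first act of predation is punished,
    whatever it costs in the moment. the predator runs this policy inside
    his own plans, computes a negative payoff, and never makes a first move."""
    payoff_if_predate = GAIN - EVICT_LOSS
    return GAIN if payoff_if_predate > 0 else 0

print(cdt_community(10))      # 50: ten rounds of predation, each "rational" going forward
print(committed_community())  # 0: deterred before round one
```

the committed community never actually pays the eviction cost; the whole point is that the predator simulates the policy and the predation never starts.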
CFAR and EA will do things like allowing someone to predate on women because he is "insightful", or creating a social reality where people with genetic biases personally devote massive amounts of time and money to babies who happen to be genetically related to them and then in their day jobs act "altruistically". as long as it all adds up to net positive, its okay right?
but thats not how it works, and structures built off of this are utterly insufficient to bring eutopia to sentient life. in just the same way, "scientists" who are theists when they arent at their day jobs are utterly insufficient to bring eutopia to sentient life.
<<Maybe we can beat the proverb—be rational in our personal lives, not just our professional lives. We shouldn’t let a mere proverb stop us: “A witty saying proves nothing,” as Voltaire said. Maybe we can do better, if we study enough probability theory to know why the rules work, and enough experimental psychology to see how they apply in real-world cases—if we can learn to look at the water. An ambition like that lacks the comfortable modesty of being able to confess that, outside your specialty, you’re no better than anyone else. But if our theories of rationality don’t generalize to everyday life, we’re doing something wrong. It’s not a different universe inside and outside the laboratory.>>
-- eliezer yudkowsky, "outside the laboratory"
to save the world it doesnt help to castrate yourself and make extra super sure not to have babies. people's values are already what they are; their choices have already been made. this sort of ad-hoc patch is what wrangling an unaligned agent looks like, and the output of an unaligned agent with a bunch of patches isnt worth much. would you delegate important tasks to an unaligned AI that was patched up after each time it gave a bad output?
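as a sketch of why the patches dont add up to alignment (actions and scores invented for illustration, obviously not anyone's real utility function):

```python
# toy model: patching an unaligned optimizer after each bad output.
# the patches delete individual actions; the objective that generated
# them is untouched, so the agent takes the nearest unblocked strategy.

misaligned_score = {        # what the agent actually optimizes
    "defraud donors": 10,
    "exploit community members": 9,
    "launder status through parenting": 8,
    "quietly do good work": 1,
}

patches = set()  # grows by one entry each time a bad output is caught

def act(scores, blocked):
    """pick the highest-scoring action not yet blocked by a patch."""
    allowed = {a: s for a, s in scores.items() if a not in blocked}
    return max(allowed, key=allowed.get)

for step in range(4):
    choice = act(misaligned_score, patches)
    print(step, choice)
    if choice != "quietly do good work":
        patches.add(choice)  # patched out, but only after the harm is done

# the agent walks straight down its own preference order, one scandal per
# patch, and only "does good" once everything else is forbidden. an aligned
# agent wouldnt need the patches; an unaligned one isnt fixed by them.
```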
it does mean that if, after they know about the world and what they can do, people still say that they specifically should have babies, i mark them as having a kind of damage and route around them.
someone not having babies doesnt automatically mark them as someone id pour optimization energy into, expecting it to combine towards good ends. the metrics i use are cryptographically secure against being goodharted, so i can talk openly about the traits i use to discern between people without worrying that people will read about this and use it to gum up my epistemics.
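the analogy is kerckhoffs' principle from cryptography: a good cipher stays secure even when everything about its design is public. a minimal sketch of the corresponding property for metrics, with costs and payoffs invented for illustration:

```python
# toy model: a metric is goodhart-proof in this sense when publishing it
# doesnt let unaligned actors pass, because faking it costs them more
# than passing is worth. all numbers are made up.

PASS_VALUE = 10  # what an actor gains by being trusted

def passes(fake_cost, aligned):
    """an aligned actor passes for free; an unaligned actor fakes the
    metric iff faking costs less than the trust it buys."""
    return True if aligned else fake_cost < PASS_VALUE

# metric 1: "says altruistic things" -- trivially faked once known to matter.
# metric 2: "demonstrably wins in a way that wouldnt work if unaligned" --
#           faking it means actually doing the aligned thing, cost > payoff.
for name, fake_cost in [("cheap talk", 0), ("costly demonstration", 50)]:
    aligned_pass = passes(fake_cost, aligned=True)
    unaligned_pass = passes(fake_cost, aligned=False)
    print(name, "still informative when public:", aligned_pass and not unaligned_pass)
# cheap talk: False  (everyone passes; the metric has been goodharted)
# costly demonstration: True  (only aligned actors pass, even in the open)
```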
27 notes · View notes