Minds aren't magic

Paul Crowley

The long scale of utility

The way that utilitarianism is normally stated is a terrible way to think about it, and one that leads to real errors. Wikipedia:

Utilitarianism [holds] that the proper course of action is the one that maximizes utility

This idea strikes terror into people’s hearts: this is a standard that no-one could possibly live up to. Even Toby Ord occasionally buys himself a treat. This is the heart of the demandingness objection to utilitarianism. I think this definition says both too much and too little; it comments only on the highest point on the scale, whereas a better definition can illuminate the whole scale. I would rather say this:

A course of action that results in higher utility is proportionally better.

So yes, it’s better to give all your money to GiveDirectly than to spend it all on a yacht. But it’s also better to give £5 to GiveDirectly than nothing; and having given £5, you can feel good that your action is better than giving £0 and wonder if you might give £10, which would be better still.

People spend a lot of effort on trying to work out whether they reach the bar on their actions, and where the bar should be. They are hard on themselves for not reaching the bar they set, and worry that if they stopped being hard on themselves they would slide back and fail to achieve what they could. Utilitarianism, by the first definition, seems to set a bar so high that you can’t hope to reach it. But the truth is, there is no bar; there’s an infinitely long scale of utility. And so the question is not “is this the very best I can do”, but “can I do better than this? How much better?”

(A post I’ve been meaning to write for some time, finally prompted by a blog post by Julia Wise, which in turn arose out of a conversation between me, her and Jeff Kaufman.)

Never be sarcastic

I’m sometimes sarcastic, but I’m trying to give it up altogether. It’s bad to be sarcastic because civility over disagreements is a good idea, and sarcasm is uncivil. But there’s another reason to avoid it.

There are mistaken arguments that sound vaguely persuasive when cloaked in sarcasm, but whose flaws would be obvious if you tried to state them straightforwardly. People say things like “oh, yeah, I’m sure if cigarettes are in plain packets then no-one will ever smoke again, that’ll solve the problem”. What’s the non-sarcastic form of this argument? The obvious turn-around is “I think that there will still be smoking if cigarettes are put in plain packets”—but put this way, it’s obvious that it’s arguing against a position that no-one is taking, since a reduction in smoking that stops short of elimination is still a good thing.

Or “the minimum wage is great, let’s have a minimum wage of $1000 an hour and we’ll all be rich”. Here the argument is at best incomplete—we can all agree that a $1000/hr minimum wage wouldn’t be a good idea, but you’re going to have to spell out what this is supposed to tell us about, say, a $15/hr minimum wage. If there’s a real argument behind what you say, you should be able to make it without sarcasm, and exactly what you are trying to argue will be clearer to all of us, including yourself.

Please do try to avoid the obvious jokes in your responses, thank you :)

Unoriginal and wrong posts ahead

I often hold back from posting, first because I’m very unsure whether what I’m saying is right, and second because I wonder whether someone else has already said it, and better, and I’d find it if I did more thorough reading. However, I’ve come to the conclusion that it’s much better to err on the side of posting unoriginal and wrong things than to let this stop me.

For even lower quality material, I now have a Tumblr.

The size of the Universe and the Great Filter

In a small Universe, an early Great Filter is unlikely, simply because we’re evidence against it.

Suppose the Filter is early; how severe must it be for us to observe what we observe? By “severe” I mean: what proportion of worlds must it stop? Our existence puts an upper bound on the severity of the Great Filter, while the fact that we observe no other life puts a lower bound on it. If the observable Universe is a substantial fraction of the whole Universe, then the two bounds aren’t very far apart, and so to defend an early Filter we have to believe in a great cosmic coincidence in which the severity of the Filter was just right, which in turn is evidence for a late Filter whose severity we have no upper bound on. This argument has in the past given me real cause to worry that the Filter is late, and very severe.

However, this argument doesn’t hold at all if the observable Universe is a tiny fraction of the whole Universe. The larger the ratio between the two, the bigger the gap between the bounds we have on the severity of the Filter, because intelligent life only has to appear once anywhere in the whole Universe for us to be here contemplating this question, while it has to appear at least twice within the much smaller observable bubble for us to see anyone else.
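To make that gap concrete, here is a toy back-of-the-envelope sketch in Python. The model and the numbers are my own illustration, not taken from the argument above or from real cosmology: it just assumes intelligent life arises independently at each habitable site with some probability q once the Filter has done its work, and asks how much room the two bounds leave for q.

```python
# Toy model (illustrative assumptions, not real cosmology):
# intelligent life arises independently at each habitable site with
# probability q once the Filter has done its work.
#
# Our existence somewhere in the whole Universe (n_total sites) roughly
# requires q to be at least of order 1/n_total; an empty observable sky
# (n_obs sites) roughly requires q to be well below 1/n_obs.

def severity_bounds_gap(n_total, n_obs):
    """How much room the two bounds leave for q in the toy model."""
    q_lower = 1.0 / n_total   # much below this, our own existence is surprising
    q_upper = 1.0 / n_obs     # much above this, an empty sky is surprising
    return q_upper / q_lower  # = n_total / n_obs

# Observable Universe ~ whole Universe: the bounds nearly coincide,
# so an early Filter needs a "cosmic coincidence" in its severity.
print(severity_bounds_gap(n_total=1e11, n_obs=1e11))  # 1.0

# Whole Universe vastly larger (or many-branched): plenty of room for q,
# so an early Filter needs no coincidence at all.
print(severity_bounds_gap(n_total=1e30, n_obs=1e11))  # ~1e19
```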

As I understand it, modern cosmology points towards a Universe that is either infinite, or very much larger than the observable Universe, so on those grounds alone we can perhaps worry less. But far more strongly than that, the Many Worlds interpretation of quantum mechanics gives us a many-branched Universe that is just unthinkably larger than the tiny portion we can observe; if intelligent life emerged on as many Everett branches as there are stars in the galaxy, we would still appear alone to the best of our ability to tell. So I now think that this isn’t a reason to worry that the Filter is late. It is, however, an excellent reason to expect never to meet an alien. Sorry.

Resolving Yeats’s Paradox

The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

In general, people talk about what they’re confident of, and keep quiet about what they’re not confident of. There are areas where this can make sense; if I ask which Bond came between Roger Moore and Pierce Brosnan, there’s an excellent chance that the person most confident of the answer has the right one, so it’s best they speak first.

However, a question such as “what does the long-term future hold for humanity?” is affected by innumerable unknowns, and no very confident answer to it can possibly be warranted; at best we can hope to make antipredictions. If we follow standard conventions, we will thus stay silent on such matters. The only people who do speak will be those who think they know what’s going to happen; we will leave the entire discussion of humanity’s long-term future to crazy people. I don’t think that’s a good idea. We see the same pattern in politics: the people who have the best judgement are exactly those who appreciate the uncertainties of these things, but all decision-making is driven by the very confident; no-one fights for a measure they’re not sure will improve things.

We can only get out of this trap if we can energetically pursue courses of action we’re not sure about; if we are able to lack all conviction and still be filled with passionate intensity.

Diagrams for preference matrices

I always have a hard time making sense of preference matrices in two-player games. Here are some diagrams I drew to make it easier. This is a two-player game:

[Diagram 1]

North wants to end up on the northernmost point, and East on the easternmost. North goes first, and chooses which of the two bars will be used; East then goes second and chooses which point on the bar will be used.

North knows that East will always choose the easternmost point on the bar picked, so one of these two:

[Diagram 2]

North checks which of the two points is further north, and so chooses the leftmost bar, and they both end up on this point:

[Diagram 3]

Which is sad, because there’s a point north-east of this that they’d both prefer. Unfortunately, North knows that if they choose the rightmost bar, they’ll end up on the easternmost, southernmost point.

Unless East can somehow precommit to not choosing this point:

[Diagram 4]

Now East is going to end up choosing one of these two points:

[Diagram 5]

So North can choose the rightmost bar, and the two players end up here, a result both prefer:

[Diagram 6]

I won’t be surprised if this has been invented before, and it may even be superseded – please do comment if so :)
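For the programmatically minded, here is a minimal backward-induction sketch of the same reasoning in Python. The payoff numbers are made up for illustration rather than read off the diagrams above: each point is a (north, east) pair, North picks a bar, and East then picks a point on it.

```python
# Backward induction for the two-move game described above.
# Payoffs are illustrative: each point is (north_payoff, east_payoff).

def east_choice(bar, forbidden=()):
    """East picks the point on the chosen bar that maximises East's payoff,
    skipping any points East has precommitted never to choose."""
    allowed = [point for point in bar if point not in forbidden]
    return max(allowed, key=lambda point: point[1])

def play(bars, forbidden=()):
    """North anticipates East's response to each bar, then picks the bar
    whose resulting point maximises North's payoff."""
    outcomes = [east_choice(bar, forbidden) for bar in bars]
    return max(outcomes, key=lambda point: point[0])

left_bar = [(2, 0), (1, 1)]   # East would pick (1, 1) here
right_bar = [(3, 2), (0, 3)]  # East would pick (0, 3) here

print(play([left_bar, right_bar]))                      # (1, 1): the sad outcome
print(play([left_bar, right_bar], forbidden={(0, 3)}))  # (3, 2): both prefer this
```

East’s precommitment is modelled simply as a set of points East refuses to choose; ruling out the tempting point on the rightmost bar is what makes it safe for North to pick that bar.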

EDIT: here’s a game where East has to both promise and threaten to get a better outcome:

[Diagram: 0,1-1,3_2,2-3,0]

[Diagram: 0,1-1,3_2,2-3,0-x]

How to donate to the Future of Humanity Institute as a UK taxpayer

As charitable causes, MIRI, CFAR, and FHI are the ones that seem to me to do the most good per pound given, and they are not far apart in efficiency. However, as a UK taxpayer, I can give to FHI much more efficiently than I can to the other two. I took some wrong turns in working out how best to set up my monthly donation, which meant it took longer than it needed to; here’s what I learned, so that it can be easier for you.

First of all, don’t try to use Give As You Earn. If you do this, the University of Oxford can’t tell for whom the money was earmarked, and so it ends up in general coffers. Instead, set up a donation via direct debit or credit card, and remember to tick the “Gift Aid” box.

This means that if you give £60, FHI get £75. However, if you pay tax at the 40% rate (i.e. if your salary is more than ~£41.5k), then for the same £60 out of your pocket you can get FHI £100. To do this, make your donation £80 and claim the £20 back from the Revenue by writing to them, something like this (there’s a quick check of this arithmetic after the letter):

HM Revenue and Customs
PAYE and Self Assessment
PO Box 1970
Liverpool
L75 1WX

<your address, phone number and NI number>

From today, I have set up a monthly donation of £80 with Gift Aid to the University of Oxford Development Trust Fund. The Fund is an exempt charity for the purpose of charity legislation. As such, it has full charitable status; it is exempt from the requirement to register as a charity with the Charity Commission, and therefore does not have a Charity reference number. I understand that since I pay the 40% tax rate, this should mean a change in my tax code that reduces my monthly tax bill by around £20; please can you make this change?

Thanks!

I got them to change my code by calling them, but since it’s about 3 minutes navigating the phone tree and a further 20 mins on hold, writing may save time. If you’re paid over £150k you can get even more tax back – at that pay I’m guessing you have an accountant and they can advise you better than I can :)
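If you want to double-check that arithmetic, here is a quick sketch; the rates are the ones described above (20% basic rate reclaimed by the charity through Gift Aid, 40% higher rate), and the function is just my own illustration.

```python
# Gift Aid arithmetic as described above (UK rates at the time of writing).

def gift_aid(donation, basic_rate=0.20, your_rate=0.20):
    """Return (what the charity receives, what the donation really costs you)."""
    gross = donation / (1 - basic_rate)               # charity reclaims basic-rate tax
    relief = gross * max(your_rate - basic_rate, 0)   # you reclaim the difference
    return gross, donation - relief

print(gift_aid(60))                  # (75.0, 60.0): basic-rate taxpayer
print(gift_aid(80, your_rate=0.40))  # (100.0, 60.0): higher-rate taxpayer
```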

Thanks to purplecoffeespoons for her advice on how to sort this out! CORRECTED 2013-08-07: thanks to both purplecoffeespoons and Psyche for setting me straight on the tax bands.

Efficient altruism links

In another lovely and fascinating conversation with Dr Meg Barker, the subject of efficient altruism came up. I promised to furnish her with some links – these are good places to start.

Some brief notes on how to sign up for cryonics in the UK

If you want to be signed up for cryonics but you haven’t quite taken the first step, you should send three emails now.

First, you’re going to need life insurance to pay for your cryopreservation and storage. For that I recommend my financial advisor, Chris Morgan of Compass Independent; I persuaded him to look into how to do insurance for cryonics and since then he’s looked after me very well.

Second, you’ll need a contract with a cryonics provider. I am signed up with the Cryonics Institute.

Third, you’ll need a standby and transport service; in the UK that means Cryonics UK.

So the very next thing you should do, right now, is send three quick emails saying “I want to sign up for cryopreservation and you’ve been recommended to me. What shall I do next?” to each of the three organisations above.

It’s likely that more people have died while cryocrastinating than have actually been cryopreserved, so don’t delay. Fire off those three emails now; don’t worry about composing them just right, all you have to say is “OK I’m signing up, what do I do now?”.

Then comment to say you’ve done it, and if there’s any way I can help, let me know.

Don’t turn up the heat

“So you’re saying that there’s nothing wrong with raping and murdering people?”

I recently got a response something like this in a discussion about moral philosophy. It’s something that people say a lot when they first encounter the idea of moral anti-realism, and I hope to address it in a future post, but first I want to say this: there’s no need to use the most emotive example you can think of to make this argument. For most meta-ethical arguments, if it’s a valid argument when you use rape and murder as an example, it’ll be just as valid if you use pushing your way onto the train before the other passengers have got off, and vice versa. Using the more emotive example here serves only to turn up the heat, which can result in people thinking less clearly.

I thought of this recently in a discussion of the de Finetti way of looking at probability as a choice between gambles. Given a choice between two hypothetical gambles for money, a lot of people are tempted by the grand gesture of turning down both gambles, even if that means wishing away a zero-risk chance at free money. So it can be good to reframe it as a chance to prevent some harm happening to other people – that way it seems more obvious that the good thing to do is to go for some chance of preventing the harm over allowing it to happen with certainty. And of course, as is standard in these philosophical discussions, my first thought was to let the hypothetical stakes be something like “a thousand people die horribly”. But I remembered the admonition above, and we worked out another example – something just bad enough that you’d feel like a heel if you just let it happen with certainty when you could have taken a chance to prevent it, but not so bad that its awfulness could seriously distract from the topic at hand, which was probability theory.

And so we imagined a man in a Florida retirement home, whose fate was not going to be critical to the fate of the world in a way that could make our hypotheticals more confusing, who was at risk of a painful burn on his left little finger that would annoy him for a week. And that was enough – enough to seem more important that this man not get a burn than that we follow one ritual or other in choosing our actions, without the side order of “and you’re a TERRIBLE PERSON if you don’t take my position on this” that seems to come with piling up the stakes as high as your imagination can go before posing your hypothetical.

Heat is sometimes necessary. If a weaker example doesn’t work, a stronger one can sometimes make the difference. But please don’t start at the highest temperature you can reach – more heat doesn’t usually mean more light.