Minds aren't magic

Paul Crowley

Resolving Yeats’s Paradox

The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

In general, people talk about what they’re confident of, and keep quiet about what they’re not confident of. There are areas where this can make sense; if I ask which Bond came between Roger Moore and Pierce Brosnan, there’s an excellent chance that the person most confident of the answer has the right one, so it’s best they speak first.

However, a question such as “what does the long-term future hold for humanity?” is affected by innumerable unknowns, and no very confident answer to it can possibly be warranted; at best we can hope to make antipredictions. If we follow standard conventions, we will thus stay silent on such matters. The only people who do speak will be those who think they know what’s going to happen; we will leave the entire discussion of humanity’s long-term future to crazy people. I don’t think that’s a good idea.  We see the same pattern in politics: the people who have the best judgement are exactly those who appreciate the uncertainties of these things, but all decision-making is driven by the very confident; no-one fights for a measure they’re not sure will improve things.

We can only get out of this trap if we can energetically pursue courses of action we’re not sure about; if we are able to lack all conviction and still be filled with passionate intensity.

Disclaimer: I don’t think I’m saying anything very new in the above—this is just a point I find myself making in conversation often, so I thought it might be valuable to get it into a blog post. Please do link me to anyone making a similar point elsewhere!

Diagrams for preference matrices

I always have a hard time making sense of preference matrices in two-player games. Here are some diagrams I drew to make it easier. This is a two-player game:

[Diagram 1]

North wants to end up on the northernmost point, and East on the easternmost. North goes first, and chooses which of the two bars will be used; East then goes second and chooses which point on that bar will be used.

North knows that East will always choose the easternmost point on the bar picked, so one of these two:

[Diagram 2]

North checks which of the two points is further north, and so chooses the leftmost bar, and they both end up on this point:

[Diagram 3]

Which is sad, because there’s a point north-east of this that they’d both prefer. Unfortunately, North knows that if they choose the rightmost bar, they’ll end up on the easternmost, southernmost point.

Unless East can somehow precommit to not choosing this point:

[Diagram 4]

Now East is going to end up choosing one of these two points:

[Diagram 5]

So North can choose the rightmost bar, and the two players end up here, a result both prefer:

[Diagram 6]
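
Here’s the same reasoning as a small Python sketch. The coordinates are made up, since I’m not reproducing the diagrams, but the shape is the same: the left bar’s easternmost point is the sad outcome, and the right bar contains both a point they’d both prefer and a point East finds even more tempting.

# A minimal sketch of the game above, with made-up (east, north) coordinates.
# East maximises east, North maximises north; North moves first by picking a bar.

def east_choice(bar):
    # East picks the easternmost point on the chosen bar.
    return max(bar, key=lambda point: point[0])

def north_choice(bars):
    # North, anticipating East's reply, picks the bar whose outcome is northernmost.
    return max((east_choice(bar) for bar in bars), key=lambda point: point[1])

left_bar = [(0, 2), (1, 1)]   # East would pick (1, 1)
right_bar = [(2, 3), (3, 0)]  # East would pick (3, 0), which is terrible for North

print(north_choice([left_bar, right_bar]))  # (1, 1): the sad outcome

# If East can precommit to never choosing (3, 0), the right bar looks safe to North:
print(north_choice([left_bar, [p for p in right_bar if p != (3, 0)]]))  # (2, 3): both prefer this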

I won’t be surprised if this has been invented before, and it may even have been superseded – please do comment if so :)

EDIT: here’s a game where East has to both promise and threaten to get a better outcome:

[Diagram: 0,1-1,3_2,2-3,0]

[Diagram: 0,1-1,3_2,2-3,0-x]
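
Reading the (east, north) coordinates out of the diagram’s name (so treat these payoffs as my reconstruction of the game rather than a certainty), the same kind of sketch shows why East needs both the promise and the threat:

# Sketch of the edited game; the payoffs here are my guess at the diagram's intent.
def outcome(bars, east_allowed=lambda p: True):
    # East picks the easternmost allowed point on each bar; North, anticipating
    # that, picks the bar whose resulting point is northernmost.
    replies = [max((p for p in bar if east_allowed(p)), key=lambda p: p[0])
               for bar in bars]
    return max(replies, key=lambda p: p[1])

bar1, bar2 = [(0, 1), (1, 3)], [(2, 2), (3, 0)]

print(outcome([bar1, bar2]))                                   # (1, 3): East only gets east = 1
print(outcome([bar1, bar2], lambda p: p != (3, 0)))            # (1, 3): the promise alone changes nothing
print(outcome([bar1, bar2], lambda p: p in [(0, 1), (2, 2)]))  # (2, 2): promise plus threat gets East to 2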

How to donate to the Future of Humanity Institute as a UK taxpayer

Among charitable causes, MIRI, CFAR, and FHI seem to me to do the most good per pound given, and they are not far apart in efficiency. However, as a UK taxpayer, I can give to FHI much more efficiently than I can to the other two. I took some wrong turns in working out how best to set up my monthly donation, which meant it took longer than it needed to; here’s what I learned, so that it can be easier for you.

First of all, don’t try to use Give As You Earn. If you do this, the University of Oxford can’t tell for whom the money was earmarked, and so it ends up in the general coffers. Instead, set up a donation via direct debit or credit card, and remember to tick the “Gift Aid” box.

This means that if you give £60, FHI get £75. However, if you pay tax at the 40% rate (i.e. if your salary is more than ~£41.5k), then for your £60 you can give FHI £100. To do this, make your donation £80 and claim back the £20 from the Revenue by writing to them, something like this (I sketch the arithmetic below the letter):

HM Revenue and Customs
PAYE and Self Assessment
PO Box 1970
Liverpool
L75 1WX

<your address, phone number and NI number>

From today, I have set up a monthly donation of £80 with Gift Aid to the University of Oxford Development Trust Fund. The Fund is an exempt charity for the purpose of charity legislation. As such, it has full charitable status; it is exempt from the requirement to register as a charity with the Charity Commission, and therefore does not have a Charity reference number. I understand that since I pay the 40% tax rate, this should mean a change in my tax code that reduces my monthly tax bill by around £20; please can you make this change?

Thanks!
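
Here is the arithmetic behind those numbers as a quick Python sketch; the 20% basic and 40% higher rates are the 2013 figures assumed above, and gift_aid is just an illustrative helper, not anything HMRC provides.

# Sketch of the Gift Aid arithmetic (2013 rates assumed: 20% basic, 40% higher).
BASIC_RATE = 0.20
HIGHER_RATE = 0.40

def gift_aid(net_donation, your_tax_rate):
    # Return (what the charity receives, what the donation really costs you).
    gross = net_donation / (1 - BASIC_RATE)                 # the charity reclaims basic-rate tax
    reclaim = gross * max(your_tax_rate - BASIC_RATE, 0.0)  # you reclaim the rest from HMRC
    return gross, net_donation - reclaim

print(gift_aid(60, BASIC_RATE))   # (75.0, 60.0): give £60, FHI gets £75
print(gift_aid(80, HIGHER_RATE))  # (100.0, 60.0): give £80, reclaim £20, so it costs you £60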

I got them to change my code by calling them, but since it’s about 3 minutes navigating the phone tree and a further 20 mins on hold, writing may save time. If you’re paid over £150k you can get even more tax back – at that pay I’m guessing you have an accountant and they can advise you better than I can :)

Thanks to purplecoffeespoons for her advice on how to sort this out! CORRECTED 2013-08-07: thanks to both purplecoffeespoons and Psyche for setting me straight on the tax bands.

Efficient altruism links

In another lovely and fascinating conversation with Dr Meg Barker, the subject of efficient altruism came up. I promised to furnish her with some links – these are good places to start.

Some brief notes on how to sign up for cryonics in the UK

If you want to be signed up for cryonics but you haven’t quite taken the first step, you should send three emails now.

First, you’re going to need life insurance to pay for your cryopreservation and storage. For that I recommend my financial advisor, Chris Morgan of Compass Independent; I persuaded him to look into how to do insurance for cryonics and since then he’s looked after me very well.

Second, you’ll need a contract with a cryonics provider. I am signed up with the Cryonics Institute.

Third, you’ll need a standby and transport service; in the UK that means Cryonics UK.

So the very next thing you should do, right now, is send three quick emails saying “I want to sign up for cryopreservation and you’ve been recommended to me. What shall I do next?” to these email addresses:

It’s likely that more people have died while cryocrastinating than have actually been cryopreserved, so don’t delay. Fire off those three emails now; don’t worry about composing them just right, all you have to say is “OK I’m signing up, what do I do now?”.

Then comment to say you’ve done it, and if there’s any way I can help, let me know.

Don’t turn up the heat

“So you’re saying that there’s nothing wrong with raping and murdering people?”

I recently got a response something like this in a discussion about moral philosophy. It’s something that people say a lot when they first encounter the idea of moral anti-realism, and I hope to address it in a future post, but first I want to say this: there’s no need to use the most emotive example you can think of to make this argument. For most meta-ethical arguments, if it’s a valid argument when you use rape and murder as an example, it’ll be just as valid if you use pushing your way onto the train before the other passengers have got off, and vice versa. Using the more emotive example here serves only to turn up the heat, which can result in people thinking less clearly.

I thought of this recently in a discussion of the de Finetti way of looking at probability as a choice between gambles. Given a choice between two hypothetical gambles for money, a lot of people are tempted by the grand gesture of turning down both gambles, even if that means wishing away a zero-risk chance at free money. So it can be good to reframe it as a chance to prevent some harm happening to other people – that way it seems more obvious that the good thing to do is to go for some chance of preventing the harm over allowing it to happen with certainty. And of course, as is standard in these philosophical discussions, my first thought was to let the hypothetical stakes be something like “a thousand people die horribly”. But I remembered the admonition above, and we worked out another example – something just bad enough that you’d feel like a heel if you just let it happen with certainty when you could have taken a chance to prevent it, but not so bad that its awfulness could seriously distract from the topic at hand, which was probability theory.

And so we imagined a man in a Florida retirement home, whose fate was not going to be critical to the fate of the world in a way that could make our hypotheticals more confusing, who was at risk of a painful burn on his left little finger that would annoy him for a week. And that was enough – enough to seem more important that this man not get a burn than that we follow one ritual or other in choosing our actions, without the side order of “and you’re a TERRIBLE PERSON if you don’t take my position on this” that seems to come with piling up the stakes as high as your imagination can go before posing your hypothetical.

Heat is sometimes necessary. If a weaker example doesn’t work, a stronger one can sometimes make the difference. But please don’t start at the highest temperature you can reach – more heat doesn’t usually mean more light.

State of the Paul

In July, along with two dozen other people, I attended a CFAR 8-day minicamp in the Bay Area of California. CFAR plan to test in various ways whether minicamp has made people more effective over the course of one year; this is salient to me because I played a role in the baseline assessments that took place before minicamp. As we approach the half-way mark, am I more effective, and what role does minicamp play in that?

The most valuable lasting thing I got out of attending, I think, is a renewed determination to continually up my game. A big part of that is that the minicamp creates a lasting community of fellow alumni who are also trying for the biggest bite of increased utility they can get, and that’s no accident; CFAR president Julia Galef’s 2012 Singularity Summit talk makes explicit her goal of encouraging rationality by fostering a community where it is practiced and valued. I met a lot of really amazing people while I was there, and I’m encouraged by their example.

The most valuable single class for me was Valentine’s introduction to Getting Things Done. I don’t think I could have adopted it given only the resolutely paper-based description in David Allen’s book; Valentine’s description of how he makes it work for him using Remember The Milk and his smartphone was essential to breathe life into those bones.

What changes have I made since minicamp? Somewhere in between the infinitely expanding superpowers I’d hoped for when the camp was coming to a close, and the zero change I’d feared. I’m 41, so a big change in habits is less likely for me than for the mostly much younger bulk of the participants. In addition, I’d already been part of the OB/LW community for over four years and read all of the Sequences more than once, so a little less of what was taught was new to me – though to my surprise I’d say a clear majority of it was new, or at least things I hadn’t considered in the depth they deserved. Still, any change that lasts over six months is a surprise compared to what seems to be the normal pattern for things like this, where a burst of enthusiasm lasting a month or two peters out by the three-month mark. Here’s what I’ve changed:

  • I’ve adopted some of Getting Things Done. I have done only a few weekly reviews, but I have adopted the habit of putting all TODO items into the Inbox, and reviewing the inbox to turn items into first actions.  As part of that change, I’ve switched from trying to be my own mail provider to using Google Mail, which I’m very glad I did, and I try to keep an empty inbox on Google Mail.  I’ve also installed Netmemo as an easy way of recording TODO items as I think of them. In GTD you’re supposed to empty your inbox daily; I sometimes do this, and sometimes go through long periods where I can’t look at it, followed by big clearups.  Still, I manage to review everything in Remember the Milk on a pretty regular basis, and I don’t have any “oh yes, mustn’t forget that” worries stored in my head; it’s all in the system. I definitely feel as though I’m getting more things done, but it’s hard to know for sure when I didn’t have a system for measuring it before!
  • That last link reveals a second pattern; if I want to achieve something, I want to measure how I’m doing. A minicamp workshop used tooth flossing as an example of something we sometimes don’t have the willpower to do; I decided to floss nightly.  There is now a jar in the bathroom containing floss picks; every time I refill it, I count how many picks I add and put it in Beeminder. Without going into gory details, I can say that there is an unmistakable improvement in my oral health.
  • I have taken several other measures to improve my health. I got fillings where I needed them and have been to the doctor for my longstanding nose and knee issues, getting physiotherapy for my knees and a treatment program for my nose. Using Med Helper I can actually keep track of whether I’m following the program; I usually do most of it, but it’s the lunchtime saline rinse that I find hardest to keep up, especially when not at work, hence the poor performance over the holidays!
  • Finally, I have been getting regular exercise, nearly daily.  I wanted something that wouldn’t strain my knees, so I went with a mashup of programs from One Hundred Pushups and Two Hundred Situps, mixed in with some chin-ups so my muscles don’t get too unbalanced, and some barbell exercises with 2kg barbells to make sure that my rotator cuffs are strong enough that I won’t damage myself with chin-ups! All of that is tracked on Fitocracy.  It’s making a noticeable, and pleasing, difference to the way my chest looks, but it doesn’t seem to have resulted in dramatically greater energy levels yet.

I’m applying other ideas from minicamp to my daily life, using ideas like fungibility more consistently in my decision making than I did prior to minicamp. In addition of course, minicamp was a great chance to talk to and make friends with uncountable numbers of incredibly smart and fascinating people in very pleasant surroundings!

CFAR have announced workshops in January and March. Attend, become more effective, and make a bigger difference.

Greatest video of all time

I made this video in several drafts over the course of last year. It illustrates where human events sit on the scale of all time, at a scale of a million years per meter. I’d meant it to show how short human timescales are on the scale of all history, but I worry that it has exactly the reverse effect, because the smooth exponential scaling means that events like control of fire take place half way through the video. It may help to zoom out again at the end, and show the scale of all history again; I may try making a new version!
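
For what it’s worth, here’s a rough sketch of why a constant-rate exponential zoom crowds recent events towards the end of the video. The endpoints (the whole age of the universe down to a one-year span) are assumptions for illustration, not the video’s actual parameters.

import math

# With a constant-rate exponential zoom, an event of age A fills the frame a
# fraction of the way through proportional to log(total_age / A).
# The endpoints below are illustrative assumptions, not the video's parameters.
AGE_OF_UNIVERSE = 13.8e9  # years
FINAL_SPAN = 1.0          # assume the zoom ends at a one-year span

def fraction_of_video(event_age_years):
    return (math.log(AGE_OF_UNIVERSE / event_age_years)
            / math.log(AGE_OF_UNIVERSE / FINAL_SPAN))

print(fraction_of_video(1e6))  # control of fire, very roughly a million years ago: ~0.41
print(fraction_of_video(1e4))  # agriculture, ~10,000 years ago: ~0.61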

My questions for Leah Libresco

Leah Libresco made waves earlier this year when, after years of blogging for the Patheos atheism portal, she announced her conversion to Roman Catholicism.  Shortly after that, in late July, she attended the same CFAR one-week camp as me, and it was a privilege to spend time with her: she’s smart, energetic, thoughtful and very good fun.  She has also been incredibly helpful coordinating post-camp activities to keep us all in touch and help us help each other achieve our goals.  Oh, and I think the vowel is pronounced the same way as “see ya”; I was told “not like Princess Leia”, which immediately meant I could no longer remember how Princess Leia’s name was pronounced.

Near the end of minicamp, late one evening, I asked her if she had spent much time arguing about religion with fellow minicampers. I tried to persuade her that she should, but I never got to pursue my own line of argument in detail—the conversation moved on and it was all too interesting to drag it back!  So I’m setting it out again here in the hope that she’ll have time to respond.

I confess I don’t fully understand her justification for converting, and I know I’m not alone: “it seems that her justification is opaque and too complicated for one blog post” wrote Vlad Chituc. In general, though, I find it’s a mistake to try to get into religious arguments “from the inside”—you end up playing a game of self-referential Twister whose rules you neither know nor care about. Instead I wanted to start from the outside, with a sequence of hypotheticals.  Leah believes that there is something about morality that implies a god.  I wanted to know whether each of the following hypothetical worlds is compatible with the absence of a god:

  • A world in which there is no life
  • A world in which there are only simple single-celled animals
  • Multicellular animals
  • Mutualism, such as between flowers and bees
  • The kind of reciprocal altruism we observe in animal species
  • A species that is violent towards those who don’t sacrifice their own interests to further the interests of all
  • A species that is violent towards those who are not violent as above
  • A species that develops language, and uses it to talk about who will be punished and who will not
  • A species that uses the same kind of language as we do to talk about morality.

If I recall correctly, Leah accepted the compatibility of all of these worlds with godlessness except the last.

On the one hand, this is good—there is a clear path by which she believes the existence of a god is the historical cause of her belief in a god, as any good Bayesian requires.  I was worried that the argument would be entirely based on ideas in moral philosophy, not on things we could observe about the world—such an argument would hold in all my hypothetical worlds, not just the last one.

On the other hand, if this is the key to her argument, then it’s odd that so much of it is taken up with discussing moral philosophy, when what she should be entirely concerned with is evolutionary psychology, which would directly address the question of whether our current attitudes and language about morality can arise in a universe without a god.

Near the end of our discussion, she asked me: “do you think morality is more like a matter of taste, or more like math?” I didn’t get a chance to answer, which is one reason I wanted to write this blog post. As it happens, on metaethical matters I tend to agree with Joshua Greene. But what I really wanted to say was I’M ASKING THE QUESTIONS! Or to put it a less silly way, I’m happy to have a discussion about what I think about metaethics, but I don’t see how that relates to my efforts to understand what her position is using hypothetical questions.

I chose my questions exactly in order to try and step around the minefield of metaethics, because what I wanted to know was, is there something different about what we observe that is acting as evidence for a god here?

No architectural leap required

I recently listened to the Yudkowsky-Hanson debate that took place at Jane Street Capital in June 2011.  It’ll surprise no-one that I’m more convinced by Eliezer Yudkowsky’s arguments than by Robin Hanson’s, but the points below aren’t meant to recap or cover the entire debate.

At roughly 34 minutes in, Hanson leans hard on the idea that a machine intelligence that could rapidly outcompete the whole of humanity would have to have some architectural insight, missing from the makeup of human intelligence, that makes it vastly more effective. The “brain in a box in a basement” scenario assumes no such thing. It imagines the machine intelligence starting out with no better tools for understanding the world than a human being starts with; simply because of the change in substrate from biology to silicon, it can do what we do vastly faster than human intelligence, and it is this advantage in speed and throughput that allows it to do better than us at building machine intelligences, and thus to dramatically outcompete us.  As Yudkowsky puts it, roughly 2,500 years have elapsed since Socrates; a machine that thinks a million times faster than we do gets through 2,500 years’ worth of thinking in under 24 hours.
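
That arithmetic is easy to check, assuming a flat millionfold speedup:

# Sanity check of the Socrates figure, assuming a flat millionfold speedup.
years_since_socrates = 2_500
speedup = 1_000_000
hours = years_since_socrates * 365.25 * 24 / speedup
print(f"{hours:.0f} hours")  # about 22 hours, i.e. under a day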

This scenario is made vivid in his That Alien Message, which I strongly recommend reading if you haven’t already.  In addition to the various skills we are born with to deal with things we encounter in the ancestral environment, like recognising faces, we have skills for thinking about seemingly arbitrary things utterly remote from that environment, like quantum electrodynamics, and for sharing our knowledge about those things through cultural exchange. The “brain in a box in a basement” scenario invites us to imagine a machine which can apply those skills at vastly superhuman speeds. Robin later speaks about how a great deal of what gives us power over the world isn’t some mysterious “intelligence” but simply hard-won knowledge; if our machine intelligence has these core skills, it will make up for any shortfall in knowledge relative to us the same way any autodidact does, but at its own extremely rapid pace.

None of this lands a fatal blow on Hanson’s argument; it can be maintained if we suppose that even these core skills are made up of a great many small, disparate pieces, each of which makes only a small improvement to learning, inference and decision-making ability.  However, when he seems to imply that knowledge outside these core skills will also need to be painstakingly programmed into a machine intelligence, or that some architectural superiority over human intelligence, beyond mere raw speed, would be needed to improve on our AI-building ability, I don’t think that can make sense.