Minds aren't magic

Paul Crowley

Rationality: From AI to Zombies

Rationality: From AI to Zombies, by Eliezer Yudkowsky, Machine Intelligence Research Institute, suggested price $5, 1813 pages

At Christmas 1982, aged eleven, I gave my Dad a copy of Douglas Hofstadter’s Gödel, Escher, Bach, and immediately borrowed it and read it myself. Like so many readers, I was captivated, and changed. Martin Gardner said of it: “Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event.” Nothing could take its place as the biggest influence on my thought—until a quarter century later, when I started reading daily essays by Eliezer Yudkowsky on the blog Overcoming Bias. Out of those essays came a community with thousands of participants and meetups in over fifty locations worldwide. Now the essays have been edited, organised, and combined with other writing by Yudkowsky and by Rob Bensinger to make this extraordinary work, Rationality: From AI to Zombies.

The two bear close comparison. Both GEB and RAZ take the reader on an eclectic journey through science, art and philosophy, drawing on the tropes of Eastern philosophy as well as Western popular science. Both are born of contemplation of artificial intelligence, but their subject is the human mind. As in GEB, each diversion in RAZ is fascinating in its own right, but each makes a real and important contribution to a central theme. Among other things, RAZ discusses the mathematics of probability and decision making, the foibles of human psychology, evolution, quantum mechanics, thermodynamics, and the philosophy of reason, of mind and of morality. For both, the central theme is hard to describe briefly—in the preface to the 20th Anniversary edition of GEB, Hofstadter describes his difficulties getting the New York Times to use a description that was not “utter hogwash”, while Yudkowsky can only offer that after years of study “it may be that you will glimpse the center”. Both are at times intellectually demanding. GEB is a large book; at 1800 pages organised into six books, RAZ is over twice the length, with a word count similar to The Lord of the Rings.

Both are works of tremendous originality and wit. GEB is undoubtedly the greater work of beauty; while Yudkowsky is an excellent writer, few if any can match the extraordinary fireworks of Hofstadter’s wordplay. Conversely, RAZ surpasses it in all of clarity, range, and depth.

It’s not without its flaws; it takes a while to really get started, and not all readers enjoy Yudkowsky’s Eastern nods. But RAZ also surpasses GEB in an area Gardner does not name: importance. While the nature of consciousness is a subject of endless fascination, RAZ’s drive to help us properly shift our views in response to evidence, and make better decisions in the face of uncertainty, using only the flawed instrument that is our brain, could hardly be more important.

Rationality: From AI to Zombies is available as an eBook package ($5 suggested, minimum price $0) or from Amazon stores the world over.

Declaration of interest: I have a spot in the acknowledgements for proofreading and very minor contributions to the new material.

Paying someone to help me learn degree-level maths

I have a plan, but some of the plan is probably wrong, so I’m posting here before executing in the hope that you can set me straight. Thanks!

I’m self-taught in most of the maths that I know. This has advantages, but it’s hard work; I can make a lot of progress by myself but if I get stuck on something it’s easy to stay stuck. I want to speed up my maths learning and pick up fields like category theory and mathematical logic, and it seems like even a small amount of tutoring could make a big difference. Obviously this is something friends who know the field can help with, but I can get a lot more control over the hows and whens by just paying someone. I still mostly want to teach myself, but with someone to turn to when I slow down; regular tutorials will also help me keep at it.

Tutoring over Hangouts/Skype has two advantages: I don’t have to travel or find a space for it to happen, and I can recruit from anywhere in the world, meaning it can be cheaper for me while still a good rate for the tutor. I could look for a tutor with a Google ad targeted to the right country with keywords from the fields I want to know about, and link the ad to a post on my main blog setting out the details.

Nitty-gritty specifics: I’d advertise across India, and offer 1000 INR/hour, which is around £10.60; a search suggests that programmers in Bangalore and Hyderabad are often hired out at around $12/hour, which is around £8, but unlike programming this is work that a PhD student can do. I’d pay in arrears by TransferWise. I’d offer to make the calls at either 7am or 9:30pm, whichever suited the tutor best.

The ad would say something like:

I’ll pay you 1000 INR/hour to help me learn category theory over video chat. I’m not a student, just curious!

Keywords: coproduct colimit … other ideas for category-theory-specific words welcome, as are ideas for what to search on for mathematical logic.

I’m not sure how to assess applicants—I guess it’ll depend on how many I get!

What am I missing?

EDITED TO ADD: I’ve added some clarification on what I want after a useful question on Twitter from John Armstrong (1, 2) – thanks!

Crowley’s Law

As always, laws aren’t named after the people who invent them. In 2011 I remarked:

“@frasernels” refers to Fraser Nelson, who is now @FraserNelson on Twitter. @palmer1984 immediately informed me that I had nicked this observation from her, which I find very credible, since I seem to make a habit of presenting her best ideas as my own! But it’s mildly noteworthy because Monbiot was taken with it, tweeting

and later, in “The Spectator runs false sea-level claims on its cover” (jointly authored with Mark Lynas), writing:

(We should, as the tweeter Paul Crowley suggested, institute a new version of Godwin’s law: a rightwinger, when his claims are challenged, will soon denounce his opponents as thought police. Let’s call it Crowley’s Law.)

Fraser Nelson’s invocation of the spirit of Orwell that inspired the coinage was quite ridiculous, done in the face not of any kind of censorship or suppression of speech but simply of direct criticism of what was a laughable publishing choice in the first place. However, I’m writing this blog post as a quick link to set the record straight on one issue: I’ve never agreed with George Monbiot’s politically partisan framing of the problem.

Update: the perfect postscript:

The long scale of utility

The way that utilitarianism is normally stated is a terrible way to think about it, and leads to real errors. From Wikipedia:

Utilitarianism [holds] that the proper course of action is the one that maximizes utility

This idea strikes terror into people’s hearts: this is a standard that no-one could possibly live up to. Even Toby Ord occasionally buys himself a treat. This is the heart of the demandingness objection to utilitarianism. I think this definition says both too much and too little; it comments only on the highest point on the scale, whereas a better definition can illuminate the whole scale. I would rather say this:

A course of action that results in higher utility is proportionally better.

So yes, it’s better to give all your money to GiveDirectly than to spend it all on a yacht. But it’s also better to give £5 to GiveDirectly than nothing; and having given £5, you can feel good that your action is better than giving £0 and wonder if you might give £10 which would be better still.

People spend a lot of effort on trying to work out whether they reach the bar on their actions, and where the bar should be. They are hard on themselves for not reaching the bar they set, and worry that if they stopped being hard on themselves they would slide back and fail to achieve what they could. Utilitarianism, by the first definition, seems to set a bar so high that you can’t hope to reach it. But the truth is, there is no bar; there’s an infinitely long scale of utility. And so the question is not “is this the very best I can do”, but “can I do better than this? How much better?”

(A post I’ve been meaning to write for some time, finally prompted by a blog post by Julia Wise, which in turn arose out of a conversation between her, me and Jeff Kaufman.)

Never be sarcastic

I’m sometimes sarcastic, but I’m trying to give it up altogether. It’s bad to be sarcastic because civility over disagreements is a good idea, and sarcasm is uncivil. But there’s another reason to avoid it.

There are mistaken arguments that sound vaguely persuasive when cloaked in sarcasm whose flaws would be obvious if you tried to say them straightforwardly. People say things like “oh, yeah, I’m sure if cigarettes are in plain packets then no-one will ever smoke again, that’ll solve the problem”. What’s the non-sarcastic form of this argument? The obvious turn-around is “I think that there will still be smoking if cigarettes are put in plain packets”—but put this way, it’s obvious that it’s arguing against a position that no-one is taking, since a reduction in smoking that stops short of elimination is still a good thing.

Or “the minimum wage is great, let’s have a minimum wage of $1000 an hour and we’ll all be rich”. Here the argument is at best incomplete—we can all agree that a $1000/hr minimum wage wouldn’t be a good idea, but you’re going to have to spell out what you think this tells us about, say, a $15/hr minimum wage. If there’s a real argument behind what you say, you should be able to make it without sarcasm, and exactly what you are trying to argue will be clearer to all of us, including yourself.

Please do try to avoid the obvious jokes in your responses, thank you :)

Unoriginal and wrong posts ahead

I often hold back from posting, first because I’m very unsure whether what I’m saying is right, and second because I wonder whether someone else has already said it, and better, and I’d find it if I did more thorough reading. However, I’ve come to the conclusion that it’s much better to err on the side of posting unoriginal and wrong things than to let this stop me.

For even lower quality material, I now have a Tumblr.

The size of the Universe and the Great Filter

In a small Universe, an early Great Filter is unlikely, simply because we’re evidence against it.

Suppose the Filter is early; how severe must it be for us to observe what we observe? By “severe” I mean: what proportion of worlds must it stop? Our existence is evidence towards an upper bound on the severity of an early Great Filter, while the fact that we observe no other life gives a lower bound. If the observable Universe is a substantial fraction of the whole Universe, then the two bounds aren’t very far apart, and so to defend an early Filter we have to believe in a great cosmic coincidence in which the severity of the Filter was just right; this in turn is evidence for a late Filter, whose severity we have no upper bound for. This argument has in the past given me real cause to worry that the Filter is late, and very severe.

However, this argument doesn’t hold at all if the observable Universe is a tiny fraction of the whole Universe. The larger the difference between these two numbers, the bigger the gap between the bounds we have for the severity of the Filter, because intelligent life only has to appear once in the whole Universe for us to be contemplating this question, while it has to appear a second time, within the far smaller observable bubble, for us to observe it.

As I understand it, modern cosmology points towards a Universe that is either infinite or very much larger than the observable Universe, so on those grounds alone we can perhaps worry less. But far more strongly than that, the Many Worlds interpretation of quantum mechanics gives us a many-branched Universe that is just unthinkably larger than the tiny portion we can observe; if intelligent life emerged on as many Everett branches as there are stars in the galaxy, we would still appear alone as best we could tell. So I now think that this isn’t a reason to worry that the Filter is late. It is, however, an excellent reason to expect never to meet an alien. Sorry.
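To see the shape of the bounds argument concretely, here’s a toy back-of-envelope model in Python. Every number in it is hypothetical: p is a made-up per-world chance of getting past an early Filter, and the world counts are picked purely for illustration.

```python
# Toy model of the bounds argument above; all numbers are hypothetical.
import math

def model(p, n_observable, n_total):
    """p: chance a given world gets intelligent life past an early Filter.
    Returns (chance anyone exists anywhere at all, expected number of
    *other* civilisations within our observable bubble)."""
    p_anyone = 1 - math.exp(-p * n_total)  # ~ 1 - (1 - p)**n_total for small p
    expected_neighbours = p * n_observable
    return p_anyone, expected_neighbours

# Small Universe (observable bubble ~ everything): any p high enough to
# make our own existence unsurprising also predicts visible neighbours.
print(model(p=1e-11, n_observable=1e11, n_total=1e11))  # (~0.63, 1.0)

# Vast Universe: a p small enough to leave our bubble empty still makes
# observers near-certain to exist somewhere.
print(model(p=1e-13, n_observable=1e11, n_total=1e30))  # (~1.0, 0.01)
```

The second case is the point of the post: with n_total vastly bigger than n_observable, our existence and our apparent solitude stop pinning the Filter’s severity into a narrow band.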

Resolving Yeats’s Paradox

The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

In general, people talk about what they’re confident of, and keep quiet about what they’re not confident of. There are areas where this can make sense; if I ask which Bond came between Roger Moore and Pierce Brosnan, there’s an excellent chance that the person most confident of the answer has the right one, so it’s best they speak first.

However, a question such as “what does the long-term future hold for humanity?” is affected by innumerable unknowns, and no very confident answer to it can possibly be warranted; at best we can hope to make antipredictions. If we follow standard conventions, we will thus stay silent on such matters. The only people who do speak will be those who think they know what’s going to happen; we will leave the entire subject of humanity’s long-term future to crazy people. I don’t think that’s a good idea. We see the same pattern in politics: the people who have the best judgement are exactly those who appreciate the uncertainties of these things, but all decision-making is driven by the very confident; no-one fights for a measure they’re not sure will improve things.

We can only get out of this trap if we can energetically pursue courses of action we’re not sure about; if we are able to lack all conviction and still be filled with passionate intensity.

Diagrams for preference matrices

I always have a hard time making sense of preference matrices in two-player games. Here are some diagrams I drew to make it easier. This is a two-player game:

[diagram 1]

North wants to end up on the northernmost point, and East on the easternmost. North goes first, and chooses which of the two bars will be used; East then goes second and chooses which point on the bar will be used.

North knows that East will always choose the easternmost point on the bar picked, so one of these two:

[diagram 2]

North checks which of the two points is further north, and so chooses the leftmost bar, and they both end up on this point:

[diagram 3]

Which is sad, because there’s a point north-east of this that they’d both prefer. Unfortunately, North knows that if they choose the rightmost bar, they’ll end up on the easternmost, southernmost point.

Unless East can somehow precommit to not choosing this point:

[diagram 4]

Now East is going to end up choosing one of these two points:

[diagram 5]

So North can choose the rightmost bar, and the two players end up here, a result both prefer:

[diagram 6]

I won’t be surprised if this has been invented before, and it may even be superseded – please do comment if so :)

EDIT: here’s a game where East has to both promise and threaten to get a better outcome:

[diagram: one bar with points (0,1) and (1,3), another with (2,2) and (3,0)]

[diagram: the same game with East’s commitments marked]
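For anyone who prefers code to diagrams, here’s a minimal sketch of the backward-induction reasoning above, in Python. The payoffs are my reading of the game in the edit, with each point written as an (east, north) pair; treat them as illustrative.

```python
# A minimal sketch of the backward induction described above.
# Each point is an (east, north) pair: East's payoff first, North's second.

def solve(bars):
    """North picks a bar; East then picks a point on it.
    East maximises east; North, anticipating that, maximises north."""
    # East's best reply on each bar is its easternmost point.
    east_reply = {name: max(points, key=lambda p: p[0])
                  for name, points in bars.items()}
    # North picks the bar whose east-reply is northernmost.
    bar = max(east_reply, key=lambda name: east_reply[name][1])
    return bar, east_reply[bar]

# Payoffs read off the diagrams in the edit above (illustrative).
game = {"left": [(0, 1), (1, 3)], "right": [(2, 2), (3, 0)]}
print(solve(game))  # ('left', (1, 3)): East only gets 1

# East commits: a threat (only (0,1) remains on the left bar) plus a
# promise (only (2,2) remains on the right). North now prefers the
# right bar, and East gets 2 instead of 1.
committed = {"left": [(0, 1)], "right": [(2, 2)]}
print(solve(committed))  # ('right', (2, 2))
```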

How to donate to the Future of Humanity Institute as a UK taxpayer

As charitable causes, MIRI, CFAR, and FHI are the ones that seem to me to do the most good per pound given, and they are not far apart in efficiency. However, as a UK taxpayer, I can give to FHI much more efficiently than I can to the other two. I took some wrong turns in working out how best to set up my monthly donation, which meant it took longer than it needed to; here’s what I learned, so that it can be easier for you.

First of all, don’t try to use Give As You Earn. If you do this, the University of Oxford can’t tell for whom the money was earmarked, and so it ends up in general coffers. Instead, set up a donation via direct debit or credit card, and remember to tick the “Gift Aid” box.

Gift Aid means the charity reclaims the 20% basic-rate tax you already paid on the money, so if you give £60, FHI get £75 (£60 ÷ 0.8). However, if you pay into the 40% tax bracket (i.e. if your salary is more than ~£41.5k) then for your £60 you can give FHI £100. To do this, make your donation £80 and claim back the £20 from the Revenue by writing to them, something like this:

HM Revenue and Customs
PAYE and Self Assessment
PO Box 1970
Liverpool
L75 1WX

<your address, phone number and NI number>

From today, I have set up a monthly donation of £80 with Gift Aid to the University of Oxford Development Trust Fund. The Fund is an exempt charity for the purpose of charity legislation. As such, it has full charitable status; it is exempt from the requirement to register as a charity with the Charity Commission, and therefore does not have a Charity reference number. I understand that since I pay the 40% tax rate, this should mean a change in my tax code that reduces my monthly tax bill by around £20; please can you make this change?

Thanks!

I got them to change my code by calling them, but since it’s about 3 minutes navigating the phone tree and a further 20 minutes on hold, writing may save time. If you’re paid over £150k you can get even more tax back – at that salary I’m guessing you have an accountant, and they can advise you better than I can :)
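If you want to sanity-check the arithmetic for your own numbers, here’s a small Python sketch, assuming the rates discussed above (20% basic rate, 40% higher rate):

```python
# Gift Aid arithmetic, assuming the UK rates mentioned above:
# 20% basic rate, 40% higher rate.
BASIC, HIGHER = 0.20, 0.40

def gift_aid(net_gift, higher_rate_payer=False):
    """Return (what the charity receives, what the gift really costs you)."""
    gross = net_gift / (1 - BASIC)            # charity reclaims basic-rate tax
    relief = (HIGHER - BASIC) * gross if higher_rate_payer else 0.0
    return gross, net_gift - relief

print(gift_aid(60))        # (75.0, 60.0): your £60 becomes £75 for FHI
print(gift_aid(80, True))  # (100.0, 60.0): £80 becomes £100, costing you £60
```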

Thanks to purplecoffeespoons for her advice on how to sort this out! CORRECTED 2013-08-07: thanks to both purplecoffeespoons and Psyche for setting me straight on the tax bands.