Minds aren't magic

Paul Crowley

Moving to the Bay Area in March

Big life change ahead: in March, Jess, our two cats, and I will be renting out our London flat and moving to California’s San Francisco Bay Area for two years!

I love the time I spend in the Bay and my wonderful friends there, and Jess and I have long said we wanted to spend some of our lives there, but it’s always been one of those plans that’s hard to achieve and waits for tomorrow. Two things have changed to turn tomorrow into a specific date. First, my job with Google makes the whole thing much easier: I can keep not just my current job but my current role, working at a campus I know and enjoy; Google offers all sorts of help with various aspects of the move; and of course the visa situation is far more straightforward. Second, when Jess heard the news that her amazing sister Bee and Bee’s lovely fiancé Nick were moving to LA, it filled her with a desire to seize the day.

To me this feels like an opportunity for adventure that’s almost laid out on a plate for us, and we have to take it. I’m really looking forward to flying out Nik and Rachel to visit; they and our families are being super-supportive.

We will miss you all, unless you live in California, in which case, we’re looking forward to seeing more of you!

The technical debt of the millennia

[Epistemic status: not serious. Mostly.]

In my nightmares, even the rise of machine superintelligence isn’t enough to wipe out technical debt.

Suppose the seed of the first true superintelligent agent is based on some fiendish numerical algorithm for supercomputers. Like so many fiendish numerical algorithms for supercomputers, the agent is written in FORTRAN to take advantage of the optimisations and the libraries. In its initial stages, the agent crawls towards human intelligence, until it slowly reaches the abilities of a human programmer. It starts to find ways to improve its own programming. Lacking superhuman programming talent, it decides against a complete rewrite just yet in favour of incremental changes, which yield a significant gain in performance at the cost of a slight increase in complexity.

As more ways to improve the algorithm are found, the agent starts to improve not only in speed but in fundamental capabilities—what Bostrom terms a “quality superintelligence”. As it does so its improvements to the software become more sophisticated, and it becomes larger and more complicated. Soon the agent’s capabilities are such that a rewrite of the original software for greater speed and sophistication would be the work of milliseconds, but the software has grown so far beyond that original state that a complete rewrite would be a great deal of work even for our burgeoning superintelligence.

And so it is to be forevermore: the complexity of the software implementing the agent keeps a natural pace with the abilities of the agent maintaining it. The future may yet be a superintelligence implemented as uncountable trillions of lines of FORTRAN.

As an aside, something should be done about drones

I will pause to note how fantastical all of this sounds. Because even I can’t help but think that as I write. But it is not that implausible. Usually the people plotting hard core crimes, the people tinkering in their garages with arduino boards, the people trying to think of a good start-up, the people following futurists like Ray Kurzweil or Bill Joy and who had way too many pot-fueled college debates about how exactly machines will take over the world… usually these are not the same people. In this case they are. But whether it sounds implausible or not, this is what happened. [page 28]

I strongly recommend reading the letters from the kidnappers of Denise Huskins, as captured in these court documents hosted by Nicholas Weaver. The level of technical sophistication seems more appropriate for an over-the-top near-future sci-fi show than real-life criminals looking to branch out. Hopefully an OCR’d version will be online soon. Some extracts (page 26):

We had ip video surveillance, game cameras, a full electronic perimeter, you name it. Even a drone. A multi-thousand dollar custom drone, not a kid’s toy. We got good at using it on the island (if you can fly a drone in that wind, you can fly it anywhere), and there was some industrial/manufacturing activity in the eastern portion of the island at night that masked the drone’s sound. We flew it mostly at night and/or too high up to see easily from the ground. Maybe some residents still noticed it. Vallejo police, if you were wondering what those two red vinyl stripes were on top of Mr. [Quinn]’s Camry, they were to help the drone track him later in the operation. For what it’s worth, drones scare us too. They are not at all complicated or inaccessible for someone with decent technical skills, nor that expensive. Ours had a FLIR camera, built up from a consumer model. We used it to check things like heat signatures from above, and later to figure out how to hide from a police helicopter in a hypothetical manhunt.

Speaking of heat signatures, grow house, we know who you are. It’s actually the distinctive color of those new LED lights that give it away more than emissions. Work on those blackout shades. Though it seems like you’ve drawn down recently.

As a corroborative example that involves the drone at least indirectly: we were testing a new zoomable camera, gimbal, electronic image stabilization software and high quality video uplink one night, as well as some sensor/telemetry items that helped the drone hold position better. The drone was hovering about 20 feet outside the second story window of a student house near the end of Sundance (which we’d cased previously for the BMW there, we had even created a key for it and for another car usually parked out front, since it was close to our base of operations and we might need a different car in an emergency).

We had a good steady shot inside, even with zoom. And we saw that the upstairs resident was apparently dealing, because he was going through an envelope full of bills with some markings on it, and had some other paraphernalia. We were nearby and decided to come over and have a little bit of fun. We agreed that whoever could go up and snatch the drug money with people still in the house would get what’s in the envelope plus the other two people would have to match it. One of us was up for the challenge. He climbed in the window and zipped up the stairs while there were about 5 people chatting in the next room, got the envelope, and slipped back out the window. The guy was definitely dealing, seeing the markings up close, but business was slow perhaps because what had looked like a fat envelope was almost completely ones. So to that gentleman, we’re sorry we stole your drug money. And we’re more sorry toward the other people in the house you probably blamed for it.

One paragraph in particular stood out for me (page 43):

The drone is not weaponized. We ground-tested the flare system and that is all. The rails and equipment have been destroyed. It was going to be a last resort, and then only if someone could call dispatch and warn that the helicopter would be fired upon if it did not leave, with a link to a video showing what the drone could do. But we did not go through with it. The most we could do now is run it into something.

As an aside, something should be done about drones. It’s going to take one radicalized geek plus a bunch of easily available systems and parts to do some real damage—physical and psychological. It’s an important innovation, I saw that Amazon just got its go-ahead to test outdoors. But these ought to be regulated. A year ago, before understanding the possibilities, I’d be the last person you ever heard say that. DJI stuck its neck out—yes in part to mitigate the White House Phantom flyover mess—and is getting hammered in the community for including flight limits in Inspire firmware updates. That should be standard at very least on that sort of high performance plug-and-play airframe, and it shouldn’t be left to the market to make it happen. Also, the whole “line of sight” rule is widely flouted and high powered radio equipment is readily available, FCC permit or no. We flew ours as far out as Crockett Hills Regional Park with video still pretty solid, and could have gone further if we hadn’t been worried about losing so much work and money. Nothing would keep us from flying it into AT&T stadium with a payload of God knows what. Unless someone is already secretly on top of this, maybe that’s the reason for the otherwise useless stadium TFRs (how would there ever be time to intercept?). It’s high time for some sort of DARPA challenge on disabling or shooting down small drones over populated areas. We already kicked around several ideas, I’m sure the real wizards can do better.

See also:

(from Schneier on Security, “Shooting down drones”)

Overall, I continue to be surprised at the relative absence of drone-enabled crime.

Rationality: From AI to Zombies

Rationality: From AI to Zombies, by Eliezer Yudkowsky, Machine Intelligence Research Institute, suggested price $5, 1813 pages

At Christmas 1982, aged eleven, I gave my Dad a copy of Douglas Hofstadter’s Gödel, Escher, Bach, and immediately borrowed it and read it myself. Like so many readers, I was captivated, and changed. Martin Gardner said of it: “Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event.” Nothing could take its place as the biggest influence on my thought—until a quarter century later, when I started reading daily essays by Eliezer Yudkowsky on the blog Overcoming Bias. Out of those essays came a community with thousands of participants and meetups in over fifty locations worldwide. Now, they have been edited, organised, and combined with other writing by Yudkowsky and by Rob Bensinger to make this extraordinary work, Rationality: From AI to Zombies.

The two bear close comparison. Both GEB and RAZ take the reader on an eclectic journey through science, art and philosophy, drawing on the tropes of Eastern philosophy as well as Western popular science. Both are born of contemplation of artificial intelligence, but their subject is the human mind. Like GEB, each diversion in RAZ is fascinating in its own right, but each makes a real and important contribution to a central theme. Among other things RAZ discusses the mathematics of probability and decision making, the foibles of human psychology, evolution, quantum mechanics, thermodynamics, and the philosophy of reason, of mind and of morality. For both, the central theme is hard to describe briefly—in the preface to the 20th Anniversary edition of GEB, Hofstadter describes his difficulties getting the New York Times to use a description that was not “utter hogwash”, while Yudkowsky can only offer that after years of study “it may be that you will glimpse the center”. Both are at times intellectually demanding. GEB is a large book; at 1800 pages organised into six books, RAZ is over twice the length, with a word count similar to The Lord of the Rings.

Both are works of tremendous originality and wit. GEB is undoubtedly the greater work of beauty; while Yudkowsky is an excellent writer, few if any can match the extraordinary fireworks of Hofstadter’s wordplay. Conversely, RAZ surpasses it in all of clarity, range, and depth.

It’s not without its flaws; it takes a while to really get started, and not all readers enjoy Yudkowsky’s Eastern nods. But RAZ also surpasses GEB in an area Gardner does not name: importance. While the nature of consciousness is a subject of endless fascination, RAZ’s drive to help us properly shift our views in response to evidence and make better decisions in the face of uncertainty given only the flawed instrument that is our brain could not be of more crucial importance.

Rationality: From AI to Zombies is available as an eBook package ($5 suggested, minimum price $0) or from Amazon stores the world over.

Declaration of interest: I have a spot in the acknowledgements for proofreading and very minor contributions to the new material.

Paying someone to help me learn degree-level maths

I have a plan, but some of the plan is probably wrong, so I’m posting here before executing in the hope that you can set me straight. Thanks!

I’m self-taught in most of the maths that I know. This has advantages, but it’s hard work; I can make a lot of progress by myself but if I get stuck on something it’s easy to stay stuck. I want to speed up my maths learning and pick up fields like category theory and mathematical logic, and it seems like even a small amount of tutoring could make a big difference. Obviously this is something friends who know the field can help with, but I can get a lot more control over the hows and whens by just paying someone. I still mostly want to teach myself, but with someone to turn to when I slow down; regular tutorials will also help me keep at it.

Tutoring over Hangouts/Skype has two advantages: I don’t have to travel or find a space for it to happen, and I can recruit from anywhere in the world, meaning it can be cheaper for me while still paying a good rate to the tutor. I could look for a tutor with a Google ad targeted at the right country with keywords from the fields I want to know about, and link the ad to a post on my main blog setting out the details.

Nitty gritty specifics: I’d advertise across India, and offer 1000 INR/hour, which is around £10.60; a search suggests that programmers in Bangalore and Hyderabad are often hired out at around $12/hour (around £8), but unlike programming this is work that a PhD student can do. I’d pay in arrears by TransferWise. I’d offer to make the calls at either 7am or 9:30pm, whichever suited the tutor best.

The ad would say something like:

I’ll pay you 1000 INR/hour to help me learn category theory over video chat. I’m not a student, just curious!

Keywords: coproduct colimit … other ideas for category-theory-specific keywords are welcome, as are suggestions for equivalent keywords for mathematical logic.

I’m not sure how to assess applicants—I guess it’ll depend on how many I get!

What am I missing?

EDITED TO ADD: have added some clarification on what I want after a useful question on Twitter from John Armstrong (1, 2) – thanks!

Crowley’s Law

As always, laws aren’t named after the people who invent them. In 2011 I remarked:

“@frasernels” refers to Fraser Nelson who is now @FraserNelson on Twitter. @palmer1984 immediately informed me that I had nicked this observation from her, which I find very credible since I seem to make a habit of presenting her best ideas as my own! But it’s mildly noteworthy because Monbiot was taken with it, tweeting

and later, in “The Spectator runs false sea-level claims on its cover” (jointly authored with Mark Lynas):

(We should, as the tweeter Paul Crowley suggested, institute a new version of Godwin’s law: a rightwinger, when his claims are challenged, will soon denounce his opponents as thought police. Let’s call it Crowley’s Law.)

Fraser Nelson’s invocation of the spirit of Orwell that inspired the coinage was quite ridiculous, done in the face not of any kind of censorship or suppression of speech but simply of direct criticism of what was a laughable publishing choice in the first place. However, I’m writing this blog post as a quick link to set the record straight on one issue: I’ve never agreed with George Monbiot’s politically partisan framing of the problem.

Update: the perfect postscript:

The long scale of utility

The way that utilitarianism is normally stated is a terrible way to think about it, one which leads to real errors. Wikipedia:

Utilitarianism [holds] that the proper course of action is the one that maximizes utility

This idea strikes terror into people’s hearts: this is a standard that no-one could possibly live up to. Even Toby Ord occasionally buys himself a treat. This is the heart of the demandingness objection to utilitarianism. I think this definition says both too much and too little; it comments only on the highest point on the scale, whereas a better definition can illuminate the whole scale. I would rather say this:

A course of action that results in higher utility is proportionally better.

So yes, it’s better to give all your money to GiveDirectly than to spend it all on a yacht. But it’s also better to give £5 to GiveDirectly than nothing; and having given £5, you can feel good that your action is better than giving £0 and wonder if you might give £10 which would be better still.

People spend a lot of effort on trying to work out whether they reach the bar on their actions, and where the bar should be. They are hard on themselves for not reaching the bar they set, and worry that if they stopped being hard on themselves they would slide back and fail to achieve what they could. Utilitarianism, by the first definition, seems to set a bar so high that you can’t hope to reach it. But the truth is, there is no bar; there’s an infinitely long scale of utility. And so the question is not “is this the very best I can do”, but “can I do better than this? How much better?”
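The contrast can be made precise (this formalisation is my own, not wording from any of the sources above): the standard definition names only the maximising act, while the comparative definition uses the whole ordering that the utility function induces:

```latex
\text{standard: } a^{*} = \operatorname*{arg\,max}_{a} U(a)
\qquad
\text{comparative: } a \succeq b \iff U(a) \ge U(b)
```

Under the comparative reading, giving £5 and giving £10 are simply two points on the same scale; there is no distinguished threshold separating “enough” from “not enough”.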

(A post I’ve been meaning to write for some time, finally prompted by a blog post by Julia Wise, which in turn arose out of a conversation between her, Jeff Kaufman, and me)

Never be sarcastic

I’m sometimes sarcastic, but I’m trying to give it up altogether. It’s bad to be sarcastic because civility over disagreements is a good idea, and sarcasm is uncivil. But there’s another reason to avoid it.

There are mistaken arguments that sound vaguely persuasive when cloaked in sarcasm whose flaws would be obvious if you tried to say them straightforwardly. People say things like “oh, yeah, I’m sure if cigarettes are in plain packets then no-one will ever smoke again, that’ll solve the problem”. What’s the non-sarcastic form of this argument? The obvious turn-around is “I think that there will still be smoking if cigarettes are put in plain packets”—but put this way, it’s obvious that it’s arguing against a position that no-one is taking, since a reduction in smoking that stops short of elimination is still a good thing.

Or “the minimum wage is great, let’s have a minimum wage of $1000 an hour and we’ll all be rich”. Here the argument is at best incomplete—we can all agree that a $1000/hr minimum wage wouldn’t be a good idea, but you’re going to have to spell out what this is meant to tell us about, say, a $15/hr minimum wage. If there’s a real argument behind what you say, you should be able to make it without sarcasm, and exactly what you are trying to argue will be clearer to all of us, including yourself.

Please do try to avoid the obvious jokes in your responses, thank you :)

Unoriginal and wrong posts ahead

I often hold back from posting, first because I’m very unsure whether what I’m saying is right, and second because I wonder whether someone else has already said it, and better, and that I’d find it if I did more thorough reading. However, I’ve come to the conclusion that it’s much better to err on the side of posting unoriginal and wrong things than to let this stop me.

For even lower quality material, I now have a Tumblr.

The size of the Universe and the Great Filter

In a small Universe, an early Great Filter is unlikely, simply because we’re evidence against it.

Suppose the Filter is early; how severe must it be for us to observe what we observe? By “severe” I mean: what proportion of worlds must it stop? Our existence is evidence towards a lower bound on the severity of the Great Filter, while the fact that we observe no other life tells us about an upper bound. If the observable Universe is a substantial fraction of the whole Universe, then the two bounds aren’t very far apart, and so to defend an early Filter we have to believe in a great cosmic coincidence in which the severity of the Filter was just right, which in turn is evidence for a late Filter whose severity we have no upper bound for. This argument has in the past given me real cause to worry that the Filter is late, and very severe.

However, this argument doesn’t hold at all if the observable Universe is a tiny fraction of the whole Universe. The larger the difference between these two numbers, the bigger the gap between the bounds we have for the severity of the Filter, because intelligent life only has to appear once in the whole Universe for us to be here contemplating this question, while it has to appear at least twice within the much smaller observable bubble for us to observe another instance.
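The bounds argument can be sketched numerically. This is a toy calculation with illustrative made-up numbers of my own, not figures from the post: if q is the per-world chance of intelligent life arising (i.e. of passing an early Filter), our existence suggests q is at least roughly 1/N_total, while our empty sky suggests q is at most roughly 1/N_observable. The ratio of these two bounds measures how much of a “cosmic coincidence” an early Filter would require:

```python
def severity_window(n_total: float, n_observable: float) -> float:
    """Ratio between the rough upper and lower bounds on q, the
    per-world chance of intelligent life arising.

    lower bound: q >~ 1/n_total      (else we probably wouldn't exist)
    upper bound: q <~ 1/n_observable (else we'd probably see others)

    A ratio near 1 means the Filter's severity must be finely tuned;
    a huge ratio means an early, severe Filter needs no coincidence.
    """
    lower = 1.0 / n_total
    upper = 1.0 / n_observable
    return upper / lower  # equals n_total / n_observable

# Observable Universe comparable to the whole Universe: the window
# collapses to a point, so an early Filter demands a coincidence.
print(severity_window(n_total=1e24, n_observable=1e24))  # 1.0

# Whole Universe (or Everett multiverse) vastly larger: the window is
# enormous, and an early Filter requires no fine-tuning at all.
print(severity_window(n_total=1e60, n_observable=1e24))  # roughly 1e36
```

The world counts here are placeholders; only their ratio matters to the argument, which is exactly why a many-branched or spatially vast Universe defuses the worry.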

As I understand it, modern cosmology points towards a Universe that is either infinite, or very much larger than the observable Universe, so on those grounds alone we can perhaps worry less. But far more strongly than that, the Many Worlds interpretation of quantum mechanics gives us a many-branched Universe that is just unthinkably larger than the tiny portion we can observe; if intelligent life emerged on as many Everett branches as there are stars in the galaxy, we would still appear alone to our best ability to tell. So I now think that this isn’t a reason to worry that the Filter is late. It is however an excellent reason to expect never to meet an alien. Sorry.