Saturday, July 12, 2014

Systems 1?

[These ideas were inspired by/stolen from Nate Soares, aka So8res, in an ongoing email conversation about the Dark Arts of Rationality.]

Summary: There's more than one thing we might mean by "System 1", and the different referents require different rationality techniques.
___________________________________________


I went skydiving once. On the way up, I was scared. Not as scared as I expected to be, but more scared than I thought I should have been. I believed at the time that there was about a 0.0007% chance of dying in a skydiving accident.* In other words, if I and around 150,000 other people all went skydiving, about one of us would die. And that's before taking into account that I was jumping with an expert.
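(For concreteness, here's the arithmetic behind that figure, as a minimal sketch in Python. The 0.0007% is just the number I believed at the time, not a vetted statistic.)

```python
# Back-of-the-envelope check of the risk figure I believed at the time
# (not a vetted statistic).
p_death = 0.0007 / 100            # 0.0007% expressed as a probability
jumps_per_fatality = 1 / p_death  # expected number of jumps per fatality

print(f"{jumps_per_fatality:,.0f} jumps per expected fatality")
# -> about 143,000, i.e. on the order of 150,000 people
```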

Part of me knew this. Otherwise, I wouldn't have gotten into the plane. But part of me didn't seem to know it, and I knew part of me didn't know it, because I was seriously wondering whether I'd have the guts to jump on my own.** I wanted all of me to understand, so I could experience the excitement without the fear.

So I tried picturing 150,000 people all jumping out of planes. (Dictionary of Numbers helpfully informs me that that's about the population of Guam.) It helped. Then I called to mind the people I knew who had been in car crashes, and remembered how willing I was to climb into a car anyway. My methods weren't as sophisticated then, but I was basically seeking what I've recently been calling a "System 1 handle" to arm System 2's abstract understanding with a clear emotional impact. It was enough, though, to calm my nerves.

We lined up. I was calm. The door opened, and the wind roared. I was calm. The pair in front of me jumped. I was calm. 

The floor disappeared from beneath me. It took me about two seconds to regain enough composure to scream.

Dual process theory is mostly just a quick-and-dirty way of framing cognitive processes. Speaking as though "System 1" and "System 2" are people in my head with very different personalities lets me apply a bunch of useful heuristics more intuitively. I've been fairly good about keeping track of the reality that they aren't *people*. I've been less good about guarding against a false dilemma.

The framing tracks something like "degrees of deliberation". But there's a lot more in that continuum than "very deliberative" and "very not deliberative". I think I've been treating everything below some point on the line as "System 1 processing", when really that's just "everything I don't think of as System 2".

There seem to be (at least) two natural clusters in the "not System 2" part of the spectrum that might call for different treatment. During skydiving, one cluster responded to vivid, concrete examples. The other cluster was too simple, too instinctual, to get a grip on even that. The link between "ground disappears" and "freeze in terror" was too basic to manipulate with the kind of technique I was using. The "oh shit I'm falling" process is a different animal from the one responsible for "this is dangerous and therefore I'm going to die".

The "System 1 translation" techniques I've been writing about are meant to deal with the part-of-yourself-that-you-argue-with-when-you're-trying-to-convince-yourself-to-do-something-difficult, and the part-of-yourself-that-needs-to-remember-important-details-but-doesn't-care-about-numbers-or-other-abstractions. The part that's anxious about the jump and doesn't understand the odds.

But I'm not sure S1 translation does much of anything for the part that panics when you pull the ground out from under it. To deal with that part, I think you probably need tools more along the lines of exposure therapy.

When you're in a car driving on icy roads and you start to slide, the best way to regain control is to steer into the skid and accelerate slightly. But most people's instinctive reaction is to slam on the brakes. I've tried to understand why the steer-into-the-skid method works so I could translate that understanding into the language of System 1, and while I've not thought of anything I expect would work for most people, I've got something that makes sense to me: When I'm on icy roads, I can imagine that I'm in a Flintstones car, with my feet scrambling against the ice. If I were in a Flintstones car, my immediate reaction to sliding would be to run a little faster in the direction of the skid in order to gain control. I figure this is probably because I spent some of my childhood playing on frozen ponds, so I wouldn't suggest that translation to just anyone.

But I doubt it would work no matter how robust the translation. The part of my brain that panics and slams on the brakes is more basic than the part that's scared of the whole idea of skydiving, or that resists checking the balance of my bank account. I'm not sure it can be reasoned with, no matter what language I use.

To ensure I do the right thing when driving on icy roads, a much better plan would be to find some way to practice. Find an icy parking lot, and actually expose myself to the experience over and over, until the right response is automatic.

I'm not sure about this, but I'd be at least a little more surprised if S1 translation worked for ice driving than if it didn't. If I'm right, lumping together all the "System 1 techniques" and using them on anything that's "not System 2" can be dangerous. If this is a real distinction, it's an important one for applied rationality.

___________________________________________


*I still believe that, but with less confidence 'cause I'm better calibrated and recognize I haven't done enough research.
**With our setup, I was actually hanging from a harness attached to the expert by the time we were about to leave the plane, so my feet weren't on the floor and I didn't get to jump on my own. Still kinda pissed about that.

Thursday, June 5, 2014

Growth Mindset Forest: A System 1 Translation

Related Posts: Urge Propagation In Action, The Most Useful Mnemonic Technique, A Stroll Through My Palace, Ars Memoriae
________________________________________________

I was once counseling a friend at the end of a CFAR workshop. Unsurprisingly, she had a zillion ideas running around in her head, and she was afraid they'd all vanish a week after she left. "Even the most important ones," she told me, "are so full of insight and meaning right now, and I think that even if I remember the basics of what they are in a week, I won't remember why they're so important. They won't keep their effects on my patterns of thought and emotion."

"What's the most important thing you learned this weekend?" I asked her. 

"I must cultivate a growth mindset," she declared. I could feel the strength of that idea resonating through her in her voice and body language as she said the words. In my view, the most important thing I could do for her was to make sure she had access to that feeling when she needed it most.

"Why should you cultivate growth mindset?" I asked. "What goals does it accomplish?"

"Well, sort of all of them," she said. "I have goals like graduating college, improving my relationships, and being more agenty in general. In the past, depression has gotten in the way, and I've been very fixed mindset about it, thinking I could never get any better because I was a Depressed Person and I might as well give up. If I feel like I can grow and change out of depression, all those other things are a lot more likely to happen. With fixed mindset, I'll just go with the default, never exceeding my expectations for myself or becoming stronger than I already am."

"Awesome. Break it down: What are the ideas at play?" 

We identified five central parts of her insight*: 'the process of growth', 'the feeling when you're tempted by fixed mindset', 'the unwanted outcome of remaining in fixed mindset', 'the desired outcome of adopting growth mindset', and 'the causal link from a single instance of resisting fixed mindset to becoming stronger'.

"Perfect," I said. "Time to translate this into The Language of System 1. You know the drill!"

How To Translate


1. Concretize


  • Associate with the process of growth: A green and growing tree that expends time and effort but keeps going. 
  • Keeping with the tree theme, associate with 'the feeling when you're tempted by fixed mindset': A brain carved from a tree stump made of dry, brittle wood that can never grow. 
  • The unwanted outcome of staying in fixed mindset: The brain stump is dead and rotting away. 
  • The predicted outcome of employing growth mindset consistently: A giant redwood tree reaching up through the canopy to the sunlight, too tall and strong to stay hidden in the darkness or to fall in a storm. 
  • The causal link from a single instance of resisting fixed mindset to becoming stronger: New growth emerging out of the rotting stump.

2. Exaggerate

The tree is so tall, it reaches up through the clouds! Not one tree, but a whole forest. A new tree for every skill, new growth from another dead stump every time I try something new, a downpour of nourishing rain soaked up through the roots when I risk failure to reach beyond my current abilities, brilliant sunlight radiating from persistent practice onto every leaf in the forest!

3. Use More Senses

Not just an image of this forest. I feel the upward stretching in my body when I try a little harder. I feel warm sunlight, cool rain, and wind through the branches as the trees bend without breaking. I hear the downpour, and the music of happy birds in the branches. I smell the rotting of the dead stumps, and the fresh scent of healthy green life when they begin to grow again.

4. Engage Personally

I am this forest, of course. The trees are vaguely shaped like my body. I am the one reaching up, and I make my branches dance in the breeze to the birdsong. I call forth the sun and the rain, and I decide to make dead stumps nourish new growth. In the brain stumps, I am curled up inside, hiding.

5. Tell a Story

Right now, the whole landscape is covered in dry tree stumps. But here I am, one lone growing tree, and nothing will stop me from reaching the sky. Whenever I feel a tree stump rotting, I'll reach upward with new growth, I'll water the ground, and over time, my whole world will be a lush forest.

Checking Your Work


"So what do you think," I asked. "Is this translation strong enough to affect your actions a month down the line, six months, a year? Does it have an emotional impact that'll hit you every time?"

"Hell yes."

Results


After writing this, I checked back in with the friend to find out whether she still uses the Growth Mindset Forest, whether it works, and whether she's changed anything about it. I committed to publishing whatever result she reported.

Turns out she's still using this four months later, and it works well! She's changed the name, though: She's now calling it "Frondescence", which I completely love. She says it's one of the few things powerful enough for her to use when she's in a place of hopelessness and despair. It doesn't totally solve the problem every time, but it seems to help at least a little in most cases, and often it produces large positive results.

Specific example: She got a low grade on a test and was tempted to be all gloomy about it and give up. But instead, she used Frondescence and kept working so she could get a better grade next time. Yay!


________________________________________________

*This motivation-hacking technique, and especially the identification of goals and the effects single actions have toward accomplishing them, is inspired by a method CFAR calls "Propagating Urges" (though the method and name evolve over time). The idea of "System 1 Translation" was inspired by the book Made To Stick by Dan and Chip Heath.

Thursday, May 22, 2014

A Message to System 1

I used to be afraid of checking the balance of my bank account. I felt as though finding out how much money I had caused me to lose money, so I'd go weeks, sometimes a month, without checking it--even though my income was tiny and irregular. I'd feel guilty almost any time I bought anything, which led to bizarre spending patterns where I'd go for a while eating nothing but rice and beans, then suddenly spend way too much because hey, if I'm doomed anyway for having bought this one unnecessary thing, I might as well enjoy myself before reality catches up with me.

Not surprisingly, when I finally got around to checking my balance, it was usually frighteningly low. Which, of course, my brain took as punishment for checking my balance, and the cycle continued.

I finally confronted this a little less than a year ago. Though it hurt a lot to poke at the problem, I reasoned like this: In reality, checking my balance causes me to gain money. Nobody's paying me directly for logging into my account, but having accurate beliefs about the resources available to me allows for far more efficient, and not completely insane, spending patterns, and therefore a higher balance on average. Additionally, it's really dangerous in general to allow myself to cling to false beliefs, regardless of how comforting they may be. (This was probably inspired by Anna explaining that paying parking tickets on time is equivalent to cashing a check in the amount of a late fee.)

But understanding this in an abstract, System 2 way was nowhere near enough. It didn't actually change my behavior at all, because on an emotional level, I remained strongly motivated to avoid checking my bank account. The important work done by reasoning through things was to recognize that I really did care about having money and not lying to myself, and that checking my account balance would lead to those larger goals.

Inspired by techniques I learned in a CFAR workshop, I knew that my next step was to explain to System 1 why checking my bank account leads to something I really want. After caching that snapshot message in memory, I'd be able to invoke my System 1-optimized explanation every time I noticed "this would be a good time to check my bank account" and felt myself trying to bury the thought.

Before me (where "me" is usually played by Duncan MacLeod of the TV series Highlander), I'd imagine an ominous-looking lock on a Gringotts-style bank vault. A broadsword is strapped across my back. The lock represents "clinging to comforting beliefs about my finances", and it stands between me and all the riches behind that door.

Focusing on the feeling of wanting to remain ignorant, of wanting to pretend everything is ok regardless of the truth, I draw my sword. I prepare to strike, raising the sword, calling to mind relinquishment: "That which can be destroyed by the truth should be... the thought I cannot think controls me more than thoughts I speak aloud." Remembering how it feels to let go of ignorance, I let the sword fall, slashing right through the lock. It drops, broken, to the stone floor, making an amplified echo of the "click" from the enter key of my keyboard as it clatters across the ground. Slowly, the door begins to open.

In the meantime, since I've taken the head of my enemy, the Highlander quickening begins. (I'm MacLeod, remember?) The quickening is, well...

[video: a Highlander quickening]

Some background: In Highlander, an "immortal" can kill another immortal by cutting off his head. When that happens, all the knowledge and power of the dead immortal is transferred to the victorious immortal. The transfer is called a "quickening", and it basically looks like a giant lightning storm focused on the winner.

Anyway, knowledge is power, so this knowledge storm thing happens while the vault door opens. When it's all over, I enter the vault to look upon my hoard of gold pieces and jewels so sparkly they would make a dragon jealous.

If that seems a whole lot more intricate and over the top than you'd expect me to need for something as simple as "check my account balance", you've got to remember I was trying to blast through this almighty ugh field that had crippled me for years. Usually, going straight for this System 1 translation technique isn't recommended when there's a solid ugh field in the way, since there are other techniques (like aversion factoring) you can use to break those down a little at a time. But I've found that it often works just fine as long as your translation is solid and your message is even stronger than the ugh field. Powerful ugh field, powerful message. Subject doesn't really matter. Plus, the basic idea of the quickening ended up serving perfectly as a general purpose translation for relinquishment itself later on.

And... it totally worked! I checked my balance multiple times a week, and experienced no more pain than my actual financial situation warranted. I ended up with accurate beliefs about how much I'd spent and how much I had left.

Moreover, here's what prompted me to make this post: I just checked my balance, and for the third or fourth time in a row, I was surprised to find more money there than I expected. I think this is because I'm so used to discovering I've drastically overestimated my balance that the new urge to know my real balance causes me to update right away to something resembling what the truth should be given past experience. But my spending patterns have improved, as predicted, so now I really do have more cash in my account on average than my past experiences predict!

I certainly haven't amassed vast piles of gold, but System 1 isn't so great with quantities anyway, and it understands "room full of shiny things I can exchange for chocolate" much better than "large percent increase in available funds".

Monday, May 12, 2014

The Most Useful Mnemonic Technique

The other day, I was talking to someone about potential applications of biometrics to gaming and web based education. He mentioned a really interesting study I'd never heard of before. Roz Picard and her students have figured out how to track someone's emotions through heartbeat and respiration via webcam using changes in skin tone as blood circulates. I definitely wanted to look it up later. As I repeated back "Roz Picard?" to make sure I had it right, I made a mental note with the name and a brief description of the study, situated it in my memory of the restaurant where the conversation took place, and associated it with the trigger of opening my laptop.

If I'd not already had a fair amount of experience with the art of memory, it would have been much easier to whip out my smartphone and drop it in my Workflowy right then, and it would have been worth the slight disruption to the conversation. Given how many people object that they could "just write it down" when I mention mnemonics, this seems worth updating on: with practice, mnemonic techniques get quick and easy. Storing it in my brain cost less time and attention.

I'm going to sketch roughly what happened in my head when I made that mental note, because I want to illustrate the most foundational principle of the art of memory--a principle I've never once seen laid out explicitly in anything I've heard or read about mnemonics. (Why??? I'm not quite sure. It's very frustrating.)

The most practical insight I've gained by studying mnemonics is this: System 1 runs your memory, and it does not speak English. If you want to convince System 1 to remember something System 2 thinks is important, you have to translate it into the language of System 1. For the same reason you would not train a dog to sit by carefully explaining in words how to execute the procedure of sitting, repeating "remember about Roz Picard and biometrics" should not be your go-to method when you want to remember or learn.* System 1 is in charge of your memory, and it does not care about your proper nouns and abstract concepts.

Here's what System 1 does care about. It likes things that are concrete, emotional, multi-sensory, vivid, dynamic, personally engaging, and story-like.

So I translated the content System 2 flagged as important into the language in which System 1 could actually store it. I imagined the very fluffy black hair of my friend Roz, and stuck it on my mental image of Jean-Luc Picard (to encode the name). I imagined his face flashing bright red, then white, and back again as his facial expressions cycled through intense joy, sadness, and anger while he laughed, cried, and yelled (to encode "you can measure emotions by monitoring heartbeat by watching change in skin tone"). To make sure I accessed the memory when I could do something useful with it, I imagined that big fluffy black hair protruding out from between my monitor and keyboard as I opened my laptop, and then Picard's color-changing, emotional face hovering in front of the screen. Finally, to increase the odds I'd simultaneously access other potentially relevant memories associated with the context of the conversation, I imagined Picard's head rolling off of my laptop--which is now sitting in the restaurant at the very table where the conversation happened--and knocking over my glass of wine, which then spills all over my conversation partner.

Because it's how I operate in real life and I wanted to give a real example, there were other things going on in that mental note besides "translate for S1". But the main thing I want to point to is the translation of "Roz Picard" and of why she matters. The central image is concrete; you could pick up that head and use it like a bowling ball if you wanted. It is clearly emotional. It is multi-sensory because you can feel the fluffy hair, you can hear Picard's voice, and you can see the changing colors. It's fairly vivid, since Roz has some pretty big and interesting hair. It's dynamic, since the colors change and the emotions cycle. The basic image isn't personally engaging, though you could easily make it so by putting yourself behind a video camera that is taping the color changes for the study; in the expanded version, I'm opening the laptop myself. The basic image isn't especially story-like either, but the trigger-action technique employed in the expanded version makes that part automatic (I open the laptop and the head rolls across the table and knocks over wine that spills on my friend).

So next time you want to remember something--or learn an abstract concept or skill--notice when it's mostly System 2 doing the talking, and see if you can explain in System 1 terms instead. It takes practice and maybe training to get really good at this, but I bet you'll see big results from small preliminary efforts if you give it a try.

*Yes, repeating things strengthens associations via classical conditioning, but you can do orders of magnitude better than that.

Monday, May 5, 2014

A Dialogue On the Dark Arts

Doublethink

It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Plato, Republic IV
Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?

To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.

But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".

When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.

Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.

It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.

Against Doublethink

Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.

There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of non-contradiction. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.
Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)
What are beliefs for? Please pause for ten seconds and come up with your own answer.

Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.

My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led to the action of researching the ticking noise, planning some simple tests, and will probably lead to cleaning the sticky lifters.

If it's true that solving the ticking noise will keep my car running, then my beliefs will cache out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.
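To make the beliefs-as-inputs-for-predictions framing concrete, here's a toy sketch in Python. None of these numbers come from anywhere; the probabilities and costs are invented purely to show how a belief feeds a simulation that scores actions.

```python
# Toy model: a belief is a probability fed into a simulation that scores actions.
# All numbers below are invented purely for illustration.

p_breakdown_if_ignored = 0.6   # belief: P(car eventually fails | ticking ignored)
p_breakdown_if_fixed = 0.1     # belief: P(car eventually fails | ticking addressed)

cost_of_breakdown = 3000       # dollars, made up
cost_of_fixing = 200           # dollars, made up

def expected_cost(p_breakdown, upfront_cost):
    """Predicted average cost of an action, given a belief about the outcome."""
    return upfront_cost + p_breakdown * cost_of_breakdown

actions = {
    "ignore the ticking": expected_cost(p_breakdown_if_ignored, 0),
    "fix the ticking": expected_cost(p_breakdown_if_fixed, cost_of_fixing),
}

best_action = min(actions, key=actions.get)
print(best_action, actions)  # the belief determines which action the simulation favors
```

Swap in a different belief (say, "ticking noises are harmless") and the same simulation recommends a different action, which is the whole point: the quality of the output depends on the truth of the input.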

Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.

Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.

Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.

Against Against Doublethink

What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?

It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.
I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was *basically* just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.
Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates
If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.

Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.

For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.

Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."

At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.
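(If it helps to see the structure of the trick, here's a rough software analogue, purely illustrative: the real version happened in my head, and the half-hour shift below is just the gap from the flight example.)

```python
from datetime import datetime, timedelta

def remembered_deadline(true_deadline: datetime,
                        buffer: timedelta = timedelta(minutes=30)) -> datetime:
    """Return the earlier deadline to remember; the true one then gets 'hidden'."""
    return true_deadline - buffer

true_flight = datetime(2014, 5, 12, 14, 30)  # an arbitrary date; the 2:30 flight
print(remembered_deadline(true_flight))      # plan as if it leaves at 2:00
```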

I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix; in important cases, I started receiving Boomeranged notes from past-me around the time present-me expected things to start that said, "Surprise! You've still got ten minutes!"

This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.

Doublethink, and the "Dark Arts" generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.

Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?


Against Against Against Doublethink

There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if it doesn't get you what you want. This has always been very obvious to me, and it remains so.

There is a bigger picture.

Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury.

As soon as I finally decided to abandon the Dark Arts--as an experiment--I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.  

The first thing I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.

Shortly after I finally gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."

It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.

I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.

Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"

Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because there are no disruptions, acting on confusion is becoming automatic.

I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"

(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great. 

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.

That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.

It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.

Excellent epistemology is not the natural state of a human brain. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.

This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging. 

And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.

So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.

But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.

Friday, April 25, 2014

Observing Cthia

I have a pretty awful memory. I've installed all the memory techniques I teach at workshops to mitigate the damage of this. But all the work is done on the encoding end rather than the recall end, so things that happened before I started studying mnemonics, or that I simply fail to encode skillfully, are largely lost to me. 

One of the upsides is that I can read books several times and be surprised by each plot twist again and again. I usually feel a sort of comfortable familiarity when I re-read a book, but that is very often the closest thing to a memory of past readings I retrieve. An effect of that particular phenomenon is that I sometimes completely forget major intellectual influences, and really have no idea how I came to think the way that I do. But I read constantly as a child and teenager, so I know the majority of it has come from books.

For the past few days I've been reading a familiar-seeming Star Trek book called Spock's World, by Diane Duane. I was not completely certain until today that I had in fact read it before.

I was sort of stunned by a particular passage and wanted to share it, because it seems to encompass--and, given I must have read it as a teenager, foreshadow--so much of what's been going on in my life recently. Though this isn't canon, the Vulcans really are rationalists in at least some versions of the Trek universe. I think adopting the term discussed may make my daily life slightly more efficient and meaningful.

[Spoilers: I give away some of the plot of Spock's World below. But honestly, it's not exactly a plot-driven novel, so I wouldn't worry too much.]

Background: Vulcan is considering withdrawing from the Federation, and Sarek, Spock's father and Vulcan's ambassador to Earth, has been called back by T'pau to speak in favor of withdrawing. At this point, he has relatively little information about T'pau's motives and reasoning, so he's not decided whether to oblige her or to resign and be exiled. Upon meeting with members of the Enterprise, the following conversation ensues. [Emphasis mine.]
"This I will say to you Captain: I find being forced to speak against the planet of my embassage immensely distasteful, for reasons that have nothing to do with my history there, my marriage, or my relationships with my son and Starfleet. My whole business for many years has been to understand your peoples and to come closer to them; to understand their diversities. Now I find that business being turned on its ear, and all the knowledge and experience I have amassed being called on to drive away that other diversity, to isolate my people from it. It is almost a perversion of what my career has stood for." 
"But if you feel you have to do it," McCoy said softly, "You'll do it anyway." 
"Of course I will, Doctor. Here, as at many other times, the needs of the many outweigh the needs of the few. What if, as the next few days progress, I become certain that my own people would be more damaged by remaining within the Federation than by leaving it? Must I not then preserve the species of which I am part? But the important thing is that this matter be managed with logic." He blinked then, and spoke again, so that a word came out that did not translate. "No. Cthia. I must not be misunderstood. Cthia must rule this, or we are all lost." 
Jim looked puzzled. "I think I need a translation. It's obviously a Vulcan word, but I'm not familiar with it." 
Amanda [Sarek's wife] looked sad. "This is possibly the worst aspect of this whole mess," she said. "It's the modern Vulcan word which we translate as 'logic'. But what it more correctly means is 'reality-truth'. The truth about the universe, the way things really are, rather than the way we would like them to be. It embraces the physical and the inner realities both at once, in all their changes. The concept says that if we do not tell the universe the truth about itself, if we don't treat it and the people in it as what they are--real, and precious--it will turn against us, and none of our affairs will prosper." She sighed. "That's a child's explanation of the word, I'm afraid. Whole books have been written trying to define it completely. What Sarek is saying is that if we don't handle this matter with the utmost respect for the truth, for what is really needed by everyone involved in it, it will end in disaster." 
"And the problem," McCoy said softly, "is that the truth about what's needed looks different to everybody who faces the situation..." 
Sarek nodded once, a grave gesture. "If I find that I must defend the planet of my birth by turning against my many years on Earth, then I will do so. Alternately," he said, "if I can in good faith defend the Federation in my testimony, I will do that. But what matters is that cthia be observed, without fail, without flaw. Otherwise, all this is useless."

Wednesday, April 23, 2014

Make Rationality Delicious

I've been thinking about Eliezer's suggestion to "Leave a Line of Retreat". The gist is this: Scary possibilities can be hard to think about, but it's easier to consider evidence for and against them once you know how you'd respond if the scary thing turned out to be true. In his words:
The prospect of losing your job, say, may seem a lot more scary when you can't even bear to think about it, than after you have calculated exactly how long your savings will last, and checked the job market in your area, and otherwise planned out exactly what to do next. Only then will you be ready to fairly assess the probability of keeping your job in the planned layoffs next month. Be a true coward, and plan out your retreat in detail—visualize every step—preferably before you first come to the battlefield.
Maybe we should practice finding lines of retreat in random situations occasionally. Then when we go to do it in a situation where we might actually need to retreat, our brains will be less likely to go, "Hey now, I see what you're up to." Suppose that every time you consider the question, "What would I do if the scary thing were true?" you end up facing the scary thing for real immediately afterward. Then you're classically conditioning yourself to not look for a line of retreat.

For example, I walked into an ice cream shop today*, and before entering I was already considering which flavor to get (which for me means weighing all the alternatives against chocolate). Because I happened to recognize the opportunity, I practiced leaving a line of retreat by asking, "Oh no, what if they don't have chocolate?" and answering, "Well, I'll either get vanilla instead, or I'll go to a different ice cream shop." Then I ordered chocolate.

Unlike in most cases where it's important to apply this skill, there was no reason to suspect they wouldn't have chocolate in the first place. So instead of applying the technique and then experiencing the punishment of actually settling for vanilla, from a classical conditioning perspective, I was immediately rewarded for my practice session with chocolate ice cream.

It's great to recognize a difficult rationality technique as wise, virtuous, and resulting in positive outcomes in the long run. But on the level of moment-to-moment decisions, my actual behaviors are much more strongly driven by chocolate than wisdom. Ideally, I'd also be driven by chocolate to be rational, right?

*This is a lie. What I actually walked into was a parable.