Things of Interest and Fascination - A Complement to Wedellsblog Books

March 23, 2007

Moving day: New blog address

I have moved this blog to a new address - it is now hosted at http://fragmentsofknowledge.wordpress.com/

I will not be using this blog address anymore, so please update your links and feeds to the new site.

To keep track of my other blogs, I have also created a personal website where I gather the feeds of all my different blogs - visit it at http://wedellsblog.wordpress.com/

January 21, 2007

Microfinance

Cool website that allows normal people to do microfinance:

www.kiva.org

I think there could be a business idea in creating a similar site for ordinary entrepreneurs. Forget about getting 20 million dollars from venture capital firms or business angels - rather, find projects that can be started for less than 20,000 dollars, and post them on a similar site, so ordinary people can invest in them. While some work would be needed to prevent abuse of the system, I am sure it is feasible.

October 21, 2006

The Reusability of Words

Like radioactive atoms, words have a half-life. Or rather, they have a recharge time: the time (or length of text) from the first time you use a word until you can use it again without vexing the reader.

Basically, when people read a book, they are not very conscious of the actual words and letters – they generally focus on the meaning that the words convey. But if the writer starts using the same words too much, the readers are torn from their state of absorption, as they suddenly become conscious of the actual words – something that also happens with grave spelling mistakes. To me, ignoring the rules of reusability is a sign of sloppy writing (disregarding the times when writers use the effect intentionally, as in poetry).

The most ordinary words have a very short half-life; they are highly reusable. Words like ‘you’, ‘me’, ‘man’, and ‘dog’, for instance, can be used constantly without fear of disrupting the reader's mental flow. Only in the cases where these words are repeated – like the construction ‘had had’, which is legitimate but habitually avoided by editors – will they risk perplexing or annoying the reader.

Most other words are somewhat less reusable. ‘Subtle’ needs a few paragraphs, maybe a few pages, before it once again passes below the reusability radar. You can use ‘flummoxed’ or ‘insidious’ more than once, but there has to be a good bit of space between them. ‘Lugubrious’ needs at least a chapter, if not more. The rule seems to be that the more unusual the word, the less you can repeat it. (By the way, there is probably an interesting conclusion to be drawn here involving Zipf’s Law, but I’m not sure what it is.)

One of the best low-reusability examples I know is the word ‘nadir’. Nadir, normally used figuratively to describe the low point of something, e.g. ‘the nadir of my high school years’, is a beautiful expression. (Its better-known opposite is ‘zenith’, the high point of something – both words come from Arabic, from the field of astronomy.) But use nadir twice in the same book, as occurred in a story I just read, and it hits the reader with all the linguistic subtlety of a truckload of bricks. It seems, well, clumsy. Nadir is just one of those once-a-book words.

The same goes for some idioms and self-crafted expressions. On page 14 of 'Stiff', Mary Roach's otherwise excellent book about what happens to the human body after we die, she uses the expression ‘a conversational curveball’ to good effect. But when, 160 pages later, she described something as ‘a philosophical curveball’, it stopped me dead in my tracks. Using the curveball metaphor twice, even that far apart, seems like an editorial oversight.

Jasper Fforde, an author who does interesting things with the English language, thought up a new device for use in literary settings, which he called an echolocator. It was a tool that scanned texts and warned the editor if the same word was used twice within 30 words of each other. I don't know why he chose the number 30, but in the same vein, it could be interesting (and utterly pointless, but interesting things often are) to create a quantitative index of the reusability of the words in the English language.
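Fforde's echolocator is fictional, but the idea is easy to sketch. Here is a minimal, hypothetical version in Python; the 30-word window comes from the text above, while the rule for skipping short, highly reusable words (anything under four letters) is my own assumption:

```python
import re

def echolocate(text, window=30, min_len=4):
    """Warn when the same word reappears within `window` words.

    Very short words (under `min_len` letters) are ignored, since
    highly reusable words like 'you' and 'the' repeat constantly
    without vexing anyone.
    """
    words = re.findall(r"[a-z']+", text.lower())
    warnings = []
    last_seen = {}  # word -> index where it was last seen
    for i, word in enumerate(words):
        if len(word) < min_len:
            continue
        if word in last_seen and i - last_seen[word] <= window:
            warnings.append((word, last_seen[word], i))
        last_seen[word] = i
    return warnings

text = "The nadir of the trip came early, and a second nadir followed soon after."
print(echolocate(text))  # [('nadir', 1, 10)]
```

A real reusability index would presumably scale the window with word frequency – ‘lugubrious’ getting a whole chapter while ‘subtle’ gets a page – but a fixed window captures the basic idea.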

September 13, 2006

Follow-up: Musical Redemption

Re. the phenomenon described in the 'Musical redemption' post below, I thought of another interesting manifestation of the same thing. It happens when I write notes to myself.

I normally walk around with a couple of blank record cards in my pocket. Whenever some stray thought hits me, I write it down. Sometimes, if I'm really enthusiastic about an idea, I put lots of exclamation marks after it, double-underline it, that kind of thing.

Two weeks later, when I pull out the record card again and look it over, I am completely clueless as to what some of my own notes mean. I look at a note and think "Now what the hell was I thinking when I wrote that?" I literally cannot guess or remember what the idea was, based on my disjointed scribblings.

What I think happens here is exactly the same thing as in the musical experiment, where the sender 'fills out' the communication with details in his own head. Me-in-the-past writes something down that makes perfect sense at the time, based on the tapestry of thoughts in my head as I write it. Two weeks later, when me-in-the-future reads the note, the background thoughts are not there to inform the reading, making it a lot harder to remember what the note was supposed to mean. Me-in-the-past simply fails to see that the sentence "frame publish - slush!" will not necessarily be clear to me-in-the-future. For this reason, I have now started to write (what seems like) overly extensive notes to myself, with some success.

This, by the way, is an instance of something I find fascinating, namely intrapersonal communication - inTRApersonal, as in communicating with yourself. It is an entirely underestimated and under-researched area within the field of communication studies (where I originally come from). I actually wrote a brief 20-page university paper on intrapersonal communication back in 2003, but be warned, it's in Danish. I might go more into this subject in a later post.

September 03, 2006

The Problem With Procrastination

We all know that procrastination is a bad thing.

We really shouldn’t be doing it: putting off till tomorrow problems that we could be dealing with today. Action is good. Being proactive about things is even better. An immense amount of praise flows towards the employees who are proactive, dealing with issues before they become problems.

Now, there’s something slightly disconcerting about all this proactivity. Dealing with problems before they become problems? You have to wonder how many man-hours are being wasted on issues that were never actually going to become a problem in the first place. “Good news: I’ve been proactive about our polar bear problem” - “Err... are we going to have problems with polar bears?” - “Well, not now that I’ve been proactive about it, obviously.”

The fact is, human beings are prone to procrastinate because it surprisingly often works. I literally cannot count the number of problems I have solved by the simple action of ignoring them completely. Some problems solve themselves, given time. Other problems turn out not to be problems after all. And even real and persistent problems will sometimes get fixed by a person with a lower tolerance for impending doom.

Also, procrastination has a wonderful ability to make all of your other, slightly less unpleasant tasks seem positively rewarding in comparison. Would the desks of your employees ever get cleaned if it wasn’t for that nasty report they are trying to avoid getting started on? Mine wouldn’t. In fact, I’m normally quite productive when I am procrastinating, this post being a good example.

So, when to procrastinate, and when to be proactive? A general rule is that you should be proactive about an issue only when the cost of the proactive measures is lower than the cost of dealing with the problem later, multiplied by the probability that the problem will actually occur. Say you have a 50 percent chance of having to do a 10,000 dollar repair operation: you should be proactive about it only if the cost of being proactive is lower than 5,000 dollars. That way, you will tend to win out in the long run.
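The rule above fits in a couple of lines of code. A minimal sketch (the function name and the strict inequality at the break-even point are my own choices):

```python
def should_be_proactive(proactive_cost, repair_cost, problem_probability):
    """Be proactive only when acting now is cheaper than the
    expected cost of dealing with the problem later."""
    return proactive_cost < repair_cost * problem_probability

# The example from the text: a 50 percent chance of a 10,000 dollar repair,
# so proactive measures are worth it only below 5,000 dollars.
print(should_be_proactive(4000, 10000, 0.5))  # True
print(should_be_proactive(6000, 10000, 0.5))  # False
```

This is just expected-value reasoning; as the next paragraph notes, it only holds when the estimates are reliable and the stakes are survivable.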

Of course, this rule works only when you can reliably estimate both the costs and the probabilities involved – and when there is a long run, i.e. when the issues at stake are not life-threateningly big for your company. If it is a one-off situation where a bad outcome will destroy your company, it may make sense to err on the side of caution, if only to attain peace of mind.

September 01, 2006

Musical Redemption + Angry Email Syndrome

I can practically never make my friends recognise the tunes I sing to them.

If I try humming the latest radio hit, I will receive perplexed looks from them, followed by general sniggering and good-natured ridicule. The explanation seems simple: my musical talents are not quite up to scratch. Well, that, or maybe, just maybe, my friends have been ganging up on me for years, conspiring to pull a massive practical joke (“Oh – it was Happy Birthday you tried to hum! Sounded like something from Wagner to me.”). The bastards.

Anyway, all became clear as I received this illuminating article from my good friend and academic brother-in-arms, Jonas Heide Smith, who has a blog of his own detailing the progress of his PhD thesis on cooperation and conflict in computer games (not related to the following).


"the music tapping study conducted by Elizabeth Newton (1990). Participants in her study were asked to tap the rhythm of a well-known song to a listener and then assess the likelihood that the listener would correctly identify the song. The results were striking: Tappers estimated that approximately 50% of listeners would correctly identify the song, compared with an actual accuracy rate of 3%. What accounts for this dramatic overestimation?

The answer becomes immediately apparent when one contrasts the perspectives of tappers and listeners, as Ross and Ward (1996) invited their readers to do when describing Newton’s results. Whereas tappers could inevitably “hear” the tune and even the words to the song (perhaps even a “full orchestration, complete with rich harmonies between string, winds, brass, and human voice”), the listeners were limited to “an aperiodic series of taps” (Ross & Ward, 1996, p. 114). Indeed, it was difficult from the listener’s perspective to even tell “whether the brief, irregular moments of silence between taps should be construed as sustained notes, as musical “rests” between notes, or as mere interruptions as the tapper contemplates the “music” to come next” (p. 114). So rich was the phenomenology of the tappers, however, that it was difficult for them to set it aside when assessing the objective stimuli available to listeners. As a result, tappers assumed that what was obvious to them (the identity of the song) would be obvious to their audience."


The above quotation comes from a recent research article documenting what I call the Angry Email syndrome - basically, that people who read emails surprisingly often misinterpret the emotional tone of the message, and most often in a bad way. Read an abstract of the study, or download the study itself (in PDF).

August 22, 2006

The Death of Complexity and The Rise of Small Things

I have an obsession with simple things.

Normally, we take pride in getting the complex stuff right. It is more glamorous, more prestigious; getting simple things right seems so mundane in comparison. The formulation of the grand, overarching five-year strategy traditionally occupies the finest minds in the company (or at least those with the highest pay level). The actual day-to-day implementation – making sure that the product will in fact work – well, that is more appropriate for lesser minds to deal with. There is little doubt that in most organisations, doing strategy is somehow ‘finer’ than doing implementation.

There is something fundamentally wrong with our obsession with complexity. I started thinking about this when I was a platoon commander in the army, participating in large-scale field exercises. There, I noticed that in 90 to 95 percent of the cases, when something went wrong, it wasn’t because of the complex stuff. The complex stuff received a lot of attention and careful advance planning, and had a decent success rate, all things considered. It was the simple stuff that went wrong. Somebody would confuse left and right, and botch up a major part of the exercise. Someone else would accidentally push the wrong button on the radio, so that the support divisions didn’t hear the attack order, with predictable results. Or a third somebody would mix up two numbers and end up calling an airstrike on his own headquarters (not a great career move).

To sum it up: most of the time, it is the simple things that go wrong. And this is really stupid, because the simple things are a lot easier to fix than the complicated things. Nobody can fully mastermind a global, multi-stage product launch, but we can make sure that the guy in the marketing division talks regularly with the guy in the sales division. We can’t predict all of the organisational changes that will take place because of our flashy new sales database system, but we can make sure that the user interface can be understood by the people who are to enter the data in the first place.

So, here’s my suggestion: for the next week or two, forget everything about the strategy of your company. Postpone your meetings with all those visionaries that want to tell you about the future. Instead, start focusing on the simple things, on the here and now. And don’t stop until you are sure that they work, and work well. Only then will it make sense to return to the higher spheres of planning, secure in the knowledge that your grand visions won’t founder on the shores of simplicity.

August 21, 2006

Ultimate vs. proximate causes

A mental framework that I have found useful is the distinction between proximate and distal causes – or, if you prefer, immediate versus ultimate explanations for things.

The best way to illustrate the difference is to consider the following question: Why do we have sex? A proximate (or near) explanation is simply to say ‘because we enjoy it’. Sex feels good, so we generally try to have it often.

This is true, but it is not complete. To fully answer the question, it is necessary to then ask ‘but why are human beings built so that we find sex enjoyable?’ The answer to this comes from evolution: the tendency to like and want sex is hardwired into our nature, because sex has been good (critical, actually) for the reproductive success of our ancestors. Those of our distant ancestors who didn’t have sex simply didn’t have offspring, and so never passed on their sex-hating genes. As it is, we are all descendants of people who went to great lengths to get sex, and thus managed to populate the world with their children; this is the reason why we like it.

This explanation is a so-called ultimate (or distal) explanation. It is what pops up when you keep asking ‘why’ to the first answer. Another, slightly different example of the same thing is taken from The Economist (I can’t remember which issue): Why does the water in a kettle boil? One cause could be “the water boils because heat is transferred from the hot stove to the kettle”. A completely different explanation is to say “because I wanted a cup of tea”.

The point is that there can be a hierarchy of causes for things, and that those causes are not necessarily mutually exclusive. It all depends on what you are trying to do when you are posing the question.

August 04, 2006

The Hedonic Treadmill

In the scientific study of happiness, a particularly interesting finding is that people quickly adjust to newfound wealth - even major increases in income or quality of life have only a passing effect on your basic happiness level. Lottery winners are in heaven for a month or two, and then it's back to feeling averagely happy (or unhappy) again.

This universal phenomenon is called the hedonic treadmill. The hunt for happiness is a futile endeavour, at least if the goal is to become happy. We think that happiness will be ours when we have a private jet plane, but once we get it, the goalposts move once again, and we realise that true happiness comes only when we have two personal jet planes. And so on.

Interestingly, according to Daniel Nettle's book Happiness: The Science Behind Your Smile, there is a sound evolutionary explanation for this. Our drive for happiness is nature's way of keeping us striving to improve our lives. If it were easy to become happy, or if the effect were permanent, we would still be sitting in our caves, supremely happy because, hey, we have a cave to sit in. No reason to strive for higher things when you have a nice cave to sit in. No bears in it, either. Being unhappy, however, is a call for action: it makes us try to improve things. The human propensity to continually search for more happiness keeps us on our toes, ever looking for ways to do better.

On a side note, Denmark - a puny yet curiously wonderful nation of which I am a proud member - has recently been found to be the happiest country in the world. It must be all those girls biking around in summer dresses.

See my review of Nettle's book on happiness.

July 10, 2006

Why People Are Polite Towards Their Computer

Did you ever talk to your television? Have you ever given your computer a good angry whack because it didn’t behave? Fear not. You are not alone.

In general, people treat media like computers and televisions as if they were dealing with other people, not dumb electronic devices. This is because media are complex enough in their behaviour and appearance to activate the brain’s mental models for dealing with people. When people talk to their television, it is because their brains basically go “Hmm – that box over there looks like a human, and talks like a human. It must be human.” And so we talk to it.

Of course, on a higher level, we ‘know’ that the television can’t hear us, but this knowledge is surprisingly often absent when you look at how people actually behave. For instance, two social scientists found that people are polite towards their computers.

In essence, people were asked to perform a task using a computer, and afterwards to evaluate how the computer was to work with. Interestingly, people who did this evaluation on the same computer they had just worked on gave significantly more positive replies than the control group, who were asked to fill out the evaluation on a different computer standing right next to the first one. That is, people are polite towards computers.

And it is not because the people they tested were country hicks from some remote and media-ignorant village. They also tested tech-savvy people who dealt with computers every day, and surprisingly, the effect was more pronounced for this group. In effect, this ‘politeness’ is similar to the automatic politeness we show if someone comes up to us after giving a speech and asks, ‘So, how did I do?’. Face to face with the speaker, most people are more polite than they would be, were they asked by a third person.

The learning point is this: media are complex enough to activate the brain’s mental models for dealing with people – or, as the experimenters put it, ‘new media engage old brains’. This obviously has enormous significance for the way we should design media products so that they become more user-friendly.

The experiment is taken from the book The Media Equation, written by Reeves & Nass, which is filled with similar stories on how our hard-wired brain affects our behavior (see my review here).