I Don't Know Who Jan Fleischhauer Hangs Out With

Will Wilkinson argues that Mad Men is so attractive to many male viewers because the series seems to show "how sweet it would be to have women take care of all the annoying details of life and smoke at work." According to Jan Fleischhauer, many Germans feel much the same way about Vladimir Putin:
Not despite, but because of their education in pacifism, gender sensitivity, and perpetual anti-discrimination, a good portion of Germans are so fascinated by Russia and its leader.

Putin stands for the suppressed Other, which exerts an irresistible allure precisely because it appears so self-assured and undisguised.
This explanation would of course be more convincing if it were first established that the fact to be explained actually obtains. I, at least, have not noticed any particular enthusiasm for Putin in Germany.

Perhaps I am mistaken, though, and Fleischhauer is right. The problem is that neither Fleischhauer nor I have valid representative data on Germans' opinion of Putin. One's perception of the world then depends on whatever one happens to pick up. A fundamental human error of perception is to take "whatever one happens to pick up" to be more representative than it is. Sociologists do not fret over sampling problems and question wording because it lets them write wonderfully learned articles, but because they are trying to get beyond the level of everyday perception.


Exit, Voice, and Loyalty, and Spotify

O.k., I'll admit I've never read Hirschman's book, but I'm wondering. In the last post I described how you can react to the Spotify relaunch, which makes the product considerably worse, by switching back to, and remaining on, an older version of the software, using a trick not approved by Spotify (subverting the auto-update function). In Hirschman's scheme, what is this? The post itself can, I guess, be interpreted as voice, but what I (and many others) do cannot. It is exit, of a sort, because I exit the use of the current software as envisioned by the company; but then, it's not, but rather loyalty, because I remain a paying customer. Should Hirschman have called his book Exit, Voice, Loyalty, and Creative Adjustment?


How to Go Back to the Old Version of Spotify *Permanently*

You may have noted that Spotify decided to completely fuck up its interface in the new relaunch. Below is a slightly clarified version of a how-to that user AriMc99 (Gig Goer) posted on the Spotify forums, explaining how to install the older version of Spotify and keep it from updating automatically. Of course, that means you lose *all* future updates, including beneficial ones. The following worked for me, using Windows 7 on a PC, but if something goes wrong, you've only got yourself to blame. I take no responsibility if your computer is unusable afterwards.

1. Google for and download SpotifyInstaller.exe v., the one just before this horrendous new pile of puke.

2. At this point, you should naturally check the file for viruses.

3. Double-click the downloaded file to install.

4. You'll be told that you already have a version of Spotify installed and asked whether you really want to re-install. Yes, you do.

5. Then find the location of the current spotify.exe, usually C:\Users\<username>\AppData\Roaming\Spotify or C:\Documents and Settings\<username>\Application Data\Roaming\Spotify. I had to use Windows' search function and look through the entire hard drive for "roaming" to find it.

6. Open this directory and do all of the following there.

7. Create a new empty text file.

8. Name it Spotify_new.exe

9. Right click the file and check the appropriate box to make it read only.

10. Create another new empty text file.

11. Name it Spotify_new.exe.sig

12. Right click the file and make the file read only.

Note: In one case, I had to confirm overwriting to save the text file in the directory; in the other, I had to manually delete the file that had the same name as the newly created file. I forget which was which.

Apparently the trick is this: upon closing, Spotify tries to create new files with exactly these names, which would then be used to install the update. But it cannot, as the files are read-only. Hence, no updates.
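For those who prefer a script, steps 7 to 12 (plus the note about deleting pre-existing files) can be sketched in a few lines of Python. The directory path follows step 5 and is an assumption; on a non-Windows machine the APPDATA variable is absent, so the sketch falls back to the current directory just to stay runnable.

```python
import os
import pathlib
import stat

# Directory from step 5 (assumed); fall back to "." where APPDATA is unset.
spotify_dir = pathlib.Path(os.environ.get("APPDATA", ".")) / "Spotify"
spotify_dir.mkdir(parents=True, exist_ok=True)

for name in ("Spotify_new.exe", "Spotify_new.exe.sig"):
    decoy = spotify_dir / name
    if decoy.exists():
        # Per the note above: remove any file already carrying this name.
        decoy.chmod(stat.S_IREAD | stat.S_IWRITE)
        decoy.unlink()
    decoy.touch()                  # empty placeholder file
    decoy.chmod(stat.S_IREAD)      # read-only, so the updater cannot replace it
```

After running this, Spotify's updater should fail to stage its download files, just as in the manual steps.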


If You Make One Wrong Assumption, All Kinds of Shit Can Follow

From Steve Sailer's review of Gregory Clark's new book, The Son Also Rises (which I have not read but might):
Economists [...] assumed that social mobility multiplies at the same rate with each new generation. If the correlation coefficient [...] of income between father and son is 0.4, then between grandfather and son it was imagined to be 0.4 squared or 0.16. Instead, it’s somewhat higher (0.26 in one study) due to regression toward the mean [...]. If a rich man has a son of only average income, his grandson is likely to earn somewhat above average.
This doesn't seem like such an advanced insight to come up with, so one might think that someone should have thought of it long ago and convinced all the others.
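The quoted numbers can be reproduced with a toy model of the kind Clark has in mind: measured income is a noisy indicator of a slowly regressing latent status. The persistence and noise parameters below are my own choices, picked to match the 0.4 and 0.26 in the quote; this is a sketch, not Clark's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Latent "status" regresses to the mean slowly (persistence b); measured
# income is status plus noise. Both parameters are assumptions chosen so
# the father-son income correlation comes out near 0.4, as in the quote.
b = 0.65                      # persistence of latent status
noise_var = 0.625             # variance of income measurement noise

x1 = rng.standard_normal(n)                               # grandfather's status
x2 = b * x1 + np.sqrt(1 - b**2) * rng.standard_normal(n)  # father's status
x3 = b * x2 + np.sqrt(1 - b**2) * rng.standard_normal(n)  # son's status

s = np.sqrt(noise_var)
y1, y2, y3 = (x + s * rng.standard_normal(n) for x in (x1, x2, x3))

r_father_son = np.corrcoef(y1, y2)[0, 1]       # about 0.40
r_grandfather_son = np.corrcoef(y1, y3)[0, 1]  # about 0.26, not 0.40**2 = 0.16
print(round(r_father_son, 2), round(r_grandfather_son, 2))
```

Analytically, the one-generation income correlation is b/(1 + noise_var) = 0.65/1.625 = 0.40, while the two-generation correlation is b²/1.625 = 0.26; squaring 0.40 instead gives 0.16, the economists' mistaken guess.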

But then, it is not that surprising. Mainstream economics is a blank-slate science, as is the other discipline studying intergenerational mobility, sociology. To paint with a broad brush, the two differ in the timing of the influences they deem important. Standard economics sees everybody as basically the same, but subject to different opportunities and restrictions in a given situation. In contrast, sociologists typically think that people enter situations exhibiting vast differences, which result from social influences from birth onwards. Neither considers that large and important differences may already exist at birth (Hence, how could regression to the mean be important? What mean?).

This assumption has been known to be wrong for decades. Clinging to it causes all kinds of problems. Perhaps the main symptom of this in sociology is researchers' tendency to view a host of things as exogenous which are, in fact, endogenous. Such as, oh, socioeconomic status, the discipline's favourite variable. Once you realize there may be an endogenous component to status, you'll start doing lots of eyerolling when reading sociology journals. After a while, eyeache sets in.


Publication Bias: Things Are Looking Up!

I am happy to relay that my first peer-reviewed article was published today*, especially because the coefficients of interest in the preferred models are all insignificant.

*In the past, I have published an article in an online journal that says it's peer-reviewed on the home page, but, judging from my experience, isn't.


The False Dichotomy Fallacy When There Are Only Two States of the World

The term false dichotomy fallacy (or fallacy of the false dichotomy) is typically used when a person concludes that your position is B because you committed to the view "not A", even though there is at least one other position (C, D, E . . .) one can take. For example, when you say that a certain human trait is not 100% environmentally determined, people will often assume you think it is 100% genetically determined, despite the fact that there are a lot of numbers between 0 and 100. In so doing, they are committing the false dichotomy fallacy.

This fallacy, or a variant of it, can be committed even if there are only two possible states of the world. Suppose someone was about to toss a coin and declared: "This one will certainly come up heads." You might then say, "I wouldn't be so sure about that" and the person might reply, "Oh, so you think it will come up tails?" In this example it's obvious: You didn't mean that the other state of the world is certain to come to pass, you simply meant to express uncertainty, given that we cannot know the result of the coin toss.

While the example is a bit removed from most real-life situations, variants of it seem to come up quite frequently. Generally, expressing doubt that X is true is likely to be read as asserting that X is not true. People tend to go about things as though one had to take a confident position on as many issues as possible, and they assume others feel the same way. (In U.S. discourse, "opinionated" is usually meant as a compliment.*) This may be a sign that uttering opinions has little to do with truth-seeking and a lot to do with positioning oneself in social space by signaling what type of person one is.

*Until a few minutes ago, I thought that there was no term in German for the word "opinionated". Now I see that the dictionary I consulted gives two terms that are clearly negative, eigensinnig and starrsinnig.


Around the Blogs, Vol. 106

2. "Nonshared environment" might best be conceived of as noise, not environment, says Kevin Mitchell.

3. External validity alert: Are patients in medical trials selected for large treatment effects? (Andrew Gelman/Paul Alper)

4. Chris Bertram makes a surprisingly good case for the argument "Squeezing the rich is good: even when it raises no money".

5. "Is there no racial bias precisely because it seems like there is?" Ole Rogeberg takes us into the mind of the microeconomist.

8. 50 great book covers from 2013, collected by Dan Wagstaff (via)

9. The low-hanging fruit of immigration: Bryan Caplan offers another metaphor.

12. What's it like to hear voices that aren't there? (Christian Jarrett/L. Holt and A. Tickle)


A Social Scientist Does Research

Eszter Hargittai counts some beans:
The VW Super Bowl ad features German engineers. The story goes as such: every time a VW reaches 100,000 miles, one of the engineers in Germany gets “his wings”. I didn’t find the ad particularly interesting until I realized that none of the engineers getting wings were women. In fact, there were barely any women in the video. Most prominent was the one in the elevator getting slapped by a male engineer’s wings.

There were ten male engineers featured who clearly got wings. It looks like 13% of engineers in Germany are female. So even going just by that statistic, one of the 10 featured should have been a woman.
Exactly! If you cannot rely on commercials to give a fair representation of reality, what can you trust?


50 Great Tracks of the 1950s (Ranks 25-1)

Yay! It's the second installment!

(Alternative link)


The Best Blog Posts of 2013

It's about time, so here.

As usual, brackets are appended to each link to indicate whether the post is Long, Medium-length or Short; High-Brow, Mid-Brow or Low-Brow; and Funny or Not.

For other years' lists, use the tag.

15. Offsetting Behaviour: "Social Costs and HPV", by Eric Crampton

14. Discover: "Why Race as a Biological Construct Matters", by Razib Khan (L; HB; N)

13. The Power of Goals: "Home Sweet Home", by Mark Taylor (L; MB; N)

12. Crooked Timber: "New Tools for Reproducible Research", by Kieran Healy (S; MB; F)

11. German Joys: "The Metamorphosis (US Summer Movie) Elevator Pitch", by Andrew Hammel (S; MB; F)

10. Code and Culture: "You Broke Peer Review. Yes, I Mean You", by Gabriel Rossman (L; MB; N)

9. EconLog: "The Homage Statism Pays to Liberty", by Bryan Caplan (M; MB; N)

8. Scatterplot: "Annals of Self-Refuting Tweets", by Jeremy Freese (S; MB; F)

7. Overcoming Bias: "Future Story Status", by Robin Hanson (M; HB; N)

6. Gulf Coast Blog: "Defamiliarization, Again for the First Time", by Will Wilkinson (L; MB; N)

5. Armed and Dangerous: "Preventing Visceral Racism", by Eric S. Raymond (L; MB; N)

4. Askblog: "It Is Sometimes Appropriate . . .", by Arnold Kling (M; HB; N)

3. EconLog: "Make Your Own Bubble in 10 Easy Steps", by Bryan Caplan (M; LB; N)

2. Armed and Dangerous: "Natural Rights and Wrongs?", by Eric S. Raymond (M; HB; N)

1. Falkenblog: "Great Minds Confabulate Like Small Minds", by Eric Falkenstein (L; HB; N)

Thanks and congrats to all above.


50 Great Songs from the 50s (Part 1: Ranks 50-26)

Happy new year, everybody (again)! I hope some regular readers are already missing the "Best Blog Posts of 2013" list. Fret not! I was on holiday and am still reading through the backlog of posts that has accumulated in my reader. I guess the list will be coming some time next week.

In the meantime, here's the first part of my favourite 50 tracks from the 1950s. While such lists are always limited by the compiler's lack of knowledge of the subject - nobody can know everything, or nearly everything - my knowledge of 50s music is so limited that in this case I've refrained from calling it "the greatest songs of . . ." or some such thing.

Next and last installment in a week's time. If the playlist below does not work, try this link. Enjoy.


Why Should You Talk to Your Friends?

Most people would agree that (i) one of the reasons people have conversations is to exchange information, and (ii) there are other reasons. An interesting question in this respect is to what extent voluntary, private conversations serve the purpose of exchanging information. It seems to me that even portions of conversation that appear to be about the exchange of information are not, actually. I say this from experience. Sometimes I have been asked things that I couldn't answer right away, or couldn't answer in a way I would have deemed appropriate, and offered to follow up later on. Once I was asked my opinion about a topic (I don't remember what it was) and answered that my view was quite complex and I probably wouldn't be able to express it appropriately on the spot, but that, fortunately, I had written a blog post about it in which everything was laid out in a well-structured manner, and I would be happy to send a link. That didn't go down well. If you would like to adjust downwards your estimate of what percentage of private conversation is for exchanging information, you should try something similar sometime.

So, what is the purpose of having private conversations? Reasons sometimes given include things such as affirming group identity, exchanging jokes (laughing is fun, as is having one's jokes laughed at), getting into the other person's knickers, and so forth. All of these are real, but it seems that the main reason for having conversations with people you like is having conversations with people you like, which is pleasurable in itself. It's like listening to music, the purpose of which is not gaining information about sounds.

A related thought: You could do the following. Write down the five or ten points that you think best define the way you view the world. Then have someone you consider a good friend guess what you wrote. I am not saying you should do this. I haven't and I won't. Happy new year.


Around the Blogs, Vol. 105

3. Useless (German) news (Andrew Hammel)

4. Fun with Google ads (John Holbo) (By the way, I couldn't replicate that result. Generally speaking, it seems we get far fewer Google ads here in Germany than they get in the U.S. Or does Google know I rarely buy stuff on the net?)


Why So Few Private Schools in Germany?

One of those many things that, for quite some time, I had been wondering about in the sense that I had stored it in the folder "phenomena not understood by me", but not in the sense that I had taken steps to transfer it into the "phenomena understood by me" folder is that Germany has so few private schools, compared to the U.K. and the U.S. Now there seems to be an obvious hypothesis. That is, Germany has a tiered (tracked) school system, while the U.K. and the U.S. do not. Hence, in order to separate your kids from the riff-raff, you don't need to send them to a private school if you're in Germany (provided they manage to get into a first-tier school).

That obviously entails the hypothesis that untracked school systems bring about private schools. If you compare countries, you'll want to adjust for the right variables. Someone else do it!


Three Answers to the Question, "What Is Intelligence?"

The Quip: “Intelligence is what you need when you don’t know what to do”. Carl Bereiter coined this elegant phrase. [...]
The Explanation: “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.” Linda Gottfredson and 52 leading psychometricians agree with this explanation. [...]
The formula: g+group+specific skill+error, where g accounts for about 50% of the variance. [...]
That is from James Thompson's blog Psychological Comments, which I've added to my roll. It delivers what it says on the label, with a focus on - you've guessed it - intelligence. I'm particularly grateful to him for providing me with a label for an error (popular with sociologists and the general public) that has long been getting on my nerves: automatically interpreting correlations between socioeconomic status (not my favourite concept in the first place) and some outcome as an effect of the former on the latter, without even considering the possibility that there may be psychological constructs that influence both SES and the outcome (e.g., see my discussion here). He calls it the sociologist's fallacy. Not that imaginative, really, is it? Man, I really should have thought of that myself.

Anyway, the blog's recommended.



Tuned in to the news channel last night to watch a documentary on the Aryan Brotherhood. For a minute there I thought, "Wait a minute! Nelson Mandela was a member of the Aryan Brotherhood? That can't be right, can it?"


The Law of Yawn

Regular readers may remember a post about identification of causal effects I wrote in August. Here's the full text:
The better a model is at identifying a causal effect, the less likely it is the effect is going to look substantial. That's because of (i) publication bias, (ii) how the world works.
You may note that that text contains zero examples - it's just a general impression plus some armchair theorizing. Thankfully, Steve Sailer provides an example from commercial marketing research:
In fact, one side effect of bad quantitative methodologies is that they generate phantom churn, which keeps customers interested. For instance, the marketing research company I worked for made two massive breakthroughs in the 1980s to dramatically more accurate methodologies in the consumer packaged goods sector. Before we put to use checkout scanner data, market research companies were reporting a lot of Kentucky windage. In contrast, we reported actual sales in vast detail. Clients were wildly excited ... for a few years. And then they got kind of bored.

You see, our competitors had previously reported all sorts of exciting stuff to clients: For example, back in the 1970s they'd say: of the two new commercials you are considering, our proprietary methodology demonstrates that Commercial A will increase sales by 30% while Commercial B will decrease sales by 20%.


We'd report in the 1980s: In a one year test of identically matched panels of 5,000 households in Eau Claire and Pittsfield, neither new commercial A nor B was associated with a statistically significant increase in sales of Charmin versus the matched control group that saw the same old Mr. Whipple commercial you've been showing for five years. If you don't believe us, we'll send you all the data tapes and you can look for yourselves.

In the social sciences - and I would include marketing - there probably are few cases when the effect of X on Y is genuinely zero. Just about everything influences everything else, in a roundabout way. There's a flipside to that: The influence of single factors is usually very small. A core reason for that is that people's personalities and behaviour are pretty stable, which is why the concept "personality" makes sense.

Of course, there are also Xs that have a large influence on Y. The problem is that researching this is, or soon becomes, pretty boring. In fact, when people say "Did we really need a study for that?", they sometimes have a point. When an influence is large, it will usually (though not by logical necessity) be readily apparent. You don't need to be a social scientist to see that adolescents' friends influence their behaviour.

So, shut up shop? I think not. One, you do need a social scientist to tell you how large an obvious effect is. Two, the above allows for a sweet spot where effects are not obvious, but large enough to detect. Three, and this is perhaps the most important point, it is a worthwhile endeavour to show that the effect of X on Y really is close to zero, contrary to what some people would have you believe. Especially if X costs money.


Sour Spots

I recently gave a talk at a large venue to nearly 1,000 people. It seemed to go well but who am I to judge? The experience of giving a speech is radically different from the experience of listening to one. An adrenalin-drenched emotional rollercoaster for a nervous speaker may nevertheless be unbearably tedious for the listeners. A superbly honed performance may produce a sense of suspense, surprise and delight for the audience; the result of many hours of rehearsal and repetition for the speaker. Yet it can be very hard indeed for the speaker to know what worked and what didn’t.
I remember a job interview that I thought had gone very well. For once, I hadn't been nervous. A few days later I got a call from the lead interviewer, informing me that they didn't want me. Why? She gave some reasons, but when I made her go off-script, it became pretty clear that they thought I'd been arrogant. Shucks!


A Quick Test of the Matt Yglesias Hypothesis on Density and Crime

I’ve recently come across a 2011 post by Matt Yglesias (via, via) in which he presents the following little theory of population density and crime:
higher density helps reduce street crime in an urban environment in two ways. One is that in a higher density city, any given street is less likely to be empty of passersby at any given time. The other is that if a given patch of land has more citizens, that means it can also support a larger base of police officers. And for policing efficacy both the ratio of cops to citizens and of cops to land matters. Therefore, all else being equal a denser city will be a better policed city.
While plausible, this is also somewhat surprising because in the past people have come up with ideas on how density might increase crime. Which is not too surprising given that there is a positive correlation between density and crime (denser cities have higher crime rates). A while back, I half-heartedly reviewed the literature on this; people seem to have come to the conclusion that there’s not a lot to it. But that would suggest the effect is (close to) zero rather than negative.

As it happens, I have a dataset for 125 U.S. cities sitting on my hard drive. So let’s run some quick regressions. All are weighted by a variable that divides 1990 population size by the unweighted sample mean for 1990 population size. That means that each city is given a weight proportional to its size while the sample size stays the same; as a consequence, each crime has the same influence on the results irrespective of whether it happens in a small or a large city. While Yglesias writes in the context of having been assaulted, I will not use data on assault, which seems not to be particularly valid, but rather robbery, as official robbery rates appear to correlate highly with the true rates and robbery is the prototypical street crime. I use 1990-2000 changes in density per square mile and changes in robberies per 100,000 population known to the police as the variables of interest. The use of change data takes care of stable differences between cities that may contaminate the results. The estimation method is linear WLS of changes in (untransformed) rates.
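Since the city dataset isn't public, here is a minimal sketch of the weighting scheme and estimator with simulated stand-in numbers; the -.25 slope is built into the fake data, and all names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cities = 125

# Simulated stand-ins for the real city data (magnitudes loosely matched).
pop1990 = rng.lognormal(mean=12.0, sigma=1.0, size=n_cities)
d_density = rng.normal(579, 842, size=n_cities)   # change in persons per sq. mile
d_robbery = -0.25 * d_density + rng.normal(0, 80, size=n_cities)

# Each city's weight is its 1990 population divided by the unweighted
# sample mean, so the weights average 1 and the sample size stays 125.
w = pop1990 / pop1990.mean()

# WLS by hand: beta = (X'WX)^(-1) X'Wy
X = np.column_stack([np.ones(n_cities), d_density])
beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * d_robbery))
print(beta[1])   # recovers a slope near the -.25 built into the fake data
```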

I am not going to go through the trouble of embedding tables in blogger, but simply report results for the variable of interest in the text. Bivariate regression: B = -.25 (p < .001), meaning that an increase in density of 1 person per square mile is associated with a decrease of .25 robberies per 100,000 persons.

Next, let’s worry about immigration. It is, unsurprisingly, correlated positively with density and there are some students of crime who think that immigration decreased crime rates in the 1990s U.S. While I don’t necessarily agree with this, let’s control for changes in the percentage of the population that is foreign-born anyway. This makes next to no difference: B = -.27 (p < .001).

This may mean that density reduces robbery, robbery reduces density, there’s a bunch of unmeasured variables that influence both, or a combination of the above. I am not going to solve that problem here. But what I will do is control for some initial conditions (i.e., 1990 levels of variables) that may influence both of our variables of interest. First, the robbery rate is particularly likely to decline where it is high, so let’s control for 1990 levels of robbery rates. Also, better economic conditions will tend to attract people (and hence increase density) and perhaps also foster future decreases in crime. So let’s throw in 1990 values for poverty and unemployment rates, as well as the median of 1989 household income. This leads to a substantial reduction in the coefficient: B = -.14 (p < .001).

Is that a lot? The mean of changes in population density is 579 per square mile with a standard deviation of 842; for changes in the robbery rate, those values are 366 and 345, respectively. If we were to interpret the coefficient of the last regression as causal, this would mean that, in the sample as a whole, increases in density averted 579 × .14 ≈ 81 robberies per 100,000 population, meaning that changes in density would be responsible for about a seventh of the observed decline in robberies. That’s a lot.

Of course, you shouldn’t take these little analyses all that seriously. I haven’t worried about functional form or heteroskedasticity and the equation isn’t all that convincing as a causal model.

Still. File under “suggestive”.


Screamers Gone Awry: The Availability Heuristic Meets Selection on the Dependent Variable

I've just finished Soccernomics by journalist Simon Kuper and sports economist Stefan Szymanski. Skipping the matter of the title, I can say that the authors oversell plausible ideas and the results of multivariate regressions, seem to believe in best practice analysis, have already been shown wrong by developments that happened after the book was finished, and yet, I read its 400 pages in three days (which is very quick by my standards). It's so entertaining! But let's not give the authors too much credit. After all, what could go wrong when you pack football and econometrics into one book? It's almost as irresistible a combination as topless darts.*

An interesting factoid from the book is that only about two percent of shots from outside the eighteen-yard box result in goals. The players are trying to score one of those spectacular "screamers" that they remember from televised matches, Kuper and Szymanski guess, and hence fall prey to the availability heuristic. But perhaps it's not simply that players remember spectacular goals better than failed attempts. Watching many live matches takes a lot of time, and matches to watch cluster on certain days, so much of football coverage is watched in the form of summary highlights of five to ten minutes. Perhaps that's especially true of professional footballers, who will often themselves be at work when there are live matches to watch on the telly. And the highlights don't show all the attempts from outside the box, but they certainly show all that result in goals. In contrast, attempts from inside the box (I'm guessing) are shown at a much higher rate. Selection on the dependent variable. To be precise, that's even differential selection on the dependent variable.

On top of that, when live matches are watched, the teams that are on will often be way above average, including at converting long-distance attempts. Selection on the dependent variable again. Hence, some professional watches Gareth Bale hammer the ball into the top corner on Wednesday night and tries the same next Saturday, only to watch it sailing into the stands.

What coaches should do: Show their players a truly random sample of shots from outside the area. Ten minutes once a week should help.

*Best sentence from the link: "When L!VE TV was Millwall FC's shirt sponsor, they originally wanted to advertise this show on the shirts, but club bosses nixed that idea because they were worried it might encourage fans to throw darts." The sentence is funnier if you know a bit of context. A fine link, that one. It does not, however, mention my favourite episode "Topless Darts on the Titanic" ("Oh, no! An iceberg! Let's hope it melts before we hit it!").


Around the Blogs, Vol. 103

Bit late today, but here are some recent posts that may be worth your time.

1. Andrew Gelman knows how randomization works in animal studies. (Post starts off with disturbing image)

2. Gabriel Rossman has tips on how to be a better journal reviewer, with a focus on decreasing turnaround times. Fabio Rojas links and summarizes.

3. Christian Jarrett summarizes a new paper by Brian D. Earp, Jim A. C. Everett, Elizabeth N. Madva, and J. Kiley Hamlin, who cannot replicate the "Macbeth effect", i.e., the finding that feelings of disgust increase the desire for physical cleaning.


Assorted Thoughts #5: Grand Theories Edition

1. Theory of Art: Much of what looks like "selling out" is actually an ageing effect.

2. Theory of Literature: There are, basically, three features of a novel: style, story, and character. When an author gets two right and does not mess up the third too badly, the book's fine.

3. Theory of Statistics: The main function of significance tests is to allow researchers to treat coefficients as though they were zero, which simplifies things wonderfully.


Pebbles, Vol. 45

6. Short research article: Evidence against the hypothesis that red sports clothing causes winning (Thomas V. Pollet and Leonard S. Peperkoorn). If I read that correctly, though, assignment is not random.

7. 15 types of movie posters (Houke de Kwant). Arguably, this is a rational business strategy. If you have a way of signaling "This is an action movie with lots of explosions", that's what you should do. After all, posters are marketing devices first.

12. Correlates of polygamy in Africa (James Fenske) (via)


Paging Fritz Heider

Meine Ehre heißt miau: Nazis with cats (via).


"Faith" Is Such a Useful Concept

Atheists sometimes point out that there is no reason to believe in god exactly because such a belief is a matter of faith. As faith is defined as "belief without evidence", this is equivalent to saying that you should not believe in god because there is no evidence for his existence. This is obviously based on the presumption that you should not believe in the existence of something if there is no evidence for it.

The concept of faith comes in handy in a few cases in which I personally feel that it's bloody obvious that a certain view is bonkers, but also know that my making the statement "that's obviously bonkers" will not be awfully convincing to you if you do not agree anyway. Take "natural rights". It should be self-evident that they don't exist. Yet there's a famous document, revered by many, which states that it's self-evident people are endowed with "certain inalienable Rights". What if you want to argue that such rights don't exist? You can point out that there is no evidence for their existence. They're a matter of faith.

Closely related, the philosophical position called "moral realism" (which holds that some moral propositions are objectively true, independent of psyches that recognize them as true) is a matter of faith. So is the belief that objects can be said to have an objective quality (in the sense of "higher" or "lower" quality), implying that you can say that movie X is objectively better than movie Y and that if you don't agree you are mistaken. These positions should not be assumed to be true, as there is no evidence for them. You should not say that you're absolutely certain they're wrong either, as evidence for them may come along in the future. I have no idea what evidence for natural rights or moral realism or objective quality would even look like, but that doesn't mean it's impossible it will come along.

I should add that this post was in part inspired by reading Brad Taylor's post "Natural Rights Don't Exist" (via), which doesn't really deliver on the title, but succeeds in showing that the nonaggression principle can lead to very undesirable consequences. Yet once you're arguing in terms of consequences, you've already left behind the question of whether natural rights exist. He tries to weld the two issues together, but does not, I think, succeed. Perhaps he should simply have said that there is no evidence for the existence of such rights.


Around the Blogs, Vol. 102

1. The experiment Milgram chose not to publish (Tom Bartlett/Gina Perry) (via)

6. Why people dislike photos of themselves: Mirrors meet the mere exposure effect (Robert T. Gonzales). But don't miss the link in the last paragraph.

7. 26 great words from the OED (Carolyn Kellogg/Ammon Shea)

11. Paging Quetelet: Why song lengths are not normally distributed (Gabriel Rossman) (via)

13. Feelings of extreme bliss produced by targeted brain stimulation (Christian Jarrett/Fabienne Picard, Didier Scavarda, and Fabrice Bartolomei). Gimme, gimme, gimme!

14. Why wages don't fall during recessions (Bryan Caplan/Truman Bewley)

16. Another bonkers graphic presented by Kaiser Fung.

17. The impact on wages of: height; smoking; testosterone (Economic Logician/Petri Böckerman and Jari Vainiomäki/Julie Hotchkiss and Melinda Pitts/Anne Gielen, Jessica Holmes and Caitlin Myers).


New Paper: At Least One Method for Estimating the Effect of Genes Yields Misleading Results

Until recently, behavioural geneticists had to use twin samples to estimate the heritability of traits. The standard method estimates heritability - the contribution of genes - by exploiting the fact that identical twins are more genetically alike than nonidentical twins, who are more alike than unrelated people. A drawback of this method is that twin samples are hard to find. Recently, a method has become available that circumvents this problem; it's called genome-wide complex trait analysis (GCTA). The basic idea is to use differences in the actual genetic makeup of people to calculate heritability. It should not be confused with the genome-wide association technique, which "hunts" for specific "genes for" some outcome (the "gene for depression" or what have you).
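
To make the twin logic concrete, here's a minimal sketch using Falconer's formula, one classic (and deliberately simplified) version of the standard twin method - not necessarily the exact estimator used in the paper, and the correlations below are hypothetical:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate of heritability from twin data:
    h2 = 2 * (r_mz - r_dz), where r_mz and r_dz are the trait
    correlations within identical (MZ) and fraternal (DZ) twin pairs.
    MZ twins share ~100% of their genes, DZ twins ~50%, so doubling
    the difference in correlations attributes it to genes."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical numbers: if MZ pairs correlate at 0.8 and DZ pairs
# at 0.5, the estimated heritability is about 0.6.
h2 = falconer_h2(0.8, 0.5)
```

The point of the sketch is just that the method never looks at anyone's actual genome; it leans entirely on the assumed difference in genetic similarity between twin types, which is what GCTA gets around.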

A recent paper by Maciej Trzaskowski, Philip S. Dale and Robert Plomin (abstract; via) compares heritability estimates on the basis of standard and GCTA techniques. Results for the dependent variables show that GCTA estimates are considerably smaller than standard estimates for height, weight and intelligence. The real shocker, though, is the results for "behaviour problems" (such as depression or hyperactivity). While the standard analyses suggest considerable heritability, most GCTA estimates are zero or close to zero. Here's the result for self-report measures, with standard results on the left and GCTA results on the right:

Results for parent and teacher reports are broadly similar.

What's it all mean? Well, the authors include a long discussion section in their paper, but, frankly, much of it is above my head due to my very limited knowledge of genetics and associated research methods. The most important take-home message, though, is that at least one of the common methods for estimating the contribution of genes to human outcomes yields misleading results. This is very important, and it is to be hoped that the paper gets lots of exposure. I've done my part.


Scarily Scathing: Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, by Thomas Nagel

I have read Mind and Cosmos, in which Thomas Nagel argues that evolutionary theory must be supplemented with teleology because Nagel feels that current Darwinism is kinda iffy, and I can assure you that it's every bit as shoddy as the most negative reviews around the web would have you believe. Here's a selection:
Not only doesn’t Nagel deliver: he strikes out three times, with three distinct arguments as to why we should reject natural selection in its current, materialist form. Each of the book’s three main thrusts – involving consciousness, theoretical knowledge, and morality – begets a unique species of error. [...]

Teleology is certainly possible, and Nagel is not wrong to ask us to set aside our materialist presuppositions to consider radical alternatives. But he also needs to provide us with good reasons for believing that such radical alternatives are necessary. Nagel is unconvincing on this score, because it is not clear that we must amend scientific theories to solve philosophical problems, in such a way as to guarantee the maximal intelligibility of the world; or that intelligibility must be linked to probability; or that an evolutionary origin for cognition is self-undermining; or that moral realism and natural selection are incompatible (and if so, that it is the latter rather than the former that must be amended).

Nagel never explains why his intuition should count for so much

H. Allen Orr, "Awaiting a New Darwin", The New York Review of Books

The sufficiency of genetic variation to drive natural selection has been a central theme since R A Fisher’s great book, The Genetical Theory of Natural Selection. Nagel, a philosopher, tells us there’s not enough. Big result! But it’s completely unsupported by argument. Nagel says that he would “like to defend the untutored reaction of incredulity to neo-Darwinism … It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection”. This is just irresponsible. It is simply wrong to adjudicate the probability of mutations by an “untutored reaction of incredulity”.

Mohan Matthen, "Thomas Nagel's Untutored Reaction of Incredulity", The Philosophers Magazine

Are we really supposed to abandon a massively successful scientific research program because Nagel finds some scientific claims hard to square with what he thinks is obvious and “undeniable,” such as his confidence that his “clearest moral…reasonings are objectively valid”? [...]

We may, of course, be wrong in having abandoned teleology and the supernatural as our primary tools for understanding and explaining the natural world, but the fact that “common sense” conflicts with a layman’s reading of popular science writing is not a good reason for thinking so. [...]

Nagel’s arguments against reductionism are quixotic, and his arguments against naturalism are unconvincing. He aspires to develop “rival alternative conceptions” to what he calls the materialist neo-Darwinian worldview, yet he never clearly articulates this rival conception, nor does he give us any reason to think that “the present right-thinking consensus will come to seem laughable in a generation or two.”

Brian Leiter and Michael Weisberg, "Do You Only Have a Brain? On Thomas Nagel", The Nation


When Can You Trust (Expert) Intuition?

I read Malcolm Gladwell's Blink when it came out. It's a deeply dissatisfying book. He starts out more or less proposing that intuition, or at least expert intuition, is a kind of superpower; then he starts slipping in anecdotes about how quick decision making fails instead of succeeding, which naturally makes you wait for Gladwell to give you the rule or rules that let you distinguish between the two situations - and then the book's finished. As Steve Sailer paraphrased the message: "Go with your gut reactions, but only when they are right." You're left with a bunch of anecdotes.

I'm currently reading Daniel Kahneman's Thinking, Fast and Slow, which is - how to put it? - a better book than Blink. In one chapter, he summarizes the results of an adversarial collaboration with a researcher called Gary Klein on the topic of the trustworthiness of expert intuition (Klein trusted expert intuition, Kahneman didn't). They found that experts need the following to develop trustworthy intuitions (p. 240):
  • an environment that is sufficiently regular to be predictable
  • an opportunity to learn these regularities through prolonged practice
That may sound a bit obvious (once you know about it), but it has a number of interesting implications.

One, this explains why you cannot predict growth or crime rates: the environment's just too darn irregular. Or when the next large-scale terrorist attack on U.S. soil is to be expected: perhaps they will turn out to be highly regular, but if so, there's been no opportunity to learn that.

Two, these are exactly the conditions under which systematic prediction (say, using a regression model) also works. That is, there seem to be no cases when intuition works, but deliberation cannot. Intuition may be quicker, but it's not some magic pipeline into otherwise inaccessible truths.

Three, this gives you a rule for when to trust your own intuitions. You are probably an expert in something, such as your wife's facial expressions. If they have a stable relationship to your wife's psychological states and you've had ample opportunity to learn about that relationship, your intuitions about what they mean are probably trustworthy. On the other hand, your intuitions about what your newly-acquired lover's facial expressions mean may be off the mark.


Lou Reed (1942-2013)

Lou Reed, arguably the greatest artist of the 20th century, is dead.

Here's the Associated Press's take at the Washington Post. Here is Jon Dolan at the Rolling Stone. Here are Sam Jones and Shiv Malik at the Guardian.

Here's an LP length's worth of career highlights, in chronological order.

(Image embedded from Wikipedia)


The Methodology of Positive Economics - Reversed

Leigh Caldwell, a behavioural economist, writes about microfoundations in economic models. Microfoundations means that you don't just talk about aggregate-level variables, but model the decision-making of economic agents (typically persons) in your theory, and develop the aggregate-level predictions on that basis. Caldwell correctly points out that homo oeconomicus isn't very realistic and that, consequently, microfounded theories based on the idea of homo oeconomicus are wrong.

He goes on to outline two typical responses to the critique that humans aren't the superrational decision-makers many economic models portray them as. One, actual people are pretty close to homo oeconomicus; two, let's forget about microfoundations. Caldwell suggests that instead economic models be built on more realistic microfoundations.

Between the two of us, Caldwell's the only economist, but I'll still try to make the case that the above is all beside the point. Naturally, it all leads back to Friedman (1953). The article - "The Methodology of Positive Economics" - is all about microfoundations. Friedman's point is simple: He doesn't care whether the microfoundations are correct, as long as they give the right macropredictions, and they often do.

What Friedman doesn't tell you is this: Economic models are consistent with a lot of predictions, and hence a lot of microfoundations.

Let's take the economic theory of crime. If you ask economists, it starts with Becker (1968).* Among other things, the theory predicts that if the likelihood of being punished for a crime goes up, the volume of crime goes down, all other things equal. Ehrlich (1973) translated that into a microfounded model in which agents make decisions in part based on their rational calculations about the likelihood of being punished if they commit a crime. It's basically the same thing (frankly, I fail to see the point). Both models predict: Punishment up, crime down.

But they don't say by how much. And it's not just the economic theory of crime. All that mathematical modeling in economics is highly misleading: It looks exact, but all it will really tell you is the direction of an effect. The rest is left to empirical estimation.

You might think that's a big problem for economics, and compared to an ideal world of exact predictions, it is. But, in fairness, it's not as though the other social sciences deliver anything else. As far as I can see, all social science theory is about signs.**

Back to the example: If you can show that a rise in the likelihood of punishment leads to a reduction in crime, that's consistent with superrational decision-makers. It is also consistent with some decision-makers being superrational and all the other people not responding at all to the change in the likelihood of punishment. Etc. If you think it through you end up with something like the following: The finding is consistent with some of the people being sorta rational some of the time, and their effects outweighing the effects that are due to people behaving contrary to what the theory says. 
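
Here's a toy simulation of that point (all numbers are made up; nothing here comes from the crime literature): even when only 20% of agents respond to the punishment probability at all and the rest offend at a fixed rate, the aggregate sign prediction - punishment up, crime down - still comes out.

```python
import random

def aggregate_crime(p_punish: float, share_rational: float,
                    n: int = 100_000, seed: int = 0) -> float:
    """Toy model: 'rational' agents offend only when the expected
    payoff of crime is positive; everyone else offends at a fixed
    base rate, ignoring the punishment probability entirely."""
    rng = random.Random(seed)
    gain, loss = 1.0, 2.0  # hypothetical payoff from crime, cost if punished
    crimes = 0
    for _ in range(n):
        if rng.random() < share_rational:
            # rational agent: offend iff expected payoff > 0
            crimes += (1 - p_punish) * gain - p_punish * loss > 0
        else:
            # non-responder: offends at a fixed 50% rate regardless
            crimes += rng.random() < 0.5
    return crimes / n

# Raising the punishment probability lowers aggregate crime even though
# only a fifth of agents respond to it - same sign, smaller magnitude.
low_p, high_p = aggregate_crime(0.1, 0.2), aggregate_crime(0.5, 0.2)
```

So a macro-level finding of "punishment up, crime down" cannot, by itself, distinguish a world of uniformly superrational agents from one where most people never give punishment a thought.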

For Friedman and me, that's fine. Leave psychology to the psychologists.

But there is one very important consequence of all this: If your macro-level finding is consistent with a theory built on a microfoundation that assumes rational agents, this does not show many people act rationally in any substantial sense. This is important because economists like to argue otherwise, and soon you arrive at the stance that all drugs should be legal because drug addicts are rational. I'm open to the idea that all drugs should be legal, but a finding that increases in financial or nonfinancial drug prices decrease demand does not provide a strong argument in favour of legalization. If you want to argue individual decision making, bring individual-level data.

Note. All cites from memory. And not even a list of references!
*Of course, the ideas formalized in Becker's theory had been around for centuries, even in writing. E.g., Cesare Beccaria.
**There's also quite a bit of "theory" in the social sciences that's not really theory in the sense of a system of falsifiable hypotheses. Sociology is big in this department.


Around the Blogs, Vol. 101: Long Wait, Long List

Because I've been collecting for so long, there are a lot of links. Because there are a lot of links, I'm posting it early.

1. If the effect in question was found in a particularly small sample, should that strengthen or weaken your belief in the effect? (Eric Falkenstein) From the same author: A critique of Stevenson and Wolfers' happiness research.

2. Thoughtful, personal essay by Eric S. Raymond about the emotion and cognition of racism.

3. A body-mind theory of lefties and righties (Agnostic)

4. "Annals of Self-Refuting Tweets" (Jeremy Freese presents the American Sociological Association make an ass of itself)

5. How intensively are Germans actually being spied on by their own government? Nobody knows. (Niko Härting) (via)

6. "A conservative estimate is that we’re spending a million dollars per year per terrorist, maybe more – that’s not even counting Iraq and Afghanistan." (Gregory Cochran)

7. The case against (eating lunch) outside (Matthew Yglesias) (via)

8. Matthew Desseem reviews Rififi.

9. Person fixed effects and psychological testing.

10. The theory that Marcia Lucas contributed more to Star Wars' quality than is usually acknowledged. (Fabio Rojas)

11. A discussion of reviewing and reviewers (with a focus on sociology) (olderwoman and commenters)

12. Is US violent crime actually down? Looking at non-police data. (Steve Sailer)

13. "William Boyd’s Taxonomy of the Short Story" (Will Wilkinson)

14. How not to get published. (Andrew Gelman/Brian Nosek, Jeffrey Spies, and Matt Motyl)

15. Getting the priorities straight (Foseti) (on this blog)

16. Male feminists: Demand and supply. (Nick Borman)

17. Real life cases of amnesia that are stranger than fiction. (Christian Jarrett)

18. Season of birth is endogenous (Eric Crampton/Kasey S. Buckles and Daniel M. Hungerman)

19. A model of how the internet works (Marco Arment) (via)


Identification: A Rule of Thumb for Social Scientists

The better a model is at identifying a causal effect, the less likely it is the effect is going to look substantial. That's because of (i) publication bias and (ii) how the world works.


Seems It's Never Really, Really Hot in Peking (First Post from Berlin)

So, first post from Berlin. I can't really say much about the city yet, as I've been preoccupied with work and looking for a flat. So far, my predominant impression of Berlin is that it's hot, though that doesn't really distinguish the city from the rest of the country. Hence, a (heat-related) anecdote about Peking instead.

I've recently had breakfast with a couple - Chinese husband, German wife - who live there most of the year. They said that Peking is both extremely smoggy and extremely hot. But official temperatures rarely reach 40 degrees centigrade. The reason is easy enough to see: There is a rule that if it's 40 degrees or more, factories need to close for the day.

Lest you think this kind of thing only happens in extremely authoritarian societies: I once heard from people who were consultants with Copenhagen (?) city services. The city was obliged to provide snow clearing services when the ground was "covered by snow". Can you guess who were the last people in Copenhagen to notice when the ground was covered by snow? That's right: City officials.

If you teach stats, you may want to try and get your hands on a dataset of official Peking temperatures and see if you can quasi-replicate Quetelet: Presumably, actual temperatures follow a near-normal distribution, so in official Peking temperatures, you should see a pile-up just below 40 degrees and a suspicious absence of values at 40 and above. This might also be useful when introducing regression discontinuity approaches.
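
If you'd like to see what the censored distribution should look like before hunting for real data, here's a sketch with simulated temperatures (all numbers made up):

```python
import random
from collections import Counter

random.seed(1)

# 'True' daily highs: roughly normal, mean 37, sd 2 (hypothetical values).
true_highs = [round(random.gauss(37, 2)) for _ in range(10_000)]

# Hypothesized reporting rule: 40 degrees or more would close the
# factories, so anything at 40+ gets reported as 39.
official = [t if t < 40 else 39 for t in true_highs]

counts = Counter(official)
# The censored upper tail piles onto 39: officially, 39-degree days
# are now more frequent than 38-degree days, even though the true
# distribution declines smoothly past its mean.
```

A histogram of `official` shows the telltale bump at the last permitted value and nothing beyond it - exactly the signature to look for in the real series.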

P.S.: Another thingy the Chinese husband told me: A taxi driver asked him: "What? You have a European wife and you live in China? Why?"


Hiatus (Plus: Silly Playlist)

This blog will be on, in, or perhaps even at hiatus for about two weeks' time. I guess. It might be more, it might be less. The reason is that I'll be relocatin' tomorrow. To ease your pain, here's a Spotify playlist featuring songs featuring silly voices: