Which political science journals will have a data policy?

Making replication materials available for the research you do is A Good Thing. It’s also work, and it’s quite easy to never get around to. Certainly I claim no special virtue in this department, so I am always happy when there’s an institutional stick to prod my better nature in the right direction. One such institutional prod comes from academic journals and their data policies. If you have to give them your replication data before they’ll publish your paper, then you probably will. What sorts of journals have data policies?
Continue reading Which political science journals will have a data policy?

R to Latex packages: Coverage

There are now quite a few R packages to turn cross-tables and fitted models into nicely formatted latex. In a previous post I showed how to use one of them to display regression tables on the fly. In this post I summarise what types of R object each of the major packages can deal with.

Unsurprisingly, there’s quite some variation…
Continue reading R to Latex packages: Coverage

Tools for making a paper

Since it seems to be the fashion, here’s a post about how I make my academic papers.
Actually, who am I trying to kid? This is also about how I make slides, letters, memos and “Back in 10 minutes” signs to pin on the door. Nevertheless it’s for making academic papers that I’m going to recommend this particular set of tools.

I use the word make deliberately because I’m thinking of ‘academic paper’ broadly, as the sum of its words, analyses, tables, figures, and data. In this sense, papers can contain their own replication materials and when they do it should be possible in a single movement to rerun a set of analyses and reconstruct the paper that reports them.

To get anywhere near that goal, I use a mix of latex, its newer bibliography system biblatex, and the R package knitr. Also, I use a Mac, though that won’t make very much difference to the exposition.
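As a sketch of what that “single movement” might look like (the filenames here are hypothetical, not my actual setup): a two-rule Makefile that knits the knitr source to rerun the analyses, then lets latexmk handle however many latex and biber passes biblatex needs.

```makefile
# Hypothetical filenames: paper.Rnw is the knitr source, refs.bib the bibliography
paper.pdf: paper.Rnw refs.bib
	Rscript -e 'knitr::knit("paper.Rnw")'  # reruns the R chunks, writes paper.tex
	latexmk -pdf paper                     # repeats latex/biber runs until stable
```

Typing `make` then rebuilds the paper only when the source or the data-bearing bibliography has changed, which is most of what “rerun and reconstruct” means in practice.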

Here’s how this currently works…
Continue reading Tools for making a paper

Quantifying the international search for meaning

Inspired by Preis et al.’s article Quantifying the advantage of looking forward, recently published in Scientific Reports (one of Nature Publishing Group’s journals), I wondered if similar big-data web-based research methods might address a question even bigger than how much different countries wonder about next year. How about the meaning of life? Who is searching for clarification about the meaning of life? And how is that related to the more obvious life task of getting richer?
Continue reading Quantifying the international search for meaning

No more ascii-art

At least five R packages will turn your regression models into pretty latex tables: texreg, xtable, apsrtable, memisc, and stargazer. This is very nice if you happen to be a latex document or its final reader, but it’s not so great if you’re making those models to start with.

What if you wanted to see these as you were working on them? In particular what if you wanted to see all your models lined up as if they were already Table 4 of the Masterwork Yet To Be Named that is only now slowly taking shape in your mind?
Continue reading No more ascii-art

On the use and abuse of weasels in science journalism

Dean Burnett writes a column in the Guardian, sometimes about science but more entertainingly on pseudo-, wannabe-, and not-actually-science. Most of the time this is good BS-shovelling fun and I recommend it. Unfortunately today we get some ill-considered overreach under the guise of shovelling.

The subject is a silly equation purporting to define how depressing any day of the year is, and thereby to identify the most depressing one. It is sufficiently silly that it doesn’t deserve a link, has no redeeming features, and if you’ve not read it yet, you’re just lucky. He’s right. It is nonsense.

The arguments are more interesting, if a bit alarming. To put it bluntly: if they were sound they would sink all regression models about any social issue of any interest to anybody ever.
Continue reading On the use and abuse of weasels in science journalism

Constants in Logit scales

A little while ago I got a query about the calculation of the logit policy scales from Lowe et al. (2011). I thought it might be useful to repeat the answer slightly more publicly, in case anybody else was wondering. The pesky constants in that paper confuse people. Anyway, here’s the question:

In the article you give the formula as log(R+.5)-log(L+.5). I had assumed that in the formula ‘R’ and ‘L’ were the total number of sentences on each ‘side’ of a policy scale, and so consequently .5 is added to the total number of sentences in all the manifesto categories assigned to each side of a policy scale. However I was reading an article [… where they] seem to add .5 to each of the categories assigned to a policy scale and then also divide by the number of items used in the scale (their approximate formula [without proper subscripting] is: p = [(log(p_1+0.5)-log(p_2+0.5)) + \ldots + (log(p_3+0.5)-log(p_4+0.5))]/3, where p is a manifesto category). Consequently I’m slightly worried that I’ve misinterpreted how you calculate your scale.

OK, so the way to think about this scale is as follows…
Continue reading Constants in Logit scales