Tuesday, December 08, 2009
Academic publishing and Web 2.0
Monday, December 07, 2009
Truth or dare: on the pain of not being a relativist
Sometimes I wish I were a relativist.
If I were a relativist (or whatever fancy name they have now) then I don't think I would have tied myself into such epistemological knots as I did just a few hours ago. I was doing a bit of web research for my previous blog post on academic publishing and Web 2.0. Specifically, I was trying to find out about arXiv (the document-sharing platform used by physicists, mathematicians and the like); even more specifically, I was researching the claims that some physicists (including Nobel Prize winners) had been blocked simply because of who they were rather than the content of their articles. Terrible stuff.
One of the articles I referred to (by Frank Tipler) consisted of an excoriating attack on the weakness of the peer-review process, arguing that (1) nowadays 'genius' papers are likely to be reviewed by 'stupid' (his word) people, and (2) some topics will be dismissed out of hand because they go against current scientific orthodoxy. I found myself nodding in half agreement at these arguments while expressing a certain caution at his choice of words, which tended to be rather bellicose.
Then I read on.
It turns out that one of the topics he believed was off limits was intelligent design as espoused by Michael Behe and William Dembski, and he argued that these folk should have a voice. "OK," I thought, maybe he has a point. Researching Tipler a little further, I found that he has had a pretty glittering career in mathematical physics (Nature and Science publications). Then I read this in Wikipedia:
"In his controversial 1994 book The Physics of Immortality,[4][5][6] Tipler claims to provide a mechanism for immortality and the resurrection of the dead, consistent with the known laws of physics, provided by a computer intelligence he terms the Omega Point and which he identifies with God. The line of argument is that the evolution of intelligent species will enable scientific progress to grow exponentially, eventually enabling control over the universe even on the largest possible scale."
Err....
Tipler's article is fascinating but problematical, for four reasons. The first I have already dealt with above: should I believe the opinions of someone who holds what appear to me to be crackpot ideas? The second concerns the fact that the article is of uncertain provenance. Rather undermining the argument of my previous post, I kept asking myself "was this peer reviewed?". My suspicions were further aroused by the fact that (third reason) there was no reference section and (fourth reason) it contained typos. Surely in the title
"Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?"
"Insure" should be "Ensure", no? (OK, I guess he could have used either but "ensure" seems more traditional). Further, physicist Max Planck is referred to at one point as"Man Planck".
I do typos too, of course (I'm sure you're aware of this, as doubtless there are some in here), but this blog is an opinion piece, dashed off, rather than a deeply considered piece of writing. The more serious a piece is, the more typos matter.
Typos and the like aside, theoretical physics messes with people's heads because it relies on fiddling around with mathematics until it tells you something. The great thing about doing this is that it can lead you to some really surprising predictions (e.g. the quantum indeterminacies that underpin the Schrödinger's cat thought experiment). Unfortunately psychology seldom avails itself of such mathematical reasoning, which is possibly why most of its theories (if not its data) are almost indistinguishable from common sense. Maths does that; it's not that these guys necessarily believe their theories. This kind of jiggery-pokery leaves belief far behind: the maths tells them that it must be so, even if what it is telling them is weirder than the worst acid trip. Physicists are in this way as much a slave to their equations as the "computer says no" benefits operative. Of course, the other way of doing it is to simply start with a random belief -- that God is made from cheese, say -- and prove this as an ineluctable fact by similar mathematical jiggering and pokering. Which approach Tipler used is hard to judge, though my money is on the latter.
It's not just theoretical physics, though: determining the truth is a tricky task. In many ways science makes things easier (no, really) because it provides (more or less) an agreed-upon framework for testing hypotheses. And in much of my own domain -- psychology -- I can usually make some kind of judgement as to whether a particular hypothesis is supported by the data by examining the results sections of academic papers. But on some of the stuff, I haven't a clue. I've tried reading some of the work on game theory -- the really heavy mathematical stuff -- and I'm just not equipped to judge. Likewise theoretical physics; likewise pretty much anything outside my narrow domain of expertise.
So what do I do? I do what everyone does; I rely on (a) authority and (b) consensus.
For (a) I have a few folk whose views I hold in high esteem. I know Richard Dawkins isn't everyone's cup of tea, but I have a deep-seated admiration for his single-mindedness, his powers of explanation and (sharp intake of breath) his humility (honestly). (I think I also like him because his voice reminds me of Oliver Postgate of Clangers and Bagpuss fame, which is why I think Charlie Brooker -- whose views I also admire, though not on issues such as these -- describes Dawkins as "looking and sounding exactly like Professor Yaffle"; Yaffle, a bookend carved into the shape of a woodpecker, was voiced by Postgate.) The philosopher Daniel Dennett is someone else whose opinions I take seriously. I don't blindly follow them, of course, but in certain areas I will follow them somewhat myopically.
For (b), well, everyone does this, don't they, at least in some areas? And don't we keep hearing, with reference to global warming, about the 'scientific consensus'? Well, if consensus were what mattered, the scientific consensus 60 years ago was that plate tectonics (or continental drift, as it was then known) was nonsense, leaving its chief proponent (Alfred Wegener) an object of ridicule among the scientific community. Not that I am a climate change denier*, of course; I just point out that one era's consensus is another era's pseudoscience (phlogiston, anyone?).
So here I am in an epistemological knot, not knowing what to believe. If I were a relativist I would be untroubled: if there's no such thing as the truth then there's no need to be concerned when I can't lay an easy hand on it. But is anyone a relativist, really? I had a colleague, a sociologist, who used to refer to himself as a "nine-to-five relativist". Relativism was his day-to-day stock-in-trade: he wrote papers about it and used it as an interpretive framework for his academic research, which was on the social construction of learning in the planarian flatworm [!] (he also smoked a pipe). But when he was driving home and saw a red light he would put his foot on the brake: traffic signals might be socially constructed, but he clearly wasn't going to put his life on the line testing his own world view.
He might have been a nine-to-five relativist but I’m a 24/7 realist and as a result the truth always bothers me, whether it’s the true location of my door keys or more arcane philosophical truths. The truth hurts, that’s for sure, but its absence hurts even more.
*The word "denier" is a funny one. If you look it up it most commonly refers to a measurement of textiles. Female readers will be most familiar with it as a measure of the density of what used to be called 'hosiery' in the department stores of my childhood. With this interpretation I advance a new product with the following strapline "climate change denier: tights that keep your legs cool as the world heats up."
Tuesday, September 22, 2009
One hand clapping: when you should and shouldn't share ideas
Why is so much educational 'theory' so damn touchy-feely? I hate that. Educationalists continually talk about knowledge being 'negotiated' or 'co-constructed', and about fostering learning 'communities', and so on. All of which, I'm sure, makes sense from the point of view of learning; I for one am glad that we have moved away from the hostile 'drill and practice' approach that democratised learning to the extent that it even ignored differences between species. (More than this, in fact, as some of the Behaviorist models of learning were based on animals from a different biological order in the case of rats, a different biological class in the case of pigeons, and even a different phylum in the case of the sea slug Aplysia. Sadly there were no comparisons from a different kingdom, though mushroom learning might have been interesting.)
So there's nothing inherently objectionable about the ideas, but something seems to happen when they get taken up and disseminated by the university learning-and-teaching contingent. Somehow it all becomes a bit happy-clappy: take up thy tambourine and teach (or more likely 'support learning', as teaching is seen as being all a bit too Aplysia).
On this new touchy-feely view of learning, students are always motivated to learn, they are perfectly happy to cooperate with one another, and any observed failing is simply due to a failure of the educators to present the information in the correct way (there are shades here of the Nuremberg funnel, which I always seem to be banging on about). I'm currently embroiled in the early stages of writing an A-level textbook and I'm amazed at the way that the pedagogic devices -- you know the kind of thing: interim summaries, critical thinking questions and the like -- seem to be valued more highly than the content of the book, you know, what the book is actually supposed to be about. I guess this isn't too surprising, as it is easier to market a book by listing all of the devices it contains (including websites, multiple-choice question banks and, doubtless soon, Twitter feeds) than on how it reads. This is probably also partly to do with the fact that students don't choose the books, the tutor does, and the tutor won't have read the book; so the longer the list of features, and the more labour-saving they are with respect to the tutor's time, the more likely the book is to be chosen.
Weird but true.
Here's a quote on education from none other than Thomas Jefferson, which seems to me to sum up this happy-clappy educational philosophy.
“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”
It's certainly a nice idea. The implication is that sharing ideas is a non-zero-sum activity. If I have an apple and I give it to you, I no longer have the apple: the number of apples is fixed. Information, however, is different: when you share an idea it proliferates. This crucial difference could well underlie the belief that the sharing of ideas is less problematical than the sharing of apples (or other finite resources), making learning communities not only possible but likely.
The question is, however, whether Jefferson is correct. Here are two reasons why he might not be.
First, ideas and information are usually tied to resources. If I share the location of my favourite blackberry patch I do not lose the information, but I may well lose the blackberries. Second, and perhaps more importantly, when I give you an idea I am not just donating information but also the time and effort that went into acquiring it. This is fine so long as you give me ideas in return (or pay me, as is done with professional teachers). But would Jefferson repeatedly give away ideas? Would he permit people to light their tapers at his indefinitely? Or would he eventually say "FFS, light your own taper, you lazy........"?
Here's the point: if all information is tied to resources in one of the above two ways, then information sharing is susceptible to the free-rider problem, which will lead to a wariness about sharing information. This is, of course, entirely theoretical; is there any data to support it? Well, there is certainly data showing that people often soft-pedal in apparently cooperative environments (so-called social loafing), but more needs to be done in what I think is an important area. People will share ideas, of course; at this very moment I am sharing ideas. One big reason why people will share ideas is if they own them: they are my ideas rather than just ideas that I happen to be in possession of. People seem to love sharing their opinions and experiences (witness Twitter), possibly because it increases status and prestige. Sharing your ideas can also influence people such that they start acting in ways that change the world in directions concordant with your interests. Essentially you seize partial control of their nervous systems in order that they work for you. A good test of this is Jefferson himself. He shared his ideas with others, gained massive prestige and influence, and doubtless changed the world to fit his own vision. The fact that we quote and venerate his ideas nearly 200 years after his death is testament to the power of those ideas. This is the hidden payoff of sharing.
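To make the free-rider worry concrete, here is a minimal sketch of a standard public-goods game (the payoff numbers are entirely my invention): everyone enjoys an equal share of the pooled benefit of whatever ideas get shared, but only the sharers pay the cost of having acquired them.

```python
# Toy public-goods model of idea sharing: a sketch with made-up numbers,
# not a model anyone has fitted to data.

def payoff(i_share, others_sharing, n_players, benefit=3.0, cost=2.0):
    """My payoff: an equal share of the pooled benefit of all shared ideas,
    minus my own acquisition cost if I am one of the sharers."""
    total_sharers = others_sharing + (1 if i_share else 0)
    return (benefit * total_sharers) / n_players - (cost if i_share else 0.0)

n = 10
print(payoff(True, 9, n))   # I share too:  3.0 * 10/10 - 2.0 = 1.0
print(payoff(False, 9, n))  # I free-ride:  3.0 *  9/10 - 0.0 = 2.7
```

With these (invented) numbers, the private slice of the benefit my own sharing adds (benefit/n = 0.3) is smaller than my cost (2.0), so free-riding pays whatever the others do -- which is exactly the wariness predicted above.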
I was going to finish off with a cool little idea which specifies in some detail what factors lead to sharing and which do not, but I have decided that it is so cool that I want to keep it to myself for a while. I mean, I thought of it, and I don't want you to steal it.
Bye
Friday, September 04, 2009
A Sorry Tail
Thursday, April 30, 2009
Tweet flu
Wednesday, April 15, 2009
Why you don't need to keep close friends close
Thursday, April 02, 2009
Academic journals are killing science
I don't particularly like thinking with my gut either, but in this blog I am going to give my gut its voice. There is doubtless much research and argument relating to what I am going to say, but right now I'm too busy or lazy to look it up. So without further ado, I can hand you over to my esteemed colleague Mr Gut.
It can't just be me that is becoming increasingly frustrated with the whole process of submitting papers to academic journals. You faff about putting the manuscript in the correct format (which always seems to be different for different journals, even those within the same topic area). You put the figures at the end, or in the text; you save the figures as .tiff files, or just as Word files; you anonymise the paper (or not); etc. etc. Then you send it off (thankfully via email or upload now -- some things have improved) and then you wait. You wait while the editor finds some people to review the paper, gets agreement from potential reviewers that they are happy to do it, and then sends out copies (or more likely emails a link to a pdf) to the reviewers.
Sometime between three and six months after you originally submitted the article, you get back the reviews and are told whether or not the paper is accepted and, if it is, what revisions are required. If it is accepted it might come out a year later.
So let's summarise. If all goes well it might take a minimum of a year between actually doing the research and getting the paper into print. If it all goes less well it can take much longer. For example, a colleague and I submitted a paper in 2001 (the research was done in 2000) to Cognitive Science. They rejected it without review because it wasn't interdisciplinary enough. It then went off to the Journal of Educational Psychology, which required too many fundamental changes. We were then invited to submit it to a journal called Discourse Processes by the editor himself. This we did in 2003. They wanted changes, we made the changes, they rejected the manuscript, and we submitted it to the International Journal of Human-Computer Studies. They required revisions, we did them, they accepted, job done.
The paper was published in 2007, six years after we submitted it to the first journal and seven years after we did the experiments. Fortunately for us the paper was rather theoretical and wasn't something that dated, but imagine if it had been a paper on social media. We would have had a paper on discussion lists and MUDs published in the age of Twitter and Facebook -- potentially still relevant, but hardly current.
Not all papers take this long, of course, but even a two-year gap between study and publication is unacceptable; this is probably one reason why many academics are turning to blogs and the like to get their ideas into the public domain. I am fine with this. If it is in an area I know well, the absence of peer review causes me no problem at all: I can tell for myself whether the arguments and data are good or bad. But for those who cannot do this it is important that the article has independent verification of quality -- and it has to be genuinely independent. Anyone can get their academic chums to give their blog-paper the thumbs-up and therefore the specious patina of respectability.
So there must be a way of speeding up the review process whilst still offering quality control.
In Wikinomics the authors discuss the case of particle physicists who upload their manuscripts to a wiki, which is then edited by collaborators and finally published, a process that takes weeks, if not mere days, to complete. This is particularly important in some of the hard sciences (high-energy physics, genetics, etc.) where things move so quickly, but I also think it is important in many other academic disciplines (such as social media research). The question is how to motivate the 'reviewers'. They could be rewarded by becoming named authors on the paper, but then there is the problem that people might develop a pro-publication bias in order to rack up publications. The motivation should really be that participating in the reviewing process allows you to submit your own articles to the journal: everyone would surely benefit from getting their papers turned round in a tenth or so of the usual time, so academics should be falling over themselves to obtain membership of this club by performing reviews.
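To make the club idea concrete, here is a minimal sketch of how such a review-for-submission scheme might be book-kept; the two-reviews-buy-one-submission exchange rate is entirely my invention.

```python
# Toy ledger for a "review to submit" journal club: a hypothetical scheme,
# with a made-up exchange rate of two completed reviews per submission.

REVIEWS_PER_SUBMISSION = 2

class ReviewLedger:
    def __init__(self):
        self.credits = {}  # member name -> completed reviews banked

    def log_review(self, reviewer: str) -> None:
        """Credit a member for one completed review."""
        self.credits[reviewer] = self.credits.get(reviewer, 0) + 1

    def can_submit(self, author: str) -> bool:
        return self.credits.get(author, 0) >= REVIEWS_PER_SUBMISSION

    def spend_submission(self, author: str) -> None:
        """Spend banked review credits on one submission."""
        if not self.can_submit(author):
            raise ValueError(f"{author} must review more papers before submitting")
        self.credits[author] -= REVIEWS_PER_SUBMISSION

ledger = ReviewLedger()
ledger.log_review("alice")
ledger.log_review("alice")
ledger.spend_submission("alice")  # alice has earned the right to submit once
```

The point of an exchange rate like this is that every submission generates the reviewing labour it consumes, so the queue of unreviewed papers cannot grow faster than the pool of reviewers.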
Now that my gut has been given its head (as it were), the rational part of me (to commit that egregious Cartesian fallacy) would like to ask anyone reading this: what do you know about attempts to do this, especially outside the physical sciences? Are there any problems (one can imagine all kinds of game-theoretic problems occurring)? But does it work? It certainly should.