Monday, December 07, 2009

Truth or dare: on the pain of not being a relativist

Sometimes I wish I were a relativist.

If I were a relativist (or whatever fancy name they have now) then I don't think I would have tied myself into such epistemological knots as I did just a few hours ago. I was doing a bit of web research for my previous blog post on academic publishing and Web 2.0. Specifically, I was trying to find out about arXiv (the document-sharing platform used by physicists, mathematicians and the like); even more specifically, I was researching the claims that some physicists (including Nobel Prize winners) were blocked simply because of who they were rather than the content of their articles. Terrible stuff.

One of the articles I referred to (by Frank Tipler) consisted of an excoriating attack on the weakness of the peer-review process, arguing that (1) nowadays 'genius' papers are likely to be reviewed by 'stupid' (his words) people, and (2) some topics will be dismissed out of hand because they go against current scientific orthodoxy. I found myself nodding in half agreement at these arguments while expressing a certain caution at his choice of words, which tended to be rather bellicose.

Then I read on.

It turns out that one of the topics he believed was off limits was intelligent design as espoused by Michael Behe and William Dembski, and he argued that these folk should have a voice. "OK," I thought, maybe he has a point. Researching Tipler a little further, it turns out that he has had a pretty glittering career in mathematical physics (Nature and Science publications). Then I read this in Wikipedia:

"In his controversial 1994 book The Physics of Immortality,[4][5][6] Tipler claims to provide a mechanism for immortality and the resurrection of the dead, consistent with the known laws of physics, provided by a computer intelligence he terms the Omega Point and which he identifies with God. The line of argument is that the evolution of intelligent species will enable scientific progress to grow exponentially, eventually enabling control over the universe even on the largest possible scale."

Err....

Apparently, however, his views were supported (to some extent, at least) by David Deutsch, the parallel-universes guy, who is pretty well respected. But then some of Deutsch's ideas can be a little left field as well. But then, isn't all theoretical physics left field nowadays?

Tipler's article is fascinating but problematical for four reasons. The first I have already dealt with above: should I believe the opinions of someone who believes what appear to me to be crackpot ideas? The second concerns the fact that the article is of uncertain provenance. Rather undermining the argument of my previous post, I kept asking myself "was this peer reviewed?". My suspicions were further aroused by the fact that (third reason) there was no reference section and (fourth reason) it contained typos. Surely in the title

"Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?"

"Insure" should be "Ensure", no? (OK, I guess he could have used either but "ensure" seems more traditional). Further, physicist Max Planck is referred to at one point as"Man Planck".

I do typos too, of course (I'm sure you're aware of this, as doubtless there are some in here), but this is an opinion piece, dashed off, rather than a deeply considered piece of writing. The more serious a piece is, the more typos matter.

Typos and the like aside, theoretical physics messes with people's heads because it relies on fiddling around with mathematics until it tells you something. The great thing about doing this is that it can lead you to some really surprising predictions (e.g. the quantum indeterminacies that underpin the Schrödinger's cat thought experiment); unfortunately psychology seldom avails itself of such mathematical reasoning, which is possibly why most of its theories (if not its data) are almost indistinguishable from common sense. It's not that these guys necessarily believe their theories; the maths does that for them. This kind of jiggery-pokery leaves belief far behind; the maths tells them that it must be so, even if what it is telling them is weirder than the worst acid trip. Physicists are in this way as much a slave to their equations as the "computer says no" benefits operative. Of course, the other way of doing it is to simply start with a random belief -- that God is made from cheese, say -- and prove this as an ineluctable fact by similar mathematical jiggering and pokering. Which approach Tipler used is hard to judge, though my money is on the latter.

It's not just theoretical physics, though: determining the truth is a tricky task. In many ways science makes things easier (no, really) because it provides (more or less) an agreed-upon framework for testing hypotheses. And in much of my own domain -- psychology -- I can usually make some kind of a judgement as to whether a particular hypothesis is supported by the data by examining the results sections of academic papers. But with some of it I haven't a clue. I've tried reading some of the game theory literature -- the really heavy mathematical stuff -- and I'm just not equipped to judge. Likewise theoretical physics, likewise pretty much anything outside my narrow domain of expertise.

So what do I do? I do what everyone does; I rely on (a) authority and (b) consensus.

For (a) I have a few folk whose views I hold in high esteem. I know Richard Dawkins isn't everyone's cup of tea, but I have a deep-seated admiration for his single-mindedness, his powers of explanation and (sharp intake of breath) his humility (honestly). (I think I also like him because his voice reminds me of Oliver Postgate of Clangers and Bagpuss fame, which is why I think Charlie Brooker -- whose views I also admire, but not on issues such as these -- describes Dawkins as "looking and sounding exactly like Professor Yaffle"; the aforementioned Yaffle, a bookend carved into the shape of a woodpecker, was voiced by Postgate.) The philosopher Daniel Dennett is someone else whose opinions I will take seriously. I don't blindly follow them, of course, but in certain areas I will follow them somewhat myopically.

For (b), well, everyone does this, don't they, at least in some areas? And don't we keep hearing with reference to global warming about the 'scientific consensus'? Well, if consensus were what mattered, the scientific consensus 60 years ago was that plate tectonics (or continental drift as it was known then) was nonsense, leaving its originator (Alfred Wegener) an object of ridicule among the scientific community. Not that I am a climate change denier*, of course; I just want to point out that one era's consensus is another era's pseudoscience (phlogiston anyone?).

So here I am in an epistemological knot, not knowing what to believe. If I were a relativist I would be untroubled: if there's no such thing as the truth then there's no need to be concerned when I can't lay an easy hand on it. But is anyone a relativist, really? I had a colleague, a sociologist, who used to refer to himself as a "nine-to-five relativist". Relativism was his day-to-day stock in trade; he wrote papers about it and used it as an interpretive framework for his academic research, which was on the social construction of learning in the planarian flatworm [!] (he also smoked a pipe). But when he was driving home and saw a red light he would put his foot on the brake: traffic signals might be socially constructed but he clearly wasn't going to put his life on the line testing his own world view.

He might have been a nine-to-five relativist but I’m a 24/7 realist and as a result the truth always bothers me, whether it’s the true location of my door keys or more arcane philosophical truths. The truth hurts, that’s for sure, but its absence hurts even more.


*The word "denier" is a funny one. If you look it up it most commonly refers to a measurement of textiles. Female readers will be most familiar with it as a measure of the density of what used to be called 'hosiery' in the department stores of my childhood. With this interpretation I advance a new product with the following strapline "climate change denier: tights that keep your legs cool as the world heats up."

Academic publishing and Web 2.0

Which of the following is true and which is false?

(1) An academic article is only as good as the journal in which it is published
(2) A journal is only as good as the academic articles that it publishes

Of the two, (1) seems to me to be obviously false. Of course researchers and their research can gain kudos from being published in a high-impact-factor journal (Science, Nature, and so on), but it is that "only as good as" that sticks in the craw: there are independent factors that contribute to the quality of a piece of research other than where it is published. Naturally there will be a high correlation between some independent assessment of "research quality" and the impact factor of the journal in which it is published, but the correlation will not be perfect (there are doubtless very good papers published, for whatever reason, in lower-impact-factor journals, and doubtless also some dross published in the "good" ones).
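To make that last point concrete, here is a toy simulation -- entirely made-up numbers, including the hypothetical correlation of roughly 0.7, nothing measured from real journals -- showing that even when placement tracks quality quite strongly, strong papers regularly land in low-impact journals and weak ones in high-impact ones:

    # A toy sketch with invented numbers: quality and journal impact are
    # correlated (about 0.7 here) but far from identical.
    import random

    random.seed(0)
    papers = []
    for _ in range(10000):
        quality = random.gauss(0, 1)
        # Placement tracks quality, plus noise (reviewing luck, fashion, timing...).
        journal_impact = 0.7 * quality + 0.7 * random.gauss(0, 1)
        papers.append((quality, journal_impact))

    good_in_low = sum(1 for q, j in papers if q > 1 and j < 0)    # strong papers, lowly homes
    weak_in_high = sum(1 for q, j in papers if q < -1 and j > 0)  # dross in the "good" journals
    print(good_in_low, weak_in_high)  # both counts come out comfortably non-zero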

So now let's examine statement (2). This seems to me, at least, to be entirely true, at least in the long term. If the editors of Nature, say, suddenly started to publish low-grade research then pretty soon fewer people would read it, it would thus have less influence and its impact factor would tumble.

But Nature, Science, Cell and the like are unlikely to start publishing rubbish so what am I talking about, where is this thought experiment going?

It has seemed to me for quite a while that the whole nature of academic publishing is the wrong way round. Having written up their experiment(s), researchers will usually strive to get the paper into the highest-impact-factor journal they can, given their discipline, topic area, methodology and the like. This 'aim high' strategy sometimes works, but often the paper will be rejected (either before or after review) and the researchers will then move down the "quality" ladder until a journal accepts the paper (or they give up!).

But this seems wrong because, as the answer to the above conundrum seems to suggest, ultimately journals have more to gain from accepting good articles than researchers have from publishing in good journals. So it is the journals that should be soliciting high-quality articles from the researchers rather than the researchers going cap in hand to the journals. (Note that I am using "should" in an ideal-world kind of way here, rather than in a real-world kind of way -- more of which later.)

This seems to happen in some scientific disciplines. I was interested to read here a story concerning the first experiments conducted on the Large Hadron Collider (LHC). The data were collected on Monday 23rd of November 2009, the paper was written up by the following Saturday, and three days after that it was accepted by the European Journal of Physics. Now, it has to be said that the impact factor of this journal is not very high -- about 1.7, compared to around 30 for Nature and Science; read into that what you will -- but the point I am making is that it took just over a week to go from laboratory to "in press". Unbelievable if you compare it to psychology (my field), where the same process is likely to take a year or more. (And that is assuming the paper is accepted by the first journal with only minor revisions required by the reviewers.)

How did this happen? Well, physicists can upload papers to a server called arXiv (pronounced "archive", as the X is supposed to be the Greek letter Chi), where they are moderated by other physicists, which can lead to the authors revising the manuscript (or sometimes the moderators get in on the act of revising too). Whatever, the process is much more rapid than the glacial act of peer review. How did the European Journal of Physics get in on the act? Well, the article doesn't say, but the implication is that the editors visited arXiv and decided to publish the article. Why? Because for a relatively lowly journal, picking up on the first data to come out of the LHC will gain it a great deal of publicity, which may, in the long term, lead to greater influence, subscriptions, money and the like.

This is exactly the process that we've seen in other industries such as popular music. In the old days (pre-internet, I mean) a band would scrape together some cash to record a demo tape which they would send to the A&R departments of various record companies in the hope that one of them would give it a listen, like it and sign them. This may still happen, but many artists and record companies are forgoing this process. The band puts their music on Myspace or wherever and waits for the record companies to find them.

The world has changed but academic publishing is still in the era of cassette tapes and jiffy bags. It is actually worse than this. Pre-Web 2.0, musicians could submit their cassettes to as many record companies as they liked to maximise their chances of getting heard, maybe hoping that they could stimulate a bidding war if more than one company was interested in signing them. When you submit an article to a journal you have to sign a form (electronic, thankfully) stating that your manuscript has not been and will not be submitted to another journal: the journal has exclusive rights to review your paper.

Do we want bidding wars between journals? Won't that harm science in the long run? Maybe, but I guess the future is a world without journals as we understand them today. Quite a few influential papers are 'published' in arXiv and never end up in a journal. But, you might argue, if these articles haven't been peer reviewed how can we guarantee academic quality? Well, you can't of course, but then you never could. I will only refer you again to Alan Sokal's paper, which was accepted for publication in a high-profile discourse journal despite being peer reviewed and despite being deliberate nonsense, and to this interesting if occasionally borderline unhinged article by physicist Frank J. Tipler, and move on. It seems to me that the community will provide far better checks and balances on academic quality than three anonymous reviewers who (usually) only get one bite at the cherry.

ArXiv isn't perfect, and there have been some claims that the administrators have blacklisted some scientists from posting on arXiv simply because they have expressed views that run counter to current scientific dogma; but such problems should be relatively easy to solve by, for example, expanding the number and diversity of administrators, or by having papers submitted anonymously in the first instance, ensuring acceptance is based upon the quality of the research rather than on judgements made ad hominem. (My feeling is that this also happens in traditional journals, btw, as well as its opposite: low-quality articles gaining acceptance simply as a result of their being authored by someone with a lot of intellectual clout.)

If we as social scientists want our research to be truly current, not two or more years out of date, then we need something like arXiv; academia needs to catch up with the Web 2.0 revolution.








Tuesday, September 22, 2009

One hand clapping: when you should and shouldn't share ideas

Why is so much educational 'theory' so damn touchy-feely? I hate that. Educationalists continually talk about knowledge being 'negotiated' or 'co-constructed' and about fostering learning 'communities' and so on. All of which, I'm sure, makes sense from the point of view of learning; I for one am glad that we have moved away from the hostile 'drill and practice' approach that democratised learning to the extent that it even ignores differences between species. (More than this, in fact, as some of the Behaviourist models of learning were based on animals from a different biological order in the case of rats, a different biological class in the case of pigeons, and even a different phylum in the case of the sea slug Aplysia. Sadly there were no comparisons from a different kingdom, though mushroom learning might have been interesting.)

So there's nothing inherently objectionable about the ideas, but something seems to happen when the ideas get taken up and disseminated by the university learning and teaching contingent. Somehow it all becomes a bit happy-clappy: take up thy tambourine and teach (or more likely 'support learning', as teaching is seen as being all a bit too Aplysia).

From this new touchy-feely view of learning, students are always motivated to learn, they are perfectly happy to cooperate with one another, and any observed failing is simply due to a failure of the educators to present the information in the correct way (there are shades here of the Nuremberg funnel, which I always seem to be banging on about). I'm currently embroiled in the early stages of writing an A-level textbook and I'm amazed at the way that the pedagogic devices -- you know the kind of thing, interim summaries, critical thinking questions and the like -- seem to be valued more highly than the content of the book, you know, what the book is actually supposed to be about. I guess this isn't too surprising, as it is easier to market a book simply by listing all of the devices it contains (including websites, multiple choice question banks and, doubtless soon, Twitter feeds) rather than on how it reads. This is probably also partly to do with the fact that students don't choose the books, the tutor does, and the tutor won't have read the book; so the longer the list of features and the more labour-saving they are with respect to the tutor's time, the more likely the book is to be chosen.

Weird but true.

Here's a quote from none other than Thomas Jefferson on education which seems to me to sum up this happy-clappy educational philosophy.

“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”

It's certainly a nice idea. The implication is that sharing ideas is a non-zero-sum activity. If I have an apple and I give it to you, I no longer have the apple: the number of apples is fixed. Information, however, is different: as you share an idea it proliferates. This crucial difference could well underlie the belief that the sharing of ideas is less problematical than the sharing of apples (or other finite resources), making learning communities not only possible but likely.

The question is, however, whether Jefferson is correct. Here are two reasons why he might not be.

First, ideas and information are usually tied to resources. If I share the location of my favourite blackberry patch I do not lose the information but I may well lose the blackberries. Second, and perhaps more importantly, when I give you an idea I am not just donating information but also the time and effort that went into acquiring it. This is fine so long as you give me ideas in return (or you pay me, as is done with professional teachers). But would Jefferson repeatedly give away ideas? Would he permit people to light their tapers at his indefinitely? Or would he eventually say "FFS, light your own taper, you lazy........"?

Here's the point: if all information is tied to resources in one of the above two ways, then information sharing is susceptible to the free-rider problem, which will lead to a wariness about sharing information. This is, of course, entirely theoretical; is there any data to support it? Well, there is certainly data showing that people often soft-pedal in apparently cooperative environments (so-called social loafing), but more needs to be done in what I think is an important area. People will share ideas, of course; at this moment I am sharing ideas. One big reason why people will share ideas is if they own them: they are my ideas rather than just ideas that I happen to be in possession of. People seem to love sharing their opinions and experiences (witness Twitter), possibly because it increases status and prestige. Sharing your ideas can also influence people such that they start acting in ways that change the world in directions concordant with your interests. Essentially you seize partial control of their nervous system in order that they work for you. A good test of this is Jefferson himself. He shared his ideas with others and gained massive prestige and influence and doubtless changed the world to fit his own vision. The fact that we quote and venerate his ideas nearly 200 years after his death is testament to the power of those ideas. This is the hidden payoff of sharing.
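For what it's worth, here is a minimal sketch of that free-rider logic -- my own toy numbers and assumptions, not a model of any real study: sharing an idea costs the sharer something (the effort of having acquired it), every other agent gets the benefit for free, and agents drift towards whichever strategy is paying better. Without a side payoff such as prestige, sharing erodes; add a big enough reputation bonus and it survives:

    # Toy free-rider dynamic: entirely hypothetical parameters.
    def run(n_agents=100, n_sharers=50, benefit=0.5, cost=1.0, reputation=0.0, rounds=200):
        sharers = n_sharers
        for _ in range(rounds):
            # Everyone enjoys the ideas of every *other* sharer.
            payoff_sharer = benefit * (sharers - 1) - cost + reputation
            payoff_rider = benefit * sharers
            # Crude imitation: one agent per round drifts to the better-paying strategy.
            if payoff_rider > payoff_sharer and sharers > 0:
                sharers -= 1
            elif payoff_sharer > payoff_rider and sharers < n_agents:
                sharers += 1
        return sharers

    print(run(reputation=0.0))  # -> 0: with no hidden payoff, sharing collapses
    print(run(reputation=2.0))  # -> 100: a prestige bonus keeps everyone sharing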

I was going to finish off with a cool little idea which specifies in some detail what factors lead to sharing and which do not, but I have decided that it is so cool that I want to keep it to myself for a while. I mean, I thought of it, and I don't want you to steal it.

Bye


Friday, September 04, 2009

A Sorry Tail

A recent drive from Leeds down to Hatfield had me reflecting on two things. First, why I wanted to spend time in Hatfield; I lived there or thereabouts for three years when doing my undergraduate degree and feel that I have therefore served my sentence. My other reflection -- and the one that is the subject of this blog -- is how to reduce driver aggression. There is, of course, loads of psychobabble relating to driver aggression or 'road rage', much of it deriving from pseudo-Freudian 'theories' of unfulfilled sexual desire (and the like).

Penis size notwithstanding, it occurred to me that at least one reason for this aggression is related to the kind of signals that you can and cannot make to other drivers. It is relatively easy to thank someone, for example, for letting you out of a junction or into a motorway lane. From the front this can be done by flashing the headlights, or from the rear by either using the hazard warning lights (something I first encountered in Cape Town) or by doing that funny thing where you quickly flick the indicators left then right then left again. Either way, the meaning is clear: "Thanks", "Cheers", "You're a good 'un".

You can send negative signals too. By hanging onto your light flasher for a split second longer than for the "thank you" signal you can say "get out of my way". Horns can be blasted and fingers held aloft to say "Twat" and its many variants.

But the hardest word for your car to say is sorry. Weird that, particularly in the UK, where we seem to spend most of our time uttering those five letters as we negotiate our way through the social milieu.

So here's the big idea. A sorry light. This could simply be a light on the tail of the car (as it seems to me that, for some reason, you nearly always want to say sorry to the person behind you, possibly as a result of cutting them up). I'm not sure what colour it would be but its function should be clear: flashing it sends an apology to the driver behind. You could even imagine subtle differences in flashing patterns evolving: a quick flash could be used when you just nip in front of someone, infringing their personal space but not requiring them to touch their brakes. This would be the kind of sorry that means only a little more than the "excuse me" that you mumble when having to push through a gap in a queue at the airport in order to stop your kids losing their fingers on the baggage carousel. More elaborate flashes could be used when the driver behind has to take evasive action: stamping on the brakes, for example, or switching lanes. More like the kind of sorry you would say having airily put your cigarette out in someone's half-full can of coke. Which I did once. On a train to Hatfield, as it turns out.

Of course, as the biologist Amotz Zahavi taught us, any signalling system is open to misuse. I used to know someone who'd ride his bike through a crowded pedestrian area in the centre of Manchester elbowing people out of the way and each time shouting a cheery "Sorry!" Here the signal is fake; he didn't mean sorry in the true sense, all he meant was "Don't hit me" -- and it worked! The Zahavian problem is that if more and more people use a signal in this deceptive sense then fewer people will pay attention to it and ultimately the signal is ignored. How many times have you seen hazard warning lights used to signal a hazard rather than to excuse the fact that someone has parked on the pavement or in front of a fire station or something?

Zahavi argues that one way to preserve signal honesty is to make signals costly to the sender. Thus the sorry light could give the driver a painful but non-lethal electric shock when used, or maybe it could automatically text your name and address to a "mea culpa" list on the Internet. Or possibly, instead of flashing a light, pushing the sorry button could flash up a photograph of the transgressor in a sexually compromising position on the rear of the car. Actually, scratch that: there are already too many things for drivers to fiddle with as they drive without providing impetus for yet another.

But of course even these solutions could be exploited by masochists and exhibitionists, so maybe they should pay by being sent to Hatfield.

Sometimes a signal can cost too much.

Thursday, April 30, 2009

Tweet flu

There was a minor kerfuffle recently about Twitter's role in spreading misinformation about swine flu. BBC Radio 4's Media Show contained a section in which media expert Evgeny Morozov discussed Twitter's role in the spreading of this misinformation. The evidence, as it turns out, is incredibly scant. A few Tweets were cited on this programme, most of which sounded either like quotes from newspaper headlines or people being deliberately humorous (there is more on this here).

Is Twitter being used to spread misinformation? Undoubtedly yes, but that is not the important question: all media are used to spread lies as well as truth. The important question is how Twitter's truth-to-lies ratio compares to those of other media. And of course no one can answer this question, though I suspect it would turn out to be no different from the kind of discussions that you get on the bus. I suppose the speed with which tweets can proliferate from person to person could lead to misinformation being spread more rapidly and potentially create a panic.

But hey, who's panicking? According to an interview with, I think, the UK's Chief Medical Officer, people don't seem to be panicking (in the UK at least): GPs' phone lines are not being jammed by anxious callers; there has not (yet) been an overwhelming demand for face masks (although one of my PhD students saw someone wearing one in Sheffield today, but that could just be Sheffield).

Swine flu is not yet an epidemic; the spread of misinformation on Twitter is not yet an epidemic; but I am starting to worry that the spread of scare stories about new media has reached epidemic proportions.

From cancer to swine flu in just a few weeks.

Wednesday, April 15, 2009

Why you don't need to keep close friends close

In The Godfather, Michael Corleone apparently famously says "keep your friends close, and your enemies closer". I say 'apparently' because I've not seen any of The Godfather trilogy. No real reason, just other things to do. Happily, it turns out that the quote is apparently plagiarised from the Chinese general Sun-Tzu, who said it around 400 BC. (Again I say 'apparently', because I just Googled it and people lie on the Internet, apparently.)

Anyhow, it is a good quote and an interesting one. It suggests that one of the reasons for keeping people close to us is that we don't trust them. Is this the only reason why we wish to keep people close? I'm going to argue that it is, although I could be wrong. Take the titi monkey as described by Helena Cronin in a recent article on the battle of the sexes.  As she writes:

"Picture a pair a titi monkeys, husband and wife, in close embrace, their tails entwined, in sleep cuddled together, when awake always close preferring one another's company above that of all others."

It just seems so cute, so human, so much like being in love, so much like close friendships (with the possible exception in the latter case of cuddling together during sleep, but I might be displaying old-fashioned attitudes here). She then goes on to explain that the reason for this behaviour is that each party is protecting its investment. The male is making sure that no other male reproduces with his mate, whereas the female is making sure that her mate doesn't run off and shirk his childrearing responsibilities (male titis happen to invest a lot of effort rearing the kids). Titi 'love' -- and why not call it that? -- is fundamentally based on mistrust.

Keeping friends close, then, could simply be a way of ensuring that they will return our investment in them (emotional, material, etc.) rather than their going off and giving it to someone else. The rather wonderful Carl Bergstrom has, in fact, proposed that when friends 'hang out' together apparently wasting time, they are in fact keeping each other close; each making sure that the other isn't off hanging out with others.

As relationships mature, of course, we come to trust our friends and partners and we give them more freedom. As Sting so rightly sang, if you love someone set them free (which, as it turns out, he plagiarised from American novelist Richard Bach). Why might increased emotional closeness lead to our giving our friends and partners more freedom? The economist Russell Hardin might have the answer: we trust someone to the extent that their interests encapsulate ours. We believe that they would not betray us because betraying us would be to betray their own interests. A successful courtship -- whether that be romantic or becoming friends with someone -- is a process whereby we identify shared interests and maybe even 'grow together' in the sense that our interests become increasingly aligned and entwined. The closer the interests, the lower the likelihood that either party will defect on the other (see my earlier posts on homophily and trust).
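Here is a little toy illustration of that encapsulated-interest idea -- my own sketch, not Hardin's formalism: suppose how much your interests encapsulate mine can be crudely captured by a single weight w on your payoff, added to my own, in a standard prisoner's dilemma. Once w gets high enough, defecting on you simply stops paying, whatever you do:

    # Standard prisoner's dilemma payoffs (T > R > P > S); w is how much
    # I weight your payoff alongside my own -- a stand-in for 'encapsulated interest'.
    T, R, P, S = 5, 3, 1, 0

    def my_utility(me, you, w):
        table = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
                 ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
        mine, yours = table[(me, you)]
        return mine + w * yours

    def cooperation_is_dominant(w):
        # Is cooperating better for me whatever you choose to do?
        return (my_utility('C', 'C', w) > my_utility('D', 'C', w) and
                my_utility('C', 'D', w) > my_utility('D', 'D', w))

    for w in (0.0, 0.5, 0.7, 1.0):
        print(w, cooperation_is_dominant(w))
    # With these numbers cooperation becomes dominant once w exceeds 2/3:
    # the more your interests encapsulate mine, the less reason I have to defect.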

That's not to say that we can ever trust anyone 100%. No matter how perfectly my interests might overlap with yours, I'm still here in this body and you are still there in yours, and that is a fundamental conflict of interests that can never be fully resolved. But Michael Corleone or Sun-Tzu or whoever it was was right: you do not need to keep your friends close, because they are always close; you've chosen them because they have your interests at heart and, if you've done your job properly, you've planted in them the goal to act for you. Just like they have with you.

 

Thursday, April 02, 2009

Academic journals are killing science

In an interview, the astrophysicist and all-round clever chap Carl Sagan was asked a question that went beyond the current scientific data. After Sagan repeatedly pointed this out, the interviewer asked him to give a gut answer. Sagan famously replied "but I prefer not to think with my gut."

I don't particularly like thinking with my gut either, but in this blog I am going to give my gut its voice. There is doubtless much research and argument relating to what I am going to say, but right now I'm too busy or lazy to look it up. So without further ado, I can hand you over to my esteemed colleague Mr Gut.

It can't just be me that is becoming increasingly frustrated with the whole process of submitting papers to academic journals. You faff about putting the manuscript in the correct format (which always seems to be different for different journals, even those within the same topic area). You put the figures at the end, or in the text; you save the figures as a .tiff, or just as Word files; you anonymise the paper (or not); etc. etc. Then you send it off (thankfully via email or upload now -- some things have improved) and then you wait. You wait until the editor finds some people to review the paper, gets agreement from potential reviewers that they are happy to do it, and then sends out copies (or more likely emails a link to a pdf) to the reviewers.

Sometime between 3 and 6 months after you originally submitted the article you get back the reviews and are told whether or not the paper is accepted, and if it is accepted what revisions are required. If it is accepted it might come out a year later.

So let's summarise. If all goes well it might take a minimum of a year between actually doing the research and getting the paper into print. If it all goes less well it can take much longer. For example, a colleague and I submitted a paper in 2001 (the research was done in 2000) to Cognitive Science. They rejected it without review because it wasn't interdisciplinary enough. It then went off to the Journal of Educational Psychology, who required too many fundamental changes. We were then invited to submit it to a journal called Discourse Processes by the editor himself. This we did in 2003. They wanted changes, we made the changes, they rejected the manuscript, we submitted it to the International Journal of Human Computer Studies. They required revisions, we did them, they accepted, job done.

The paper was published in 2007, six years after we submitted it to the first journal and seven years after we did the experiments. Fortunately for us the paper was rather theoretical and wasn't something that dated, but imagine that it had been a paper on social media. We would have had a paper on discussion lists and MUDs published in the age of Twitter and Facebook -- potentially still relevant but hardly current.

Not all papers take this long of course, but even a two-year gap between study and publication is unacceptable; this is probably one reason why many academics are turning to blogs and the like to get their ideas into the public domain. I am fine with this. If it is in an area I know well, the absence of peer review causes me no problem at all. I can tell for myself whether the arguments and data are good or bad. But for those who cannot do this it is important that the article has independent verification of quality -- and it has to be independent. Anyone can get their academic chums to give their blog-paper the thumbs-up and therefore the specious patina of respectability.

So there must be a way of speeding up the review process whilst still offering quality control.

In Wikinomics the authors discuss the case of particle physicists who upload their manuscripts to a wiki, where they are edited by collaborators and finally published, a process that takes weeks if not days to complete. This is particularly important in some of the hard sciences (high-energy physics, genetics, etc.) where things move so quickly, but I also think it is important in many other academic disciplines (such as social media). The question is how to motivate the 'reviewers'. They could be rewarded by becoming a named author on the paper, but then there is the problem that people might develop a pro-publication bias in order to gain a publication. But the motivation should really be that participating in the reviewing process allows you to submit your own articles to the journal: everyone would surely benefit from getting their papers turned round in a tenth or so of the usual time, so academics should be falling over themselves to earn membership of this club by performing reviews.

Now that my gut has been given its head (as it were), the rational part of me (to commit that egregious Cartesian fallacy) would like to ask anyone reading this: what do you know about attempts to do this, especially in the non-physical sciences? Are there any problems (one can imagine all kinds of game-theoretic problems occurring)? But does it work? It certainly should.

Sunday, March 08, 2009

The Iago syndrome

I am probably the least qualified person to discuss Shakespeare, and my experience of 'live' Shakespeare is limited to one viewing of Othello (and that was the recent one with Lenny Henry playing the title role, FFS). But I was gripped by the psychological content of this play. Iago -- a man who would probably nowadays be diagnosed with a narcissistic or some other cluster B personality disorder -- manages to manipulate Othello into killing his wife, thus punishing Othello for (among other things) promoting Cassio to be his lieutenant over him (Iago). Iago's reputation for honesty (he is frequently referred to as 'honest Iago') is essential to his goal.

What I find interesting about this is not particularly the fact that one man can manipulate another in this way, but rather how psychologically true the play is to the everyday interior of the mind. Freud was undoubtedly a well-read man (and a good writer), so it amazes me that he never wrote specifically about this particular play. Like many classical scholars he probably thought that the Greeks and also the Romans had nailed all of the important psychological conflicts.

Psychiatrists have, however, identified a diagnostic category of mental illness that some call Othello syndrome, in which the sufferer displays pathological jealousy, frequently about a spouse or other romantic partner. (DSM-IV-TR, the diagnostic book used by many psychiatrists, identifies a similar disorder called 'delusional disorder -- jealous type'.)

An Othello syndrome, then, but not an Iago syndrome. Why not? Because there cannot be an Othello without an Iago. We have probably all felt the excoriating blast of our own internal Iago manipulating our personal Othello into a frenzy of paranoia. Why did that person hang up when I answered the phone? Who is he texting? Is that person spreading malicious rumours about me? Iago has the answers and they are seldom balanced. In this article the sociobiologist David Buss discusses the evolutionary function of jealousy which, he says, is designed to prevent our investments in our social and romantic relationships from being compromised by the actions of a third party. If our jealousy makes us want to act to become closer to our close friends, to make amends, perhaps, for years of neglect, then this is for the good. But if we are gripped by Iago our jealousy -- even if it doesn't lead to physical violence or murder as it does for Othello -- can drive the relationship into the dirt: relationships seldom thrive in a climate of suspicion, as Elvis Presley pointed out.



Thursday, March 05, 2009

Google, Twitter and the paradox of choice

The following is a true story. A friend of mine had booked a room in a Bed & Breakfast in order to attend some event or other. On arriving at the B&B the proprietor informed him that he had two rooms available and that my friend could choose the one he preferred. He then took my friend to view the rooms to better inform his decision. The first room was large and airy with nice decor and a good view of the garden. The second was considerably smaller and darker, the decor was somewhat careworn and the view was over the bins out the back. Assuming that the smaller room must be cheaper, he asked the proprietor what the price difference was between the two rooms. He was told that both rooms were the same price, £50 a night.

"But that's ridiculous" said my friend. "The first room is obviously much better than the second room, so what's wrong with it?"

"There's absolutely nothing wrong with the first room." Replied the owner. "Except that it has a wasp nest in the shower."

Americans to whom I have told this story take it as symptomatic of the kind of service offered in British hotels, but I want to make a different, more general, point. Simply having choice is largely unimportant; what matters is the quality of the items that you can choose between.

The British government is obsessed with choice, and we constantly hear about providing parents and patients with increasingly large amounts of choice. Possibly because being given a choice makes people feel good, that their needs are somehow being considered. Possibly also because if it all goes belly up, you can blame the individual for choosing poorly. But there is a negative side to choice which transcends political conspiracy theories. Research shows that having more choice can decrease satisfaction with the item chosen (see, e.g., Barry Schwartz's book The Paradox of Choice, published in 2004). Having a dizzying number of alternatives can also confuse a person to the extent that they fail to choose at all. This is particularly so if they have to decide among items that vary on more than one dimension. (This smartphone has 8 gigs of memory, but the battery life is poor and it doesn't sync with Outlook; this one syncs with Outlook but has much less memory; this one has excellent battery life but has a poor screen; and so on and so forth -- we've all been there.)

In this thought-provoking blog the author argues that Google's propensity to return several million hits when you type in a simple word such as "accountant" can likewise be deleterious. Surely, he argues, what we want are just a few hits but of high quality? I think so too, but how do you ensure quality? How do you remove irrelevant hits? Well, you can do it yourself. Although 'accountant' returns in the order of 67 million hits, if I wanted an accountant I would presumably not want one in Azerbaijan (cos I live in West Yorkshire); typing "accountants leeds" (not in quotes) returns a smaller but still-large number of hits, 397,000. But this is irrelevant because there on the first page is a list of Leeds-based accountants, so I am unlikely to move on from there to view the remaining 396,990 items.

In fact I have tried more than once to replicate the 'choice is demotivating' effect for information choice (does choosing an article to read from a large initial set lead to people liking the article less than if it were chosen from a small set?) and get null results every time. Whether this is down to the way I'm doing it or whether the effect simply doesn't apply to information, I'm not sure.

There's one more wrinkle in the paradox of choice. Only some people find choice demotivating. Schwartz divides the world into two kinds of people: maximizers and satisficers. Maximizers tend to want the best; satisficers (a term originally coined by the great H. A. Simon) are content to choose something that is 'good enough' to satisfy their goals. I hope you can immediately see how 'rational' satisficing is. Given a large enough set of items, or a complex enough set of attributes, a maximizer would quickly grind themselves into the ground weighing up the alternatives. If you take cognitive costs into account, maximizing is rather a foolish strategy. (I am reminded at this point of Elliot, a patient studied by Antonio Damasio who, following the removal of a tumour from near his frontal lobes, would spend an afternoon at work deciding whether to classify his data by date or place, thinking through all the possible implications of both to decide on the optimal choice. This is clearly dysfunctional behaviour -- he soon lost his job -- so no one really maximizes all the time.)
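If you want the 'rationality of satisficing' point in miniature, here is a toy sketch with invented numbers (the option values, threshold and inspection costs are all mine, not Schwartz's or Simon's): the maximizer inspects every option and finds the very best one, the satisficer stops at the first 'good enough' one, and once you charge for each inspection the satisficer usually comes out ahead:

    # Toy comparison of maximizing vs satisficing with a per-option inspection cost.
    import random

    def maximizer(options, cost=0.005):
        # Inspects every option and takes the best, paying for every look.
        return max(options) - cost * len(options)

    def satisficer(options, threshold=0.8, cost=0.005):
        # Inspects options in order and stops at the first 'good enough' one.
        for i, value in enumerate(options, start=1):
            if value >= threshold:
                return value - cost * i
        return options[-1] - cost * len(options)  # nothing passed: settle for the last one

    random.seed(1)
    trials = [[random.random() for _ in range(100)] for _ in range(2000)]
    print(sum(maximizer(t) for t in trials) / len(trials))   # best item, but a big search bill
    print(sum(satisficer(t) for t in trials) / len(trials))  # 'good enough' item, tiny bill, higher net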

The aforementioned blog also argues that Twitter could be a threat to Google. If people increasingly rely on Twitter for recommendations they might be less likely to go and search for themselves on Google. Recommendations are a win-win situation in some regards. The recommendee saves time and effort by not conducting the search themselves, which would seem to be a kind of free-riding strategy were it not for the prestige and social status that can be achieved by a prolific recommender. And here comes another paradox. People tend to dramatically overrate the importance of single cases, especially if they are recounted by a trusted person. I had this recently when thinking about getting a new car. I wanted something reliable and looked at the various surveys to help find something appropriate. I decided on an X (I'm not going to tell you what it was because it was a Volvo and apparently they're embarrassing) and told a friend about it. "Oh my dad had one of those and it was never out of the garage." So I crossed that off the list. But why? Why should one person's experience outweigh those of many thousands? I don't know, so if you have any ideas please let me know. (I will resist the temptation to give some kind of cod evolutionary explanation about us having evolved in groups without multivariate statistics; this might be the case but I would prefer not to discount more interesting explanations.)
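Just to show how little that one anecdote should move a rational believer, here is a back-of-the-envelope Bayesian sketch -- the survey numbers are entirely hypothetical, not real reliability data:

    # Beta-Binomial toy: a fault rate estimated from a (made-up) survey of 2000 owners,
    # then updated with one extra data point: my friend's dad's troublesome car.
    def posterior_mean(faulty, fine, prior_a=1, prior_b=1):
        # Mean of Beta(prior_a + faulty, prior_b + fine).
        return (prior_a + faulty) / (prior_a + faulty + prior_b + fine)

    survey_faulty, survey_fine = 100, 1900            # hypothetical survey: 5% of owners report faults
    print(posterior_mean(survey_faulty, survey_fine))        # ~0.050
    print(posterior_mean(survey_faulty + 1, survey_fine))    # ~0.051 -- dad's car barely moves it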

So people might ultimately prefer Twitter and it might take traffic away from Google (and particularly those horrible price comparison websites). But do people get better products and services? (I wrote an earlier blog on the dangers of 'group think' that can arise in highly homophilous networks; see also this paper.) How do you decide between multiple conflicting opinions? And does any of this matter? Maybe we should pay the price of lower quality products for a less stressful and more collegiate existence of mutual recommendation.

By the way, my friend chose the smaller room.

Tuesday, March 03, 2009

Good question...

Quite frequently when I give a talk about my research I get one of those questions. I think we probably all have them. Not the question we dread because it is particularly challenging, nor because it strikes at the heart of the research (although those can be the subject of night-before-cold-sweat-style dreams). No, the kind of question I really hate is the one that is so resoundingly stupid, so you-haven't-thought-about-this-for-more-than-a-second-have-you?, that you wonder how to pitch your face. The one that I've been getting quite a lot recently when I talk about social network sites goes something like:

"But how is all this any different from having pen pals?"

Now I do kinda know what he (and it is always a he) means. Kinda.

He has probably been reading the usual deranged Daily-Mail-style spoutings that social networking is shrinking our brains and that in the Future all teenagers will be born with massive, prehensile thumbs and a 3mm jack socket instead of a belly button, and sees me in the same mould. A shame, because IF HE HAD ACTUALLY LISTENED, instead of occupying his mind imagining what it must be like to suck a lemon and thus pulling the appropriate facial expression, he would have seen that I wasn't saying that at all. Quite the opposite.

However.

Is the use of social network sites like having penpals? Yes, of course, and an iPod is, like, really the same as a ukulele (both are portable and play music). You can imagine that the first time the telephone was revealed to an admiring public, demonstrating that people could now communicate at a distance, Mr Penpal's great-grandfather would immediately put up his weary hand, clear his throat and in a disparaging voice ask "but how is this any different from shouting?"

People can be divided into two categories: those who think that each new technology represents some kind of quantum leap in the way we do things (for good or for bad), and those who think that everything is just the same as everything else, really. Susan Greenfield's and Aric Sigman's recent pronouncements represent the negative side of the everything-is-going-to-change camp. Who represents the same-old-same-old view? Well, there's Mr Penpal, of course, but the media won't speak to him because, as well as being as dull as a horned toad, he just isn't newsworthy.

Or is he?

About a year and a half ago I excited a degree of media interest which must have had the headline writers straining. Basically my message was "social networking not really all that different from what you do face to face". As a media-friendly message it was not all that different from saying "dog bites man", or "bear shits in wood". But it must have been a slack news week and the headline writers must have had extra cocaine rations or something, because they pulled it off. I even appeared on News 24. Nerve-racking but exciting: me on telly! Better phone my parents! But it was an awful experience. I teetered on a barstool in the corner of an empty room in Leeds, seeing my perspiring face on a large monitor while the folk in the studio in London joked and flirted and doubtless slapped each other with towels. Then I was on. I won't say any more as it is far too distressing to recount, but my earpiece kept slipping out and I think I felt that, to save time, I would economise by saying more than one word at once.

I thought it best not to tell my parents.

I did a few other things, but then it all faded away. I got the occasional call from Mumbai or Dhaka, presumably places where visible perspiration and continually playing with one's ear are seen as deserving of some kind of respect.

When Aric Sigman did his thing a couple of weeks ago I half expected to get a call from the media.

Producer: "Hey have you heard this? This chap Aric Sigman is saying that social network sites give you cancer. Who can we get on to challenge him?"

Researcher: "There's always Mr Sweaty."

Producer: "Mmmmmmmm"

They never called.

But they wouldn't call Mr Penpal either.

Sunday, March 01, 2009

Fodor's guide to oblivion

As an academic psychologist I generally have a positive regard for philosophers, particularly those who can cut through the muddy thinking of some of my colleagues. That said, I do wonder about some of them. Take Jerry Fodor, for example. Jerry has been around for quite some time and I, like most psychologists, have cited his work -- sometimes, I'm ashamed to say, without reading the originals. I did read Modularity of Mind, however, a slim volume published in 1983 which was voted in 2000 the seventh most influential cognitive science book of the 20th century. (Actually, now I look back at those awards, the 'esteemed judges' turn out to be a bunch of people I've never heard of, all at the University of Minnesota; isn't that always the case with these things?)

Although slim, Modularity of Mind proved, to my 25-year-old self, to be a challenging read. I struggled through its pages never quite feeling that I'd got to grips with Fodor's argument. My problem, I now believe, was that I was looking for the utility in Fodor's ideas, and it was utility that I couldn't find, so I chided myself for my lack of intelligence. What do I mean? Well, put it this way. Like many people I'm occasionally seduced by kitchen gadgets. You know the kind of thing: something that makes chopping garlic easier (microplane it, damn it); things for doing perfect julienned carrots; 'easy' graters for parmesan cheese. All of these things turn out to be initially attractive but ultimately useless and languish in my drawer with all of the other crap I've bought over the years. Modularity of Mind is like that. It sounded impressive (informational encapsulation, etc.) but I never really got what it did.

Some people who did apparently get what it did were Leda Cosmides and John Tooby of UCSB, who married sociobiology to Fodorian modularity, which gave birth to Evolutionary Psychology. Thus we had "mental modules, innately specified, shaped by the evolutionary pressures that our ancestors encountered in the Environment of Evolutionary Adaptation that corresponds to the Upper Pleistocene period" and other specious seductera. Fodor, who has even gone so far as to suggest that all concepts are innate -- or at least I think he did, because I haven't read that one either -- hates what they did to his precious modularity. He HATES Evolutionary Psychology, and now it seems he has a problem with evolution in general. How do I know this? I know this because he has written a paper about it and is currently writing a book called What Darwin Got Wrong. The paper is amazing in its wrongheadedness (and I suppose the book will be too). It's not that Fodor fails to understand evolution by natural selection. He is a highly intelligent man who has thoroughly researched the area. Or at least it is not that he fails to understand it in the way that you or I would fail to understand something. No, he seems to have invented an entirely new way of failing to understand something. He has applied his massive intellect to the theory, become so intimate with it and understood it so well that he's kind of gone through the theory and out the other side (I imagine there must have been a small popping sound when this happened, but that could just be me). So there he is, literally beyond understanding, with a unique and, one has to say, bizarre perspective on Darwin. I imagine it must have been a bit like the experience you have when you repeat a familiar word over and over again and suddenly it becomes stripped of its meaning, as if it were an entirely alien word (I recall discovering this as a child with the word 'constable' and the words 'saddle bag' -- strange but true). This phenomenon is technically known as jamais vu, from the French meaning 'never seen'. This is not quite Fodor, because obviously he feels that he understands what he's saying. He seems to be producing a kind of vicarious jamais vu, where we suffer the consequences of his over-familiarity.

I have a lot of respect for Jerry Fodor, I just wonder whether someone put him here to mess with my mind, modular or otherwise.


Saturday, February 28, 2009

Same old, same old

Ethan Zuckerman argues here that homophily -- the tendency for people to like people who are like them -- can lead to ignorance. This irritates me. Not because I disagree with him -- in fact I think he has a point -- but because it was something I wrote about in 2001 on a now-defunct website. The idea is that people will associate with people who share similar views and interests (among other things) and will also tend to read articles written by people who share their opinions. I suppose this is obvious. What is troubling about this tendency is that it is hard to explain from a psychological point of view. Of course there are explanations based on concepts such as 'identity' and 'self-esteem', but personally I find these unsatisfactory. To say, for example, that people associate with like-minded people in order to bolster their identity or to boost their self-esteem only raises another question: why are our minds designed with such fragile self-esteem or identity that it needs to be massaged by the presence of similar opinions?

This is one of the reasons why I like evolutionary explanations, and I think the puzzle of homophily can be better answered by asking the question 'why would the mind be designed to want to associate with people who are like you?' How might this help us to leave behind more copies of the genes that lead people to be homophilous (either directly, by having kids who survive to reproductive age, or by helping genetic relatives to do so)?

The answer, I think, is quite simple and has two parts, which are opposite sides of the same coin. The first is that associating with people like you reduces the possibility of conflict. People who have similar values and so on are likely to want the world to be the same as you do, so you are less likely to end up in conflicts about how the world should be. For example, people of the same political persuasion usually want to inhabit a similar world and are likely to work together to achieve it. People with different political beliefs want different worlds, and this very fact can lead to conflict. The second, related, reason is that the more similar people are, the more motivation each has to work in the interests of the other.

Of course, conflict can arise in even the most homophilous groups. This is because ultimately we have been designed to look after our own interests. (Before you draw breath to shout at your monitor: of course I know that there are many examples of people acting altruistically and even laying down their lives for others; that is a very interesting subject that I will have to leave for another blog -- if I ever work out an answer.) In The Selfish Gene Richard Dawkins makes an interesting observation regarding parasitism. He asks why selfish genes (usually) cooperate with one another when they find themselves in the same organism. Dawkins's answer is that they cooperate because they all share the 'interest' of reproduction: all the genes of most organisms leave the host through the same exit point (the germ cells -- sperm or eggs). Thus it is in each gene's 'interest' that it cooperates with the others in maximizing its chances of achieving its goal of propagation; they are, in effect, all 'playing for the same team'. If genes could propagate themselves by leaving the body by other, individual, routes then we would expect more conflict to occur. One reason why parasites are frequently harmful to the organism is that they often do not share the same exit point as the organism's own genes.

Our own mitochondria (the "powerhouses of the cell", as biology teachers since time immemorial have called them) very probably started off as parasites -- they have their own private DNA, separate from "our" DNA, which resides in the cellular nucleus. The theory goes that they gradually changed such that both mitochondrial and cellular DNA now leave the body via the egg. At this point both share the same interest and 'cooperate' with one another.

So homophily has an upside: it potentially enables cooperation among genetically unrelated individuals. Evidence for this? So far it is weak (although someone might point me to relevant research). Research by Jens Binder and Andrew Howes at the University of Manchester suggests that the more diverse the friends on a person's social network site, the more conflict the site's owner reports. Perhaps more compellingly, in Marek Kohn's recent book on trust he cites evidence that the most trusting societies tend to be those that are the most homogeneous. I hope it goes without saying that I report this as a research finding, rather than as a recommendation that we should attempt to socially engineer our societies by some system of ethnic cleansing in order to increase societal trust: sometimes the cure is worse than the disease.

Homophily also has a downside, which is the topic of Zuckerman's talk: it locks individuals into a system whereby they hear the same opinions over and over again, to the point where whatever the prevailing opinion or value system happens to be -- however extreme or undesirable -- it can become normalised. In the eight-year-old blog I mentioned above, I wrote of how the internet allows people with minority values to get in touch with one another. This is great if you happen to have a child who suffers from, say, Williams syndrome (a rare disorder affecting 1 in 10,000 live births) or depression. Such individuals can develop support networks and exchange tips and experiences with people whom they would be unlikely to bump into in the street. But it also has a darker side, allowing people with views that we might consider undesirable to meet -- paedophiles, for example (whether or not they happen to be radioactive, as this hilarious 'news' item from the Daily Mail reports). Repeated exposure to such views can lead to them being normalised.

The Internet may also, as Zuckerman argues in his talk and as my younger self argued in 2001, lead to individuals having a more restricted information diet. When reading a newspaper, one's eye can be caught by articles one would not have considered choosing to read. I concluded my article by suggesting that the Internet, by reducing such serendipitous encounters, can potentially narrow our experiences. But the internet has had enough negative press recently (see my previous blog posts on Susan Greenfield and Aric Sigman) and I do not wish to try to create a further moral panic (fat chance of that). I'm not even sure I believe it any more.
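If you want an intuition for how merely choosing whom you listen to can produce this kind of lock-in, here is a toy sketch of my own (it is not from Zuckerman's talk, nor from any actual research; the numbers and the update rule are entirely made up for illustration). Agents repeatedly nudge their opinion towards the average of the neighbours they hear from. When each agent only hears from the agents already most similar to it, the population tends to stay split into self-confirming clusters; when everyone hears a random sample, opinions drift towards the middle.

import random

def simulate(homophilous, n_agents=100, steps=50, k=10):
    # Each agent starts with a random opinion between -1 and +1.
    opinions = [random.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        updated = []
        for x in opinions:
            if homophilous:
                # Listen only to the k agents whose opinions are closest to mine.
                neighbours = sorted(opinions, key=lambda y: abs(y - x))[:k]
            else:
                # Listen to a random sample of k agents.
                neighbours = random.sample(opinions, k)
            # Move halfway towards the average of what I heard.
            updated.append(0.5 * x + 0.5 * sum(neighbours) / k)
        opinions = updated
    return opinions

if __name__ == "__main__":
    random.seed(1)
    for label, homophilous in [("homophilous", True), ("random mixing", False)]:
        final = simulate(homophilous)
        print(label, "spread of final opinions:", round(max(final) - min(final), 3))

In this toy setup the homophilous population should end up with a much wider spread of entrenched opinions than the randomly mixing one -- which is, in miniature, the echo-chamber worry.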

EDIT (6/3/09): this paper (.PDF) suggests that if people are able to view the results of others in a problem-solving exercise (analogous, I suppose, to using sites such as Delicious and Digg) then they tend to accept existing solutions rather than generate their own. Thus, the authors conclude, social bookmarking (etc.) sites can impede creativity.

Thursday, February 26, 2009

There is a Greenfield far away (but not far enough for me)

I fully agree with Andrew Howes's blog about Susan Greenfield's claims that social network sites 'infantilise the mind'. As head of the Royal Institution she has a remit to promote the public understanding of science, and this has, I think, two strands. First there is the promotion of the findings of science: keeping the public up to date on recent research in physics, biology, medicine, etc. in an easily understandable way. Second, there is the promotion of the actual process by which science operates. This would include hypothesis testing, the nature of control conditions, the process of peer review, and so on and so forth. Dull stuff, but important I think. It is with respect to this second goal that I think Baroness Greenfield has committed the most egregious piece of malpractice (which is what I consider it to be).

Her pronouncements about social networking were filled with so many 'mays' and 'mights' and 'possiblies' that you could equally well -- from what she said -- draw the opposite conclusion to the one she drew. But in my limited media experience I know that the media are deaf to such words. When a scientist says "X may cause Y", the newspapers hear X unequivocally and definitely causes Y because I AM A SCIENTIST AND I HOLD THE TRUTH IN MY HANDS.

I also love her description of social network sites; she says:

"My fear is that these technologies are infantilising the brain into the state of small children who are attracted by buzzing noises and bright lights, who have a small attention span and who live for the moment"

I suppose 'bright lights' could possibly describe some Myspace pages, but "buzzing noises"? Has ANYONE EVER heard a buzzing noise on a social network site?

Anyhow, Baroness Greenfield should, as a result of the above, be sacked from her role as head of the Royal Institution for singularly failing to abide by scientific strictures. On the basis of no evidence whatsoever she has attempted to promote herself by that most base of tactics: creating a moral panic. Two recent such scientifically derived moral panics were created by Andrew Wakefield, who claimed a link between the MMR vaccine and autism, and Arpad Pusztai, who claimed that genetically modified potatoes could damage the digestive system of rats. Both were sanctioned, but *at least* in both cases the claims were based on data (very weak in the case of Wakefield; in the case of Pusztai it seems that his sacking from the Rowett Research Institute in Aberdeen was due to his employers being leant on by the biotech company Monsanto).

A friend of mine who knew the great evolutionary biologist John Maynard Smith recounted a story about JMS's experience with the media (this was in the 70s and pertained, IIRC, to Horizon). He told my friend that once, when asked a question about sexual selection, he did what scientists usually do: he listed the twelve most relevant theories and then discussed each one with respect to the available data. When the relevant part of the programme was broadcast some time later, JMS was seen to launch into his tortuous theoretical analysis, only for his voice to fade out and the narrator's voice to speak over it, saying (as JMS recounted the story) "The professor then went on to say that men like women with big tits, and women like men with lots of money."

Scientists must communicate with the media; we have a responsibility to do so. After all, it is public money that funds us, and who else is better placed to do the job? We must make things understandable, yet we must recognise that they want a good story (particularly the tabloids). We must therefore be aware of the interpretations that they are likely to place on our words and guard against them. We must not be misled into saying something to them just because we believe it will make them happy. Most importantly, we must place our long-term reputations above short-term gain, and the reputation of all science above all.

Greenfield is a very media-savvy cookie (charging between 5 and 10K for a public speaking engagement, according to her agent's website) and as such she will have known the impact her words would have. As it turns out, most comment on blogs and in the broadsheets seems to take the line that she is a bit of a jerk. A great way of publicising science, then, Susan.

Ta for that.

Tuesday, February 24, 2009

Friday, February 20, 2009

Transgressing the boundaries

I must extend my congratulations to Aric Sigman for exposing the weakness of the peer reviewing process in the manner of Alan Sokal. To recap: Alan Sokal, a physicist, was irked by postmodernists' hijacking of quantum theory and misapplying it, in a half-arsed way, to cultural studies. So he wrote a paper called "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity" and submitted it to a cultural studies journal called Social Text. The paper was, as the title suggests, bullshit, but it nonetheless got published (and yes it was peer-reviewed). Thus Sokal neatly showed that the postmodernist Emperor was indeed not only naked but contemptuously waving the fact in our faces.

Now Aric Sigman has done the same thing. This time the target is The Biologist, a peer-reviewed journal that aims to communicate research findings to the professional biologist and the interested lay person. He submitted the article "Well connected?: the biological implications of social networking" to the journal. The paper tenuously connected the decline in social capital to an increase in various forms of physical and psychological illness (including heart disease, cancer and dementia). This is well understood and not new. Sigman's genius was to state that the decline in face-to-face communication caused by people's increasing use of social media (such as SNS) was therefore tantamount to a one-way ticket to an early grave. No evidence was cited for this conjecture because, of course, there isn't any, but it sounds like it might be true. But would this get by the eagle-eyed reviewers? Mirabile dictu, it did: the reviewers swallowed it whole.

Thanks Aric, you've done science a great service in showing just how flawed the process of publication is. And I know that any resemblance to Kevin Warwick is entirely coincidental.