scholarly comms

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window


- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What's more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they're the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

  • It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

  • It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

  • It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

  • It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though 'Truth' is a simple, singular entity that's lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to 'find out the truth', nor even to find out 'a' truth. It's to test the validity of a hypothesis under certain conditions. Research will never let us say "This is what we know", but only "This is what we believe - for now".

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says 'peer reviewed' may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.

This is brilliant: Broken library communications and how to fix them

 

Very occasionally I feature someone else's slides on this blog, and this is one of those times - because this presentation brilliantly elucidates almost everything I feel about modern library communication.

(Andy and Ange are on Twitter if you want to follow them for more.)

I worry that there's a sort of echo-chamber thing here, where people who already think like this nod and go 'yes absolutely' and people who don't agree shake their heads and say 'but what about [insert reason to carry on doing things ineffectively here]?' and none of us really change our thinking - but I hope that's not the case.

I think some people might think that the issues described in the presentation above are window-dressing or otherwise somehow superficial, but they absolutely are not - communication is at the heart of what we do in the information profession.

All of these things add up to make a huge contribution to the user experience - and ultimately the user experience defines whether your library is successful or not.

So in keeping with the advice in the slides, let's end with a call to action - is there one change you can make to the way you or your library does things, based on the above? Whether it's simply amending your signs so that if they say 'You can't do X here' they ALSO say 'but you can do it in location Y - here's how to get there' or a full-scale review of the communications in your institution, try and make a change! 

Email Communication Part 2: Measuring Impact, Subject Line Length, Email Frequency & More

 

Earlier in the week I wrote about the basics of good email communication, the three Ts. In this follow-up I want to cover a few more complex things which didn't fit in the previous post, particularly around the theme of an email newsletter.

 

Why do people unsubscribe from mailing lists and newsletters?

Reasons people no longer want to receive a regular email from an organisation include, in ascending order of importance according to Litmus:

  • Found an alternative way to get the same info
  • Preferred to seek out info on their own
  • Found the content irrelevant
  • Received too many emails generally
  • Found the content to be repetitive or boring over time

And in First Place? 54% of people said the reason they unsubscribed was this: Emails came too frequently. Don't over-saturate your audience - pick your moments, based on THEIR needs, the lifecycle of their work, and on how much vital information you have to impart. Which leads us to...

So how frequently should I email?

In Part 1 of this post I talked about the importance of being Timely - emailing at the right time of the day so that your email is received both at a time when people actually read emails and when yours isn't buried under a pile of other emails coming in at the same time. 3pm appears to be the sweet spot.

There's another aspect to being timely though. Firstly, Monday appears to be the best day on which to send an email newsletter - for all the reasons you'd imagine (people are still full of vim and vigour early on in the week, and not yet weighed down by all the things they wished they'd got done but ran out of time to do).

Secondly, I believe it's really important NOT to email to a fixed schedule, for example the first Monday of every month. A vital aspect of good communication is not wasting opportunities by communicating when you don't need to - it reduces the value of your communication overall and edges you closer to becoming part of the white noise. So send your newsletter or other 'important updates' email only when there's a weight of useful things to say, rather than just when it's the time you usually send it. Communicate because your audience NEEDS to hear what you have to say, rather than because you need to send a monthly update.

How long should my subject line be?

Click to view this as part of a larger (and very useful) article on EmailAudience


Click to view this as part of a larger (and also quite useful) article on MailChimp


I mentioned in Part 1 that I think the Subject Line is hugely important, and that calling your email 'Newsletter' is a recipe for being completely ignored. You need a decent title, one which gives people a specific reason to open the email; a benefit to them.

As to how long this subject line should be, there is conflicting evidence about this. EmailAudience found that very short worked well, medium length worked REALLY badly, and very long (tweet length subject lines, basically) worked fantastically.

However MailChimp analysed 12 billion emails and concluded 'Subject line length means absolutely nothing'! If you compare the graphs showing the two companies' findings, they do actually follow broadly the same pattern - short and long are good, medium length doesn't do much for anyone. It's just that MailChimp's data shows the difference to be much smaller.

They conclude with this very sensible statement which makes a good point about Subject Line length and the kinds of devices and platforms your audience are reading on.

Your audience is chock-full of individuals with different reading habits, interests, and demographics. Maybe my audience is full of Apple fanboys and every one of them reads my newsletter on their iPhone. Well, then the subject line they see might need to be shorter for their small screen. Or maybe my newsletter is geared toward businessfolk who mostly run Outlook. In that case, maybe a longer subject is more acceptable.
— MailChimp

Like so many things, it comes down to understanding your audience. Which takes us neatly on to...

How can I measure and track the impact of my email?

There are elaborate things you can do with Gmail to track open-rates of emails, but I think that's much more important in traditional business than it is in non-profit comms. More important is to measure and track the impact - essentially, do people ACT on the email, and can I influence how often this happens by changing when I send the email, the subject line, the tone, the length and so on?

If your email is just a general update, measuring impact is very hard to do. If, however, it includes a Call to Action, and that action has a website involved - e.g. 'Try our new online resource [link to resource]' or 'Come to our workshop [link to booking form]' - it becomes possible to see how effective the email is by measuring the Engagement Rate. This is the number of clicks on the link divided by the number of people who received the email - so to take a really basic example, if you email 100 people and 30 click the link, the Engagement Rate is 30%.
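That calculation is simple enough to sketch in code. This is just a minimal illustration of the arithmetic described above - the function name and the numbers (which reuse the 100-recipients, 30-clicks example from the text) are my own:

```python
def engagement_rate(clicks: int, recipients: int) -> float:
    """Engagement rate: link clicks divided by emails delivered, as a percentage."""
    if recipients <= 0:
        raise ValueError("recipients must be a positive number")
    return 100 * clicks / recipients

# The basic example from the text: email 100 people, 30 click the link
print(engagement_rate(30, 100))  # 30.0
```

In practice the click count would come from your bit.ly statistics, as described below.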

How do you find out how many people click on the link? You use a unique URL created especially for the email via bit.ly. Bit.ly is a URL shortening service (useful in itself) which provides two fantastic functions for Comms purposes - it allows you to customise the URL, and it counts the number of times that specific customised URL is used.

(So for example, at the end of the 10 Tiny Tips for Trainers slidedeck from a couple of weeks back, I put in a customised bit.ly URL for people who found the presentation on Slideshare, and wanted to read the blogpost which accompanied it on here - a lot more people see my slides than see this blog. The reason I used bit.ly was just to make a short, memorable URL - I chose bit.ly/10TinyTips which is a lot easier to fit on a slide in large letters than http://www.ned-potter.com/blog/2014/8/8/10-tiny-tips-for-trainers! - but it also gives me the stats. So I know that 225 people clicked on the link in the slides, because that link didn't appear anywhere else.)

So going back to the email newsletter - if you're launching a resource or promoting an event, and you want to know how effective your email has been, use a customised bit.ly URL to see how many people clicked your link. If half the people you email click on the link you've included, that's a pretty fantastic rate of return as these things go. Build on whatever formula you used and do it again! If only 5% click, then do things differently next time - perhaps change the subject line, or make the call to action more prominent, or email at a different time of the day or week.

Incidentally, another possibility bit.ly allows is to compare effectiveness of promotional activities. So you have one bit.ly URL for your new resource that you use in emails, one that you use on Facebook, and another you use on Twitter. Then you compare the stats for each, with the overall stats for the resource, and see which communication method is the best way to get people to use your stuff...
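As a sketch of that comparison: give each channel its own bit.ly URL, collect the click counts, and rank the channels. Everything here is hypothetical - the channel names, click counts and audience sizes are invented for illustration, not real data:

```python
# Hypothetical click counts from three channel-specific bit.ly URLs
clicks_by_channel = {"email": 120, "facebook": 45, "twitter": 80}

# Rough audience reached per channel (also hypothetical)
reach_by_channel = {"email": 400, "facebook": 900, "twitter": 650}

# Engagement rate per channel: clicks divided by reach, as a percentage
rates = {
    channel: 100 * clicks / reach_by_channel[channel]
    for channel, clicks in clicks_by_channel.items()
}

# Rank channels from most to least effective
for channel in sorted(rates, key=rates.get, reverse=True):
    print(f"{channel}: {clicks_by_channel[channel]} clicks, "
          f"{rates[channel]:.1f}% engagement")
```

With these made-up numbers, email wins comfortably despite Facebook reaching the most people - which is exactly the kind of insight raw click totals alone would hide.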

Should I care about email preview panes?

If your audience are young (University students, for example) then yes, this matters a little bit. According to ConvinceandConvert, 84% of 18-34 year olds use an email preview pane. I am 34 and I do this! I have a preview pane in both Outlook and Gmail for work, where I can - but not in Yahoo for my personal email, where I've never worked out how. I much prefer a preview.

This means these users will see the first few lines of the email (depending on how big they've set their pane) BEFORE they click to open. Worst-case scenario is they dislike what they see so much they never get as far as opening the email. The best-case scenario is they are enticed or inspired by the preview, and when they open the email they are fully engaged with its contents. So, as with so many media in the web 2.0 age, the important thing here is to hit the ground running. No long intros, no scene setting - just useful, headline information right at the top. It's using the journalism model, rather than the academic paper model - put your conclusions at the start.

Are there any useful email tools I should know about?

I think MailChimp looks great for non-profits. You'd probably need to use the paid-for version, but if email is an important part of your marketing strategy, and you can afford it, then MailChimp is worth it because of the level of control, and of impact measurement, it gives you.

I'm a big fan of Scribd (as a creator rather than a consumer - don't be put off by its homepage, which is aimed at the latter!) - it takes PDFs and turns them into web documents. People don't click on and open PDFs nearly as much as we'd hope they do, so if your email newsletter is a PDF, put in a link to a Scribd version (or embed it in the email if that's possible with the tools you use). Not only does this make the newsletter more easily accessible, it allows it to be discovered on the web, and it gives you built-in usage statistics for how much it's being read too.

#BLAle14 Tuning out the white noise in library communication

A lot of the communication between Libraries and academic departments is just white noise, unless we tailor and personalise it. This takes a large amount of time, but the returns you get are absolutely huge - and this is the basis of my #BLAle14 keynote, a version of which is here:

Tuning out the white noise: marketing your library services from Ned Potter

For context, here's the Twitter back-channel during the presentation - divided into sections so you can read along with the slides if you're especially keen. There's more on the conference itself below the Storify.

=======

The BLA

I became a Business Librarian this year, when I took over looking after the York Management School alongside my other departments in January. I also took over our membership of the Business Librarians Association and have been looking forward to the BLA Annual Conference, which everyone told me was excellent. And it was! I had a great time, it was great to catch up with old friends and make new ones, and I very much appreciate Nathan and the organisers inviting me to speak. As I said in my talk, I've found the BLA to be an extremely useful and helpful organisation to be a part of, so if anyone reading this looks after a Business School but isn't a member, I'd recommend signing up.

I was only able to attend two days of the conference but for me the highlights included:

  • The National Space Centre where we were lucky enough to experience a Key Stage 2 film all about The Stars and that in the Planet-arium
  • Very nice accommodation as part of the conference venue which made everything extremely easy - it's much more relaxing never having to worry about travelling from a hotel etc, so other conference organisers take note
  • A very interesting presentation about The Hive in Worcester - the UK's first joint public and academic library, from Stephanie Allen. I have to admit it never even occurred to me that a public-academic library was possible, but although it sounds complicated Stephanie made a pretty convincing case for it being a great idea. It sounds like a great place - generally I have no interest in Libraries as places but I'd quite like to visit The Hive...
  • Joanne Farmer showing us Northampton's very nicely done video on employability (which she scripted)
  • Andy Priestner's very engaging talk about how UX in Libraries is very much a thing now - here's Andy's presentation on Slideshare, take a look.

I was sad to miss Aidan Smith's presentation on Occupye, used at Birkbeck to show where there is free seating in the Library - this won the best short paper prize.

I thought the organisers did a great job, and it was the first conference I'd been to since LIASA so it felt great to be at that kind of event again. Thanks for having me!

Twitter tips for improvers

Here's a new set of slides I've just uploaded to my Library's slideshare account:

Tips for Twitter IMPROVERS

from

University of York Library

I think the key to good feedback in a workshop is probably 10% about the content, 10% about the delivery, and 80% about whether it is pitched at the level the participants expect and require. That's probably an exaggeration but you get my point. I've blogged on here before about how I run sessions around Web 2.0 and academia for the Researcher Development Team at York, and in the last couple I've really felt for a small number of participants who were at a stage beyond the level I was pitching at. The workshops are introductions so participants literally set up, for example, a Twitter account from scratch - so anyone who is already past that point but wants to know about content and tone is doing far too much thumb-twiddling for my liking, until later in the session.

With all that in mind, as of next academic year we're reworking the workshops, and in each case I'll run one 'A beginner's guide to' type session and one 'Improvers' type session, so people can get exactly what they need out of the workshops. We didn't have time to arrange that for this term's workshops, so I produced the slides above to send on to participants of my introductory workshop, for those who wanted to go further. In January when the next set of workshops run (I don't do any in the Autumn term, because AUTUMN TERM), I'll flesh this out into a proper interactive 1.5 hour session.

Have I left anything important out? One of the things I love about Slideshare is that you can update and reupload slides over the same URL, so you don't lose that continuity (and your statistics). So if there's anything you'd add to this, let me know in a comment, and I can eventually make a new and improved version to put online in place of this one.

My advice to Tweeters: ignore advice to Tweeters...

I think guides for tweeting well are most important for organisations - it's key that companies, businesses and public bodies get this stuff right, and they often don't. For individuals though, I'm increasingly of the mind that unless you specifically want Twitter to DO something for you which it currently isn't doing (and the slides above are aimed at researchers who specifically want to grow their network in order to find more value in it), it's not worth reading 'how to tweet' guides (of the kind I used to write myself) and trying to change how you approach it. There's plenty of good advice to be had in these, but it's not necessary to follow any of it - apart from not being unpleasant or otherwise making people feel bad about themselves. If you want to tweet about your lunch every day, why should you stop doing that just to retain followers? I think it's better to be yourself and have a group of followers who are prepared to put up with that, for better or for worse...

Number of followers isn't an end in itself. A smaller group of engaged followers who want to interact with YOU is far better than a huge group for whom you have to put on any kind of show. And while in print it's important to adopt a style appropriate for the medium, I consider Twitter to be much closer to spoken communication. As long as you're prepared to deal with the consequences, why not just be yourself?