Information Literacy

Padlet and Flipped Learning in Information Skills Training (by Emma Shaw)

Emma Shaw is the Library Manager and Liaison Librarian (Medicine) at Imperial College London. A while ago I saw her tweeting about the use of Padlet in her teaching sessions - students were using it in groups to come up with search strategies for healthcare-related databases. I like Padlet anyway, but I loved this use of it - so immediately beneficial, practical, and indeed stealable! So I asked Emma to write a guest post about the whole process, and she kindly agreed. There's an example of the use of Padlet she examines below.


I’m a Liaison Librarian at Imperial College London supporting Medicine, along with four other Liaison Librarians. Amongst a heavy timetable of information skills training for undergraduates and postgraduates, we have for several years been running a couple of Library skills workshops embedded in the MBBS Medicine course timetable for the 1st year medical students. We were using traditional presentation slides to tell them about the Library and how to access it, along with a hands-on database searching component using PubMed. The database searching was taught via a paper-based tutorial handout: the students would sit in the computer lab and work through it for 20 minutes, with the opportunity to ask questions as they went. More recently I would go away and wonder whether they would really apply what we were teaching, in the format we were using. I asked myself if it was really meaningful for them, particularly as it was all new to them and we were teaching them how to look up research on a database when they hadn’t even started using journal articles yet.

The other thing that got me thinking was that information literacy is not an obvious skill that screams out ‘you need me in your life’, so you need to convey it to students in a way that makes them realise that they do - especially when they have other priorities and a timetable jam-packed with medical training. I’ve learned and observed over the years of teaching information skills that, in order for students to understand its direct use, to see its value and engage, they need to see it in context. By in context, I mean directly relating what they are learning in front of them to a specific area of their coursework or even clinical practice, rather than just telling them this is stuff they need to know now and in the future. This led me to question whether our presentation slides and paper tutorial were engaging enough and putting things in context enough. Could there be a better way of delivering the content so they engage with us, and see its direct value?

Feedback from the students about how we could develop the session included comments like:

“Maybe have an example of an assignment similar to what we would have this year and show how we might use online resources for researching that assignment.”

 

“Interactively doing it together with the students instead of following instructions on a page.”

This made it apparent that we were right to question this, and that it was a good time to reconsider the delivery of our training. We could see why PowerPoint slides were just not cutting it anymore: the students needed interaction and context to stay engaged. On top of this, in the MBBS Medicine course they were already being presented with e-learning modules and a range of different teaching methods and technology. I could see that very shortly our presentation slides and paper-based database tutorial were not going to be enough anymore, and that our sessions were in danger of becoming irrelevant. We needed a fresh approach.

I had various sources of inspiration for revamping the workshops. We happened to have a visit from Caroline Pang, Director of the Medical Library at Lee Kong Chian School of Medicine, Singapore, one of Imperial College London’s partners. She demonstrated the library training they offered for medical students. This consisted of initial training on the Library and database searching; the students were then given a clinical question by the course tutors, and had to work together in groups to form the best search strategy to answer it. She had a whole day dedicated to this project, as well as the presence of the tutors. This looked like a really good approach: not only was it more engaging, by getting the students to actively search for a given question, but it was also relevant to the course content so they could directly see its value. If we wanted to do something similar, however, the problem we faced was that we only had 1.5 hours for each session!

It was at a meeting with tutors from the MBBS Medicine course that the idea of flipped learning was presented to me. I won’t go into too much detail about it as you can find a very good definition here by Professor Dilip Barad (Ph.D.). It’s essentially where work is given to the students to be done before the session, e.g. a video lecture or a tutorial to work through. The session itself is then dedicated to applying the knowledge they have learnt from the pre-sessional work, through activities and group work, and to answering their questions. In this way it becomes student-centred learning as opposed to trainer-centred.

To be honest the flipped learning approach initially filled me with dread! There was the worry of giving them pre-sessional work with the risk of them not doing it. It also seemed like a lot of work and preparation, along with the fear of having to get the students into groups and manage them. And having many other duties aside from training, it’s very easy to slip into the habit of repeating the same old slides each year. It’s easy, it’s safe. However, if it would improve engagement it was worth a try, and I thought this would be an excellent model for our teaching. It would allow us to save time in the session by getting the students to do the straightforward PubMed tutorial beforehand. This would then let us try out the database searching exercise in groups, which we didn’t think we would otherwise have time for. We could dedicate time in the session to getting them to do real searches on PubMed, on topics related to upcoming assessments, with the trainers feeding back to the groups as they did the searches. This would allow for more engagement, and they would directly see the use of searching a database by pulling up relevant articles that could be of use for their assessments.

The final plan for the session consisted of an online PubMed database tutorial created using Articulate software. This was essentially a series of online slides taking the students step by step through using PubMed, which we hosted on the Blackboard platform. We emailed the students a week in advance via the Medical School to ask them to do the online tutorial before the session. To encourage them to do it, we mentioned that it would be needed for a group exercise in the session. We then sent a reminder a couple of days before the session for good measure. Some good advice I got from an e-learning technologist was to give them an estimate of how long the tutorial would take, so they could plan it around their schedules. We aimed for 30 minutes, which we thought they would see as achievable.

For the session, we refined our slides on the Library induction section. We then did a brief summary of what they should have learnt from the PubMed tutorial and gave an opportunity for questions. There was some debate on what to do if the students hadn’t done the tutorial before the session. Should we include a more detailed summary just in case, or would we run the risk of disengaging the students through duplication of information? We decided to go with a very brief summary just confirming points from the tutorial; we would then play it by ear and adapt the session if necessary. We then presented them with a search question related to a future assessment, arranged the students into small groups, and asked them to come up with a search strategy for that question. To provide the students with more feedback in the session and to give it a competitive edge (another bit of advice from a tutor was that they liked competition!), we added some blended learning into the mix. We used an online tool called Padlet for the groups to add their search strategies to, which we could then use to feed back to all the students on how they got on with the task. An example from one session is below.

An example of a Padlet board, used by the students to detail search strategies

The first sessions we ran in 2016 went very well, and over 80% of the 320 students did the pre-session tutorial. As it was successful we ran it again in 2017, when over 90% of the students did the pre-session tutorial. The group exercises went well, and we could really see the students engaging with the task and coming up with good search strategies.

The feedback was mainly positive and gave the impression that our new teaching method was working. The following comments were from 2016:

“Clear explanations; the delivery was concise. The activities helped us put the skills into practice.”

“It's really nice to practice searching in class and in group and it really helps when comparing different searching methods within groups.”

Some feedback from 2017:

“Expanded on the pre-reading material and explained things more clearly and gave sufficient exercises to ensure I actually understood the methods of searching databases”

“Learnt new and useful techniques for searching up articles. The session was interactive and fun. Everything was explained well and thoroughly.”

In terms of negative comments, in 2016 we had a few about the session’s content repeating parts of the pre-session tutorial. As these were our first sessions we hadn’t yet got the balance right in terms of summarising the tutorial, so we adapted the summary for the 2017 sessions to avoid this. For the 2017 sessions, a few comments said it was a bit rushed, and that they wanted more searching examples. Some also struggled with concepts like Subject Heading searching, and found it too advanced. This could potentially be because some of the students did not do the tutorial beforehand, but I think we perhaps also need to think more about those students who learn at different speeds. This is a challenge when teaching 45+ students per session with a time constraint. However, it is something to bear in mind for the next sessions, and perhaps to offer optional follow-up training on a 1-2-1 basis for those who require it.

Overall it’s been a real success. I put this down not only to the hard work of the Liaison team but also to the fact that we had really good support from the tutors in the Medical School, who always make it clear to students that information literacy is a crucial part of the curriculum. For anyone wanting to try flipped learning, I would therefore always recommend getting the faculty on board. Despite all the preparation work, we also enjoyed delivering the session. It was a really good experience actually going around the room, engaging with the students and giving feedback, instead of mainly standing in front of PowerPoint slides and answering questions.

For anyone interested in looking at the session content, such as the online PubMed tutorial, please feel free to get in touch.

A guide to joining Twitter now it’s an unremittingly bleak document of how awful everything is


As a librarian using Twitter, my experiences follow the classic three act structure of a movie. (Not a feel-good film. One of those more grown up films where you leave the cinema feeling depressed.)

Act 1: Hope and expectation

You take your first few steps into the online world. It turns out to be AMAZING! There are so many like-minded people there, and they’re so helpful! Ideas are shared, collaborations begin. Real life progress is catalysed by Twitter conversations. Cheers!

Act 2: Growing up

Twitter makes more and more things possible. But the community is fracturing. Was this inevitable? Progress still happens, among so much infighting. Nothing is allowed to be unequivocally good anymore – anything previously thought of as positive now comes with a handily placed fellow twitter user who is cleverer than you and so can tell you how actually it’s all terrible after all. It’s better to know, right? Although naivety felt great compared to this. Things got real.

Act 3: An unending garbage fire where joy goes to die

The world is divided into two types of people – those who know how horrible humans really are, and those who steer clear of social media. Twitter is a mirror to society and what it shows is ugly as hell. Twitter shows humans for what they really are in the way that previously Science Fiction stories did. We are past the point of allegory; who needs it?  Brexit, Trump, Katie Hopkins, fear, anger, sorrow. Libraries are in trouble? The WORLD is in trouble. We’re all doomed. “Name something that shows your age, which the younger generation wouldn’t understand what you’re talking about” goes the meme. Everyone tweeting about winding back unspooled cassette tapes with a pencil. And you’re thinking: hope? Decency? The Labour Party? Check Twitter. Go to sleep feeling sick. Wake up feeling sick. Check Twitter again. Rising panic. Repeat to fade.

<end>

So how do you answer this question?

I’ve written guides on ‘if you’re new to Twitter, here’s what you do’ before – they’ve been among the most read posts on this site. But all that seems very quaint and a little moot now. Like reading a guide to the eatery options on board the Titanic after it’s hit the iceberg.

Here’s my attempt at giving this a proper go.

What advice would you give someone who’s just joining Twitter now?

1.      Lay down some ground-rules and stick to them. Twitter works when you are in control of it rather than it being in control of you. It needs to be something you DECIDE to engage with, rather than getting into a cycle of dependence, checking it listlessly until all hours even though you don’t even want to, getting ever more scared or depressed. So, don’t check it after 9pm or before 8am for starters. No one needs to start their day with that shit. Think about whether you really need it on your phone at all – and if you do, consider deleting it (the app, not your account) during holidays and over Christmas.

2.      CURATE. Find the good people. Use the search box to look for people tweeting about stuff you care about. Follow the ones talking sense. Find the community you want to be part of and join in. You need to curate Twitter, proactively following and unfollowing to make it work for you. That said…

3.      Get out of the echo chamber. If you follow 1000 people who all think the same you’ll be in an echo chamber and that’s no good to anyone. Everyone will reinforce your view of the world and then Brexit will happen and you’ll be all, WTF? But if you follow a bunch of people who really wind you up, you’ll be wound up all the time. So a middle ground must be found. To quote, well, me, in a thing I wrote for the University of York’s MOOC: “Make sure your online social circle doesn’t consist entirely of People Like You - follow and interact with people from different professions, socio-economic demographics, locations, nationalities and ethnicities. This at least builds a more rounded picture of the way the world thinks.” I’ve learned SO MUCH from people on Twitter. Not just about my profession, but about society, about behaviour. That’s why I still love it even though it’s a shit-show now, by and large. Look to be challenged as well as supported, but if someone is hateful or obnoxious, mute, block, lock your account - do what you need to feel comfortable.

4.      Trust that the right people will find you, rather than changing to please the wrong people. Better to give of yourself, be yourself, present an unvarnished version of yourself, and take your time to find a network that is happy with you as you are, than to try to adapt to be like everyone else. I know this sounds like a self-help book. But honestly, Twitter is huge. Your people WILL be there. Wait for them rather than watering yourself down. Everything is fragmented now. Find your fragment.

Twitter, yesterday

5.      Don’t slow down to look at the car crash. Of course it’s compelling. Of course you want to know what’s going on. But you don’t NEED to see it. You don’t need images that are going to haunt you and still be there when you close your eyes to go to sleep tonight. If certain world leaders are tweeting horrifying things, block them; then you won’t see them ReTweeted. Do it. Add a load of words to your mute list – use the advanced mute options. You need to take care of yourself to get the most out of Twitter. Self-care is vital.

6.      For celebs and politicians Twitter is a broadcast medium. For the rest of us it’s still a conversation. Tweet about your work. Tweet about your life if you’re comfortable doing so. But tweet about other people’s work too. RT stuff. Reply. Get involved in chats. Back and forth. Twitter is the social media platform that is most like just chatting to people in a room.

7.      Make Twitter the best place it can possibly be. While the world falls apart around you, make your part of it a place where good things happen. Be positive but realistic. Be supportive. Don’t RT nonsense or propaganda or lies. GO TO THE SOURCE. Don’t be unquestioning. Think about your role in other people’s echo chambers too. Help people out. Be approximately 30% nicer online than you are in real life to allow for the potential misinterpretations of un-nuanced written text. Don’t make people’s days worse. Make things a little bit more Act 1 (above) and a little bit less Act 3.

8.      Don’t be afraid to quit. No one ever regrets shutting down a social media account. If it’s not having a positive impact on your life, get rid.

The tl;dr version of this post

It's a little late for that unless you've scrolled right to the end, but basically find the right people and Twitter can still be great. I still love it. It's still useful. It's still enriching. And that's because of the people I follow and interact with.

5 stages to processing and acting on 100+ hours of ethnographic study

This post is reblogged from the Lib-Innovation blog, to tie up and follow on from the previous post on THIS blog about the Understanding Academics Project.

Understanding Academics, introduced in the last blog post, is far and away the biggest UX project we’ve attempted at York, and the processing and analysis of the data has been very different to our previous ethnographic studies. This is due to a number of factors: primarily the sheer size of the study (over 100 hours’ worth of interviews), the subject matter (in depth and open ended conversations with academics with far ranging implications for our library services), and actually the results themselves (we suspected they’d be interesting, but initial analysis showed they were SO insightful we needed to absolutely make the most of the opportunity).  

Whereas for example the first UX project we ran conformed almost exactly to the expected 4:1 ratio of processing to study – in other words for every 1 hour of ethnography it took four hours to analyse and process – the total time spent on Understanding Academics will comfortably be in excess of 400 hours, and in fact has probably exceeded that already. 

UX is an umbrella term which has come to mean a multi-stage process – first the ethnography to understand the users, then the design to change how the library works based on what you learned. In order to ensure we don’t drown in the ethnographic data from this project and never get as far as turning it into ‘proper’ UX with recommendations and changes, Michelle Blake and Vanya Gallimore came up with a 5 stage method of delivering the project. 

Two particular aspects of this I think are really useful, and not things we’ve done in our two previous UX projects: one is assigning themes to specific teams or individuals to create recommendations from, and the other is producing and publicising recommendations as soon as possible rather than waiting until the end of the whole project. 

As you can imagine the 5 stage method is very detailed but here’s a summary:

Coloured pens used in cognitive mapping (in this case with the interviewer's reminder about the order in which to use them)

      1)  Conduct and write up the ethnography. Academic Liaison Librarians (ALLs) spoke to around 4 academics from each of ‘their’ Departments, usually asking the subject to draw a cognitive map relating to their working practice, and then conducting a semi-structured interview based on the results.

The ALLs then wrote up their notes from the interviews, if necessary referring to the audio (all interviews were recorded) to transcribe sections where the notes written during the process didn’t adequately capture what was said. The interviews happened over a 2 month period, with a further month to complete the writing up. 

      2)   Initial coding and analysis. A member of the Teaching and Learning Team (also based in the library) who has a PhD and experience of large research projects then conducted initial analysis of the entire body of 100+ interviews, using NVivo software. The idea here was to look for trends and themes within the interviews. The theming was done based on the data, rather than pre-existing categories – a template was refined based on an initial body of analysis. In the end, 23 over-arching themes emerged – for example Teaching, Digital Tools and Social Media Use, Collaborations, Research, Working Spaces. This process took around 2 months.

      3)   Assigning of themes for further analysis and recommendations. Vanya then took all of the themes and assigned them (and their related data) to members of the Relationship Management Team – this consists of the Academic Liaison and Teaching and Learning teams already mentioned, and the Research Support team. This is the stage we are at now with the project – each of us in the team has been assigned one or more themes and will be doing further analysis at various times over the next 8 to 10 months based on our other commitments. A Gantt chart has been produced of who is analysing what, and when. The preparation and assigning of themes took around 2 weeks.

      4)   Outcomes and recommendations. There are three primary aims here. To come up with a set of practical recommendations for each of the themes of the project, which are then taken forward and implemented across the library. To come up with an evidence-based synthesis of what it means to be an academic at the University of York: a summary of how academics go about research and teaching, and what their key motivations, frustrations and aspirations are. (From this we’ll also aim to create personas to help articulate life for academics at York.) And finally to provide Information Services staff with access to data and comments on several areas in order to help inform their work – for example members of the Research Support team will have access to a wealth of views on how academics think about Open Access or the repository.

These aims will be achieved with a combination of devolved analysis assigned to different groups, and top-down analysis of everything by one individual. Due to other projects happening with the teams involved, this stage will take up to 7 months, although results will emerge sooner than that, which leads us neatly to...

      5)  Distribution and Dissemination. Although this is last on the list, we’re aiming to do it as swiftly as possible and where appropriate we’ll publicise results before the end of the project, so stages 4 and 5 will run simultaneously at times. The total duration from the first interview to the final report will be around 18 months, but we don’t want to wait that long to start making changes and to start telling people what we’ve learned. So, once an evidence-based recommendation has been fully realised, we’ll attempt to design the change and make it happen, and tell people what we’re doing - and in fact the hope is to have a lot of this work completed by Christmas (half a year or so before the Summer 2017 intended end date for the final report). 

The full methods of dissemination are yet to be decided, because it’s such a massive project and has (at a minimum) three interested audiences: York’s academic community, the rest of Information Services here, and the UX Community in Libraries more widely. We know there will be a final report of some sort, but are trying to ensure people aren’t left wading through a giant tome in order to learn about what we’ve changed. We do know that we want to use face to face briefings where possible (for example to the central University Learning and Teaching Forum), and that we’ll feed back to the 100 or so academics involved in the study before we feed back to the community more widely.

Above all, Understanding Academics has been one of the most exciting and insightful projects any of us have ever attempted in a library context. 

Using Kahoot in Library Induction and Teaching Sessions

A colleague at York, Tony Wilson, brought Kahoot! to our attention recently for possible use in teaching and orientation sessions: it's a really nice quiz tool. There is nothing new about using quizzes in library sessions, and there are about a million and one tools out there for making them, but Kahoot differs in its execution of the idea. It's so much slicker and just more FUN than anything like this I've looked at before. And interestingly, it already seems to have currency with students:

One of the most useful aspects of a quiz is that people are asked to actively engage with the information rather than passively receive it. I'm absolutely convinced the students are remembering more this way than if we just presented them with the complete information.

4 reasons Kahoot works so well

It's really, really nice, for these reasons in reverse order of importance:

The music. It has cool retro sort of 8-bit music in the background.
The aesthetics. It has bright colours and looks generally nice. Here's what a question looks like:

An example of a question as it looks on the screen during the quiz

The leaderboard. Oh yes. It has a LEADERBOARD. This is the key thing, really: people put in their nicknames and after each question the top 5 is displayed (based, obviously, on how accurate their answers are but also how quick). Even completely non-competitive people get excited when they see their name in the top 5... I tweeted about using Kahoot and Diana Caulfied chimed in about the tension the leaderboard brings:

The mobile view from the student perspective

It's VERY easy to use. These things have to be SO simple to justify using them. In the case of Kahoot, you load up the quiz, and the students go to kahoot.it and put in the pin number the quiz gives you on the screen. It works perfectly on phones, tablets, or PCs. There's only one thing on the screen - the box to put the pin number in; and only one thing to do - put the pin number in. This simplicity and intuitive interface means everyone can get on board right away. There's no hunting around. 

You can also use it on an epic scale - one colleague just came back from using it with 95 undergraduates today, who responded really well, another used it with over 100 who were absolutely buzzing after each question. You can actually have up to 4,000 players at once.

Here's what the students are presented with when they go to the (very simple) URL:

An example from York

So here's the quiz I made for Induction, click here if you want to have a go. This particular link is (I think) in ghost mode, where you're competing with a previous group of players. So if you do the quiz now, you'll be up against my History of Art PostGraduates and will only show up in the Top 5 leaderboard if you get better scores than at least 25 of them! But normally in a session I'd use a completely blank slate.

Possible uses

In this example the questions I chose are basically just a way to show off our resources and services: it's all stuff I'd be telling them as part of a regular induction talk anyway:

My Kahoot quiz questions

The students I've used it with so far have really enjoyed it (as far as I can tell!). It's much more interesting than listing things, and, intriguingly, asking people to guess between possible options actually seems to make the answer more impressive than just telling them the fact outright. So for example in the Google Apps question above, there were gasps when I revealed they get unlimited storage, and the majority had chosen one of the lower options (the results screen shows how many people have chosen each option) - I'm fairly sure if I'd just told them they get unlimited storage, not one person would have gasped.

But there are plenty of other possibilities for Kahoot that are a bit more pedagogical in nature. Using it to measure how much of the session has sunk in at the end; using it at the start and end to measure a difference in knowledge; and using it to establish the level of student understanding:

There's also a Discussion mode rather than a Quiz mode. You pose a question and students type their answers in (rather than selecting from multiple choice) and their words come up on the screen. Anything rude or offensive can be deleted with one click. It would be a great way to find out what students felt unsure of or wanted to learn about, or to discuss the merits of a particular approach.

In summary

So I'd recommend taking a look at Kahoot and seeing if you can incorporate it into your teaching. As well as using it throughout Induction I'm planning on using different kinds of quizzes as part of infolit sessions and am excited to see how that works. You can easily incorporate your own library's images and videos and the tool is free, very easy to use, nicely made, and FUN. 

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window

- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension, because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What's more, our discovery services even have a little tickbox that limits searches to peer-reviewed articles, because those are the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

- It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

- It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

- It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

- It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. [ ….] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it's a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can't necessarily locate and expose those problems by reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though 'Truth' were a simple, singular entity lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to 'find out the truth', nor even to find out 'a' truth. It's to test the validity of a hypothesis under certain conditions. Research will never let us say “This is what we know”, but only “This is what we believe - for now”.

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says 'peer reviewed' may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.