guest posts

Padlet and Flipped Learning in Information Skills Training (by Emma Shaw)

Emma Shaw is the Library Manager and Liaison Librarian (Medicine) at Imperial College London. A while ago I saw her tweeting about the use of Padlet in her teaching sessions - students were using it in groups to come up with search strategies for healthcare-related databases. I like Padlet anyway, but I loved this use of it - so immediately beneficial, practical, and indeed stealable! So I asked Emma to write a guest post about the whole process, and she kindly agreed. She examines an example of this use of Padlet below.


I’m a Liaison Librarian at Imperial College London supporting medicine, along with four other Liaisons. Amongst a heavy timetable of information skills training for undergraduates and postgraduates, we have for several years been running a couple of Library skills workshops embedded in the MBBS Medicine course timetable for first-year medical students. We were using traditional presentation slides telling them about the Library and how to access it, along with a hands-on database searching component using the PubMed database. The database searching was taught via a paper-based tutorial handout: the students would sit in the computer lab and work through it for 20 minutes, with the opportunity to ask questions. More recently I would go away and wonder whether they would really apply what we were teaching in the format we were using. I asked myself if it was really meaningful for them, particularly as it was all new to them and we were teaching them how to look up research on a database when they hadn’t even started using journal articles yet.

The other reason it got me thinking was that information literacy is not an obvious skill that screams out ‘you need me in your life’, so you need to convey it to students in a way that makes them realise that they do - especially when they have other priorities and a timetable jam-packed with medical training. I’ve learned and observed over the years of teaching information skills that, in order for students to understand its direct use, to see its value and engage, they need to see it in context. By in context, I mean directly relating what they are learning in front of them to a specific area of their coursework or even clinical practice, rather than just telling them this is stuff they need to know now and in the future. This led me to question whether our presentation slides and paper tutorial were engaging enough and put the skills in sufficient context. Could there be a better way of delivering the content so that students engage with us and see its direct value?

Feedback from the students about how we could develop the session included comments like:

“Maybe have an example of an assignment similar to what we would have this year and show how we might use online resources for researching that assignment.”

 

“Interactively doing it together with the students instead of following instructions on a page.”

This made it apparent that we were right to question this, and that it was a good time to reconsider the delivery of our training. We could see why PowerPoint slides were just not cutting it anymore: the students needed interaction and context to stay engaged. On top of this, in the MBBS Medicine course they were already being presented with e-learning modules and a range of teaching methods and technology. I could see that very shortly our presentation slides and paper-based database tutorial were not going to be enough, and that our sessions were in danger of becoming irrelevant. We needed a fresh, new approach.

I had various sources of inspiration for revamping the workshops. We just so happened to have a visit from Caroline Pang, Director of the Medical Library at Lee Kong Chian School of Medicine, Singapore, one of Imperial College London’s partners. She demonstrated the library training they offered for medical students. This consisted of initial training on the Library and database searching; the students were then given a clinical question by the course tutors and had to work together in groups to form the best search strategy to answer it. She had a whole day dedicated to this project, as well as the presence of the tutors. This looked like a really good approach: not only was it more engaging to get the students actively searching for a given question, but it was also relevant to the course content, so they could directly see its value. If we wanted to do something similar, however, the problem we faced was that we only had 1.5 hours for each session!

It was then one day, at a meeting with tutors from the MBBS Medicine course, that the idea of flipped learning was presented to me. I won’t go into too much detail about it, as you can find a very good definition here by Professor Dilip Barad (Ph.D.). It’s essentially where work is given to the students to be done before the session, e.g. a video lecture or a tutorial to work through. The session itself is then dedicated to applying the knowledge they have learnt from the pre-sessional work, through activities and group work, and to answering their questions. In this way it becomes student-centred learning as opposed to trainer-centred.

To be honest, the flipped learning approach initially filled me with dread! There was the worry of giving them pre-sessional work with the risk of them not doing it. It also seemed like a lot of work and preparation, along with the fear of having to get the students into groups and manage them. With many other duties aside from training, it’s very easy to slip into the habit of repeating the same old slides each year. It’s easy, it’s safe. However, if it would improve engagement it was worth a try, and I thought this would be an excellent model for our teaching. It would allow us to save time in the session by getting the students to do the straightforward PubMed tutorial beforehand. This would then let us try out the database searching exercise in groups, which we didn’t think we would otherwise have time for. We could dedicate time in the session to getting them to do real searches on PubMed, using topics related to upcoming assessments, with the trainers feeding back to the groups as they searched. This would allow for more engagement, and they would directly see the use of searching a database by pulling up relevant articles that could be of use for their assessments.

The final plan for the session consisted of an online PubMed database tutorial created using Articulate software. This was essentially a series of online slides taking the students step by step through using PubMed, which we hosted on the Blackboard platform. We emailed the students a week in advance via the Medical School to ask them to do the online tutorial before the session. To encourage them to do it, we mentioned that it would be for a group exercise in the session. We then sent a reminder a couple of days before the session for good measure. Some good advice I got from an e-learning technologist was to give them an approximate time of how long the tutorial would take, so they could plan it around their schedules. We aimed for 30 minutes which we thought they would see as achievable.

For the session, we refined our slides for the Library induction section. We then gave a brief summary of what they should have learnt from the PubMed tutorial, with an opportunity for questions. There was some debate about what to do if the students didn’t do the tutorial before the session: should we include a more detailed summary just in case, or would we run the risk of disengaging the students through duplication of information? We decided to go with a very brief summary just confirming points from the tutorial, and to play it by ear and adapt the session if necessary. We then presented the students with a search question related to a future assessment, arranged them into small groups, and asked them to come up with a search strategy for that question. To provide the students with more feedback in the session and to give it a competitive edge (another bit of advice from a tutor - they like competition!), we added some blended learning into the mix. We used an online tool known as Padlet for the groups to add their search strategies to, so that we could then feed back to all the students on how they got on with the task. An example from one session is below.
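For readers unfamiliar with what such a search strategy might look like, here is an illustrative sketch in the style of PubMed’s Advanced Search history. The clinical question and the exact terms are invented for this example - in the sessions the questions were tied to the students’ upcoming assessments.

```
Question (hypothetical): Does regular exercise reduce depression in adolescents?

#1  "Depressive Disorder"[Mesh] OR depression[tiab]
#2  "Exercise"[Mesh] OR exercise[tiab] OR "physical activity"[tiab]
#3  "Adolescent"[Mesh] OR adolescen*[tiab] OR teenager*[tiab]
#4  #1 AND #2 AND #3
```

Combining a Subject Heading (MeSH) line with free-text title/abstract terms for each concept, then intersecting the concepts with AND, is the basic pattern the groups were asked to work towards.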

An example of a Padlet board, used by the students to detail search strategies


The first sessions we ran in 2016 went very well, with over 80% of the 320 students doing the pre-session tutorial. As it was successful, we ran it again in 2017, when over 90% of the students did the pre-session tutorial. The group exercises went well, and we could really see the students engaging with the task and coming up with good search strategies.

The feedback was mainly positive and gave the impression that our new teaching method was working. The following comments were from 2016:

“Clear explanations; the delivery was concise. The activities helped us put the skills into practice.”

“It's really nice to practice searching in class and in group and it really helps when comparing different searching methods within groups.”

Some feedback from 2017:

“Expanded on the pre reading material and explained things more clearly and gave sufficient exercises to ensure I actually understood the methods of searching databases”

“Learnt new and useful techniques for searching up articles. The session was interactive and fun. Everything was explained well and thoroughly.”

In terms of negative comments, in 2016 we had a few about the session’s content repeating parts of the pre-session tutorial. As these were our first sessions we hadn’t yet got the balance right in summarising the tutorial, so we adapted the 2017 sessions to avoid this. For the 2017 sessions, a few comments said it was a bit rushed and that they wanted more searching examples. Some students also struggled with concepts like Subject Heading searching, finding it too advanced. This could potentially be because some of the students did not do the tutorial beforehand, but I think we also need to consider those students who learn at different speeds. This is a challenge when teaching 45+ students per session under time constraints. However, it is something to bear in mind for the next sessions, perhaps by offering optional follow-up training on a one-to-one basis for those who require it.

Overall it’s been a real success. I put this down not only to the hard work of the Liaison team but also to the really good support we had from the tutors at the Medical School, who always make it clear to students that information literacy is a crucial part of the curriculum. For anyone wanting to try flipped learning, I would therefore always recommend getting the faculty on board. Despite all the preparation work, we also enjoyed delivering the session. It was a really good experience actually going around the room, engaging with the students and giving feedback, instead of mainly standing in front of PowerPoint slides and answering questions.

For anyone interested in looking at the session content, such as the online PubMed tutorial, please feel free to get in touch.

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window


- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension, because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What’s more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they’re the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

• It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

• It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

• It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

• It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though ‘Truth’ is a simple, singular entity that’s lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to ‘find out the truth’, nor even to find out ‘a’ truth. It’s to test the validity of a hypothesis under certain conditions. Research will never let us say “This is what we know”, but only “This is what we believe - for now”.

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says ‘peer reviewed’ may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that enables us to speak truth to power.

The only joke is to think that peer review can stand in for that.

A letter to a younger me

I've not been blogging for a month or so due to the arrival of baby Grace! But I'm back at work on Monday, so I'm gradually easing back into the world of librarianship, starting with some stuff I meant to blog links to ages ago but never got around to... I was delighted to be asked to write a post for the Letters to a Young Librarian blog run by Jessica Olin. It's a really good blog, and one I read a lot anyway, so it was really nice to do something for it. Here's my post.

I tried to write, really honestly (and at the risk of embarrassing myself a bit), a letter to the me that was about to start his first day in libraries, aged 25-and-a-half, back in 2006. I also tried to make it as relevant and useful as possible to a new professional today, so check it out and tell me what you think. It includes a list of things I think we really should be doing in our profession:

  • Communicating our value PROPERLY at every opportunity.
  • Embracing informality.
  • Trying to inspire people rather than placate them.
  • Understanding that work-life balance is important enough that it should not be considered with reference to what ANYONE ELSE IS DOING.
  • Recognising that libraries have always been product orientated, but now need to be market orientated.

Each of these is expanded in the post; it was fun to think about this stuff. Thanks to Jessica for asking me and the reader of her blog who requested the post!

I've written quite a lot of stuff on platforms other than here (or the Toolkit blog) in recent months - there's a complete list in the Guest Posts On... section down the right-hand side of the website (you'll need to scroll down!), but here are the most recent:

Normal blog service will now be resumed!