
Padlet and Flipped Learning in Information Skills Training (by Emma Shaw)

Emma Shaw is the Library Manager and Liaison Librarian (Medicine) at Imperial College London. A while ago I saw her tweeting about the use of Padlet in her teaching sessions - students were using it in groups to come up with search strategies for healthcare-related databases. I like Padlet anyway, but I loved this use of it - so immediately beneficial, practical, and indeed stealable! So I asked Emma to write a guest post about the whole process, and she kindly agreed. Here's an example of the use of Padlet she examines below.


I’m a Liaison Librarian at Imperial College London supporting medicine, along with four other Liaison Librarians. Amongst a heavy timetable of information skills training for undergraduates and postgraduates, we have for several years been running a couple of Library skills workshops embedded in the MBBS Medicine course timetable for the 1st year medical students. We were using traditional presentation slides telling them about the Library and how to access it, along with a hands-on database searching component using the PubMed database. The database searching was taught through a paper-based tutorial handout: the students would sit in the computer lab and work through it for 20 minutes, with the opportunity to ask questions. More recently I would go away and wonder whether they would really apply what we were teaching, in the format we were using. I asked myself if it was really meaningful for them, particularly as it was all new to them and we were teaching them how to look up research on a database when they hadn’t even started using journal articles yet.

The other thing that got me thinking was that information literacy is not an obvious skill that screams out ‘you need me in your life’, so you need to convey it to students in a way that makes them realise that they do - especially when they have other priorities and a timetable jam-packed with medical training. I’ve learned and observed over the years of teaching information skills that, in order for students to understand its direct use, to see its value and engage, they need to see it in context. By in context, I mean directly relating what they are learning in front of them to a specific area of their coursework or even clinical practice, rather than just telling them this is stuff they need to know now and in the future. This led me to question whether our presentation slides and paper tutorial were engaging enough and putting things in context. Could there be a better way of delivering the content so they engage with us, and see its direct value?

Feedback from the students about how we could develop the session included comments like:

“Maybe have an example of an assignment similar to what we would have this year and show how we might use online resources for researching that assignment.”


“Interactively doing it together with the students instead of following instructions on a page.”

This made it apparent that we were right to question this, and that it was a good time to reconsider the delivery of our training. We could see why PowerPoint slides were just not cutting it anymore: the students needed interaction and context to stay engaged. On top of this, in the MBBS Medicine course they were already being presented with e-learning modules and a range of teaching methods and technology. I could see that very shortly our presentation slides and paper-based database tutorial were not going to be enough anymore, and that our sessions were in danger of becoming irrelevant. We needed a fresh new approach.

I had various sources of inspiration for revamping the workshops. We happened to have a visit from Caroline Pang, Director of the Medical Library at Lee Kong Chian School of Medicine, Singapore - one of Imperial College London’s partners. She demonstrated the library training they offered for medical students. This consisted of initial training on the Library and database searching; the students were then given a clinical question by the course tutors, and had to work together in groups to form the best search strategy to answer it. She had a whole day dedicated to this project, as well as the presence of the tutors. It looked like a really good approach: not only was it more engaging, getting the students to actively search for a given question, but it was also relevant to the course content so they could directly see its value. If we wanted to do something similar, however, the problem we faced was that we only had 1.5 hours for each session!

It was at a meeting with tutors from the MBBS Medicine course that the idea of flipped learning was presented to me. I won’t go into too much detail about it, as you can find a very good definition here by Professor Dilip Barad (Ph.D.). Essentially, work is given to the students to be done before the session - e.g. a video lecture, or a tutorial to work through. The session itself is then dedicated to applying the knowledge they have learnt from the pre-sessional work, through activities and group work, and to answering their questions. In this way it becomes student-centred learning as opposed to trainer-centred.

To be honest, the flipped learning approach initially filled me with dread! There was the worry of giving them pre-sessional work with the risk of them not doing it. It also seemed like a lot of work and preparation, alongside the fear of having to get the students into groups and manage them. With many other duties aside from training, it’s very easy to slip into the habit of repeating the same old slides each year. It’s easy, it’s safe. However, if it would improve engagement it was worth a try, and I thought this would be an excellent model for our teaching. It would allow us to save time in the session by getting the students to do the straightforward PubMed tutorial beforehand. This in turn would allow us to try out the database searching exercise in groups, which we didn’t think we would otherwise have time for. We could dedicate time in the session to getting them to do real searches on PubMed, using topics related to upcoming assessments, with the trainers feeding back to the groups as they did the searches. This would allow for more engagement, and they would directly see the use of searching a database by pulling up relevant articles that could be of use for their assessments.

The final plan for the session consisted of an online PubMed database tutorial created using Articulate software. This was essentially a series of online slides taking the students step by step through using PubMed, which we hosted on the Blackboard platform. We emailed the students a week in advance via the Medical School to ask them to do the online tutorial before the session. To encourage them to do it, we mentioned that it would be for a group exercise in the session. We then sent a reminder a couple of days before the session for good measure. Some good advice I got from an e-learning technologist was to give them an approximate time of how long the tutorial would take, so they could plan it around their schedules. We aimed for 30 minutes which we thought they would see as achievable.

For the session, we refined our slides on the Library induction section. We then did a brief summary of what they should have learnt from the PubMed tutorial and gave an opportunity for questions. There was some debate about what to do if the students didn’t do the tutorial before the session: should we include a more detailed summary just in case, or would we run the risk of disengaging the students through duplication of information? We decided to go with a very brief summary confirming points from the tutorial; we would then play it by ear and adapt the session if necessary. Next we presented them with a search question related to a future assessment, arranged the students into small groups, and asked them to come up with a search strategy for that question. To provide the students with more feedback in the session and to give it a competitive edge (another bit of advice from a tutor - they like competition!), we added some blended learning into the mix. We used an online tool called Padlet for the groups to add their search strategies to, through which we could then feed back to all the students on how they got on with the task. An example from one session is below.

An example of a Padlet board, used by the students to detail search strategies

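To give a flavour of the kind of thing the groups produce (this is a made-up illustration, not one of the actual assessment questions), a strategy for a question like ‘Is aspirin effective in preventing stroke?’ might combine Subject Headings and free-text terms in PubMed along these lines:

```
#1  "Stroke"[Mesh]
#2  stroke[tiab] OR "cerebrovascular accident"[tiab]
#3  #1 OR #2
#4  "Aspirin"[Mesh] OR aspirin[tiab]
#5  #3 AND #4
```

The point of the group exercise is exactly this kind of decision-making: which concepts to search for, which synonyms to include, and how to combine the lines with AND and OR.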

The first sessions we ran in 2016 went very well, and over 80% of the 320 students did the pre-session tutorial. As it was successful we ran it again in 2017, when over 90% of the students did the pre-session tutorial. The group exercises went well, and we could really see the students engaging with the task and coming up with good search strategies.

The feedback was mainly positive and gave the impression that our new teaching method was working. The following comments were from 2016:

“Clear explanations; the delivery was concise. The activities helped us put the skills into practice.”

“It's really nice to practice searching in class and in group and it really helps when comparing different searching methods within groups.”

Some feedback from 2017:

“Expanded on the pre reading material and explained things more clearly and gave sufficient exercises to ensure I actually understood the methods of searching databases”

“Learnt new and useful techniques for searching up articles. The session was interactive and fun. Everything was explained well and thoroughly.”

In terms of negative comments, in 2016 we had a few to do with some of the session’s content repeating parts of the pre-session tutorial. As these were our first sessions we hadn’t yet got the balance right in terms of summarising the tutorial, so we adapted this for the 2017 sessions. For the 2017 sessions, a few comments said it was a bit rushed and they wanted more searching examples. Some also struggled with concepts like Subject Heading searching, and found it too advanced. This could potentially be because some students did not do the tutorial beforehand, but I think we also need to consider those students who learn at different speeds. This is a challenge when teaching 45+ students per session within the time constraint. It is something to bear in mind for the next sessions - perhaps offering optional follow-up training on a 1-2-1 basis for those who require it.

Overall it’s been a real success. I put this down not only to the hard work of the Liaison team but also to the really good support we had from the tutors at the Medical School, who always make it clear to students that information literacy is a crucial part of the curriculum. For anyone wanting to try flipped learning, I would therefore always recommend getting the faculty on board. Despite all the preparation work, we also enjoyed delivering the session. It was a really good experience actually going around the room, engaging with the students and giving feedback, instead of mainly standing in front of PowerPoint slides and answering questions.

For anyone interested in looking at the session content, such as the online PubMed tutorial please feel free to get in touch.

Using Kahoot in Library Induction and Teaching Sessions

A colleague at York, Tony Wilson, brought Kahoot! to our attention recently for possible use in teaching and orientation sessions: it's a really nice quiz tool. There's nothing new about using quizzes in library sessions and there are about a million and one tools out there for making them, but Kahoot differs in its execution of the idea. It's so much slicker and just more FUN than anything like this I've looked at before. And interestingly, it already seems to have currency with students:

One of the most useful aspects of a quiz is that people are asked to actively engage with the information rather than passively receive it. I'm absolutely convinced the students are remembering more this way than if we just presented them with the complete information.

4 reasons Kahoot works so well

It's really, really nice, for these reasons in reverse order of importance:

The music. It has cool retro sort of 8-bit music in the background.
The aesthetics. It has bright colours and looks generally nice. Here's what a question looks like:

An example of a question as it looks on the screen during the quiz


The leaderboard. Oh yes. It has a LEADERBOARD. This is the key thing, really: people put in their nicknames and after each question the top 5 is displayed (based, obviously, on how accurate their answers are but also how quick). Even completely non-competitive people get excited when they see their name in the top 5... I tweeted about using Kahoot and Diana Caulfied chimed in about the tension the leaderboard brings:

The mobile view from the student perspective


It's VERY easy to use. These things have to be SO simple to justify using them. In the case of Kahoot, you load up the quiz, and the students go to kahoot.it and put in the pin number the quiz gives you on the screen. It works perfectly on phones, tablets, or PCs. There's only one thing on the screen - the box to put the pin number in; and only one thing to do - put the pin number in. This simplicity and intuitive interface means everyone can get on board right away. There's no hunting around. 

You can also use it on an epic scale - one colleague just came back from using it with 95 undergraduates today, who responded really well, another used it with over 100 who were absolutely buzzing after each question. You can actually have up to 4,000 players at once.

Here's what the students are presented with when they go to the (very simple) URL:

An example from York

So here's the quiz I made for Induction, click here if you want to have a go. This particular link is (I think) in ghost mode, where you're competing with a previous group of players. So if you do the quiz now, you'll be up against my History of Art PostGraduates and will only show up in the Top 5 leaderboard if you get better scores than at least 25 of them! But normally in a session I'd use a completely blank slate.

Possible uses

In this example the questions I chose are basically just a way to show off our resources and services: it's all stuff I'd be telling them as part of a regular induction talk anyway:

My Kahoot quiz questions


The students I've used it with so far have really enjoyed it (as far as I can tell!). It's much more interesting than listing things, and, intriguingly, asking people to guess between possible options actually seems to make the answer more impressive than just telling them the fact outright. So for example in the Google Apps question above, there were gasps when I revealed they get unlimited storage, and the majority had chosen one of the lower options (the results screen shows how many people chose each option) - I'm fairly sure if I'd just told them they get unlimited storage, not one person would have gasped.

But there are plenty of other possibilities for Kahoot that are a bit more pedagogical in nature. Using it to measure how much of the session has sunk in at the end; using it at the start and end to measure a difference in knowledge; and using it to establish the level of student understanding:

There's also a Discussion mode rather than a Quiz mode. You pose a question and students type their answers in (rather than selecting from multiple choice) and their words come up on the screen. Anything rude or offensive can be deleted with one click. It would be a great way to find out what students felt unsure of or wanted to learn about, or to discuss the merits of a particular approach.

In summary

So I'd recommend taking a look at Kahoot and seeing if you can incorporate it into your teaching. As well as using it throughout Induction I'm planning on using different kinds of quizzes as part of infolit sessions and am excited to see how that works. You can easily incorporate your own library's images and videos and the tool is free, very easy to use, nicely made, and FUN. 

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window


- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension, because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What’s more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they’re the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

• It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

• It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

• It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

• It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though ‘Truth’ is a simple, singular entity lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to ‘find out the truth’, nor even to find out ‘a’ truth. It’s to test the validity of a hypothesis under certain conditions. Research will never let us say “This is what we know”, but only “This is what we believe - for now”.

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says ‘peer reviewed’ may be the greatest disservice we do to the thoughtful scepticism we seek to develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.

Where to start when planning a talk or teaching session

This seems obvious, right? And yet so often it doesn't happen.

Venn diagram showing 'what you know' in one circle and 'what matters to your audience' in the other. Where they overlap is where your talk should be.


There are two main ways in which, when we give talks or run teaching sessions and workshops, we don't adhere to this principle. Clearly no one ever strays entirely into the blue circle (giving a talk about a subject which matters to your audience but which you know absolutely nothing about is pretty much impossible), but we can easily spend too much time in the orange circle where it doesn't overlap, or just not make the most of the overlapping section of the diagram.

NB: I very deliberately use the phrase 'what matters to your audience' above - rather than 'what interests them', because I'm not advocating taking a superficial approach and only telling your community about cool stuff they already care about. We can tell them things they don't know they need to know! Sometimes they wouldn't choose to hear it in advance, but they thank us afterwards. So it's very much what matters to them, whether they realise it before the session or not.

There's no excuse for telling an audience things which don't matter at all - unless it's a small part of your presentation, to serve a particular purpose.

Telling people everything we know

I don't wish to generalise, but librarians often give out too much information, particularly early on in the relationship between the institution and the user. Induction or Welcome talks often contain vast swathes of detail, or a talk at a conference will include ALL the info about a particular project - and often this can actually get in the way of the message. After a while the audience gets overwhelmed and starts to filter, or just switches off. We can only retain so much new information at one time.

So when crafting a talk or presentation, the starting point should not be 'What do I know about this subject?' but specifically 'What does the audience want to know about this subject, that I can tell them?'

Missing out on the overlap

There's a second, more subtle, factor here. The overlap between what matters to your audience and what you know about can also include things which aren't part of your core message. In other words, you can establish your credibility with your audience by telling them things which matter to them, and THEN telling them about the library's relevance to them - they're more inclined to take you seriously if you aren't just advocating for your own service or value. I use this a lot in infolit teaching - I'll tell the students about internet privacy, different search engines, how to use social media in an academic context and so on, as well as telling them what the library does and how to use databases effectively. Because it's in the overlap of the diagram above - I know about this stuff, and it matters to my audience. What's really interesting is that when I started doing this (rather than just talking about the library) the feedback, both the scores and the qualitative comments, went up hugely; they really liked the sessions. But when asked to rate the most useful part of the session, the vast majority mention the bits about the library!

As long as it doesn't conflict with our ethics and values, libraries can provide both services and expertise based on what our users need - it doesn't have to be a 'library' function in the traditional sense.

So: create presentations and teaching from the audience's point of view first, working back to what you know about what matters to them, rather than the other way around. It's only a small shift but it makes a huge difference.

6 Useful Online Tools for Academics (and anyone else who teaches)

 

I teach a session on the PGCAP (Post-Graduate Certificate of Academic Practice) at York - a programme of teaching-related workshops and classes over the course of a year, which every new teaching academic has to attend.

Here's the presentation from this year's workshop, EdTech: Useful online tools for academics:

It covers Blogging and Twitter (specifically their possible use in teaching, which is a lot less straightforward than their use in research, or academic profile building), the excellent Padlet which people always seem glad to be introduced to, Prezi itself, Slideshare, and sources of copyright free or creative commons images.

In previous years I've done a session on Information Literacy in the Digital Age - but I find it increasingly difficult to keep delivering the same things year after year. If I don't rewrite stuff, the feedback scores start to go down as I get less interested; clearly my declining interest is communicated to the audience somehow, despite my best efforts to prevent this. So last year, when I had to submit the brief for the PGCAP brochure, I decided I'd redo the session and make it about useful online tools, and about trying to help the academics become more digitally literate (rather than talking about student digital literacy) - that seemed to be the thing people enjoyed most about the previous version of the session, even though it was only a small element.

This academic year I've several times completely redone a workshop or teaching session, stressed myself out massively in doing so, but ultimately felt much better delivering the new session and got better feedback too. It really is worth it. But it takes a huge amount of time, and on this occasion I'd forgotten I'd need to do it - and the session was on the 3rd day back after Christmas... So it was a nightmare, really! But, ultimately, worth the time it took.

This kind of primer session on online tools is, in my experience, welcomed by academics. When asked about the most useful aspects of the session a lot of the feedback mentioned this, e.g.

  • Opened my eyes to new technologies and avenues to share teaching content.
  • Use of blogs images, twitter…all. I had heard of many of these but the info was helpful to know how to use them effectively.
  • Seeing what is available, evaluated by presenter – gave good insight
  • Very objective analysis of tools and possible use..
  • Examples of how the tools had been used for teaching and learning

So something as simple as flagging up tools some may not have heard of, and giving examples and (objective, rather than evangelical) analysis of how they're used so that even those who have heard of them get something out of it, is often enough to be genuinely useful.

This is less and less the case, however. I was talking with someone from the eLearning Team (not part of the Library or IT) this week, and we agreed that a couple of years ago just introducing these technologies to a group of academics was enough - but now there's much more understanding of the tools that are out there. So we have to up our game and move on to more in-depth discussion of how to use these tools, rather than just what they are. Increasingly (albeit not in the presentation above) I find myself wanting to present things by user need, rather than by platform.