
The Student Communications Audit

This post brings together two articles from the Lib-Innovation blog, where colleagues from the University of York and I write about what's happening in the Library.

At York there are audits every few years around student communication. They're conducted centrally by the University, rather than being library-specific. The most recent one was shared with the Library Marketing and Comms Group (on which I sit) by York's Internal Communications Manager, and she's kindly given me permission to share the findings here because I think they're absolutely fascinating. They challenge some conventional wisdom, and reveal a lot.

There was a lot about email so part 1 of this post is devoted to that area; lower down part 2 covers social media, lecture capture, the VLE and other types of comms.

It's important to note the information was gleaned through focus groups rather than surveys in order to properly capture nuance, so it's not a giant sample size (under 100 people) and inevitably the views reflected will be representative of students engaged enough to turn up for a focus group... But personally I find the findings more useful than generic articles in the Higher Ed press about students of today.

Throughout this article I'll be using phrases like "Students prefer to do X" - the obvious caveat is that I mean "Students at York do X" but I'm not going to write that every time...

How do students communicate? The main findings around email 

1) Email remains the primary and preferred channel of communication with the University

I like this one because it confirms something I've thought for a while - that email is NOT dead. It gets a bad press and it's definitely far less cool than social media, but it still has a function. It's not that students especially love email, it's that they want US - the University and its key services - to communicate key info this way. 

Your users are triaging your emails, checking first on their phones...

2) Email mainly gets checked on phones, and this happens very frequently

Students check email primarily on their phones, sometimes moving on to a PC / laptop later (see point 3 below). 

Students check their phones for emails first thing when they wake up, last thing at night, and several times in between - many students have 'new message' alerts set up to go to their lock-screens, and will check new emails as they come in even whilst doing other things such as attending lectures.

3) Students triage messages according to 5 criteria

Students make quick decisions on whether an email gets read there and then, binned, or deferred. They consider (in order of importance):

Relevance - title, sender, and the opening part of the message visible on their phone before they press to open the message; 

Familiarity - do they know and trust the person sending the email? Trusted senders include tutors, supervisors, departmental administrators, the Library, Careers Service, and Timetabling; 

Urgency - does it relate to something important that day or a pending deadline; 

Action - do they need to do something? (Notice this is 4th on the list of importance...) Interestingly, if they do need to act they'll star the email, then find a PC or laptop to log onto and deal with it;

Time - if it looks like it can be dealt with quickly they'll read it right away and then delete, file or just mark as read. 

4) The high volume of email they receive is okay, on one condition... it MUST be targeted

Students get a huge volume of emails but they don't mind this as long as the emails are targeted. They object to irrelevant emails and perhaps more so to emails that appear to be relevant but turn out not to be - one example given was an invitation to an employers' event for the wrong subject area or year / status. The sender of that email lost the trust of the students and future emails were deleted upon arrival, unread. 

Any sense of emails happening automatically or without proper thought as to their relevance was met with dissatisfaction. A particular type of email came at the same time each day, suggesting it was automated - this too became one to delete automatically. 

Newsletters and digest emails were read, but often only the first part (too much scrolling and the email was abandoned) and these are the first to go - to be deleted unread - when there's a day with an overly high volume of emails. 


What can libraries change about the way we email students? 

The first thing is don't give up on email. Students expect communication from us to be via this medium, and it was strongly expressed that important information should come this way - key info can be shared via social media but must ALSO be shared via email because it's the one channel everyone checks. The reports of email's death have been greatly exaggerated. 

The second thing is, small details - like titles - really matter. The Library appears to be on the list of trusted senders, but in order to get read you need a decent subject line. (This didn't come up in the audit but I'd argue time of day is important too - if students get a truckload of emails between 9am and 10am, it may be better to join a shorter queue for their attention later in the day at 11am.) Also, because students primarily read email on their phone, you need a very strong opening line. Open your email client on your phone right now - how much can you read without opening a specific email? The way my phone is set up I get to see about 40 words. So your first few words need to go straight to the heart of the matter - no long intros.

This is obvious, right? We all check emails ourselves on mobiles, we know what it's like. But how many times do we craft emails specifically with the receiver on their phone in mind? I can't speak for anyone else but in my case the answer is: not nearly often enough. 

Thirdly, segment your audience and target them with relevant emails - never include a group in a mass email unless they are directly relevant and would benefit from the info. If an email isn't essential to anyone, does it even need to be sent at all? There are too many emails that are sent out not just to the relevant people but to a smattering of less relevant people too. Every time we do that we diminish our value as communicators - our currency - and get closer to joining the dreaded auto-delete list. 

And related to that, reduce automation because it suggests we're not trying hard enough to avoid wasting their time. It's very hard to think carefully whether or not to send an email if it's automated. I've always said we shouldn't send newsletters out at the same time each week or month - it should be because we have a critical mass of useful things to tell our audience, not because 'it's the time we send out the newsletter'. So anything automated should at least be reviewed to make sure it's still serving a worthwhile purpose and not alienating our users. 

The surprising popularity of the VLE and the unsurprising popularity of Lecture Capture

I must admit I was a little surprised to read that Blackboard was popular with the students. They actually say it is difficult to navigate, but once they've mastered it, and where their tutors use it well, they are generally very positive about the VLE.

In particular the students liked the discussion forums where the lecturer takes the time to get involved. The opportunity to ask questions and clarify parts of the lecture they didn't understand is very much appreciated, and they highlighted the public availability of all the questions and answers - as opposed to a private conversation between student and lecturer which is seen as less fair and transparent.

The other things noted as positives were the email notifications when new content is added, and the posting of lecture materials and supporting information.

The most popular part of the VLE, however, was Replay, the lecture capture system that allows students to re-watch lectures (or catch-up if they were ill - lecture capture has been shown time and time again not to negatively impact on attendance, so it's not used as a way to avoid having to actually go to lectures...). To quote the report:

"At degree level they find it difficult to take in the level of detail and complexity in one sitting and so the opportunity to re-visit the lecture to listen and learn again, to take better notes and to revise is something they really, really value"

It is particularly valuable in conjunction with the discussion forums mentioned above, and reduces the need to seek out the tutor for extra guidance.

Not all students in the focus groups are on courses which use lecture capture extensively - when those who weren't heard from those who were, they made it clear they'd very much like this facility on their modules too.

You can read more about Replay on the E-Learning Development Team blog.

Students, social media and the University

As mentioned above the students would expect anything essential to be communicated by email. Social media can be used as well, but shouldn't ever be used exclusively for key info such as timetable changes and so on.

They're happy for Facebook to be used for 'fun stuff' but not serious stuff - they use it more than any other medium between themselves, but there's a mixed reaction to the University joining in. WhatsApp, Messenger and Snapchat are used a lot for peer-to-peer communication, and they really don't associate this kind of platform with the University and its communication channels at the moment. YikYak is known primarily as a place for cruelty and harm - students don't tend to use it unless there is a particular scandal they want to hear the gossip about.

Interestingly to me, Twitter was reported as not being used much and is considered a tool for 'old people'. The main downside noted was the lack of control over who sees what. At the Library we actually find Twitter has quite high levels of engagement - the most of any social media platform we use - and the audit contradicting this chimes with other anecdotal evidence I've heard online of students being reluctant to admit they use Twitter but nevertheless using it anyway. It's also a lesson in trusting the stats, but interrogating them to make sure the engagement is actually coming from your users rather than your peers (and in our case, it is predominantly our actual users who interact with us and benefit from our Twitter account).

Webpages and Google

Students prefer to use Google to find information, even if it's info they know exists on the University website. I do this too - I Google my query plus the word 'York' even for stuff on the library website because it's quicker and more reliable. The students don't always trust the University's search function... They also don't expect news via the website - they feel they'll get any updates they need via email and social media. 

Perhaps the most interesting theme for me which came up in this section was one of relevance - students feel a lot of University comms are aimed at potential students, rather than at them, the current incumbents. There's an opportunity here for libraries: we are predominantly focused on existing users, and we can pull in other content for an internal audience, for example via Twitter, and share this with the students too.

4 questions to ask to help you simplify your comms

Simplification is often useful to raise engagement with an audience. Not always, but often.

It's not about dumbing down, or making things superficial, or losing the nuance. The aim is simply to take away anything that isn't essential for the message. Get rid of the extraneous. Be brutal. It's like tuning out the white noise so you're left with a perfect signal; there's less to distract your audience, and a greater chance they'll understand the message and respond to it. Everyone is overwhelmed with information so anything to cut through that and make it easier for your audience is worth doing.

There is a checklist of four questions you can ask yourself (in descending order of severity!) which can help to simplify your comms and your key marketing messages. I do this all the time in my day-job and I absolutely guarantee it makes a difference in the level of engagement I get from my audience.

1) Do we need to send this at all?

If we over-saturate our audience then our communications begin to lose value over time. So we have to be careful to communicate only when we have something important enough to say to that particular audience. The first way to simplify, then, is the most extreme: do I really need to send this? Most of the time the answer is yes, but occasionally opportunities to hold back arise, and those opportunities need to be taken.

Think of it from your audience's point of view. Would you want this if you were them?

2) Can we get rid of anything extraneous?

Again, it's not about making it TOO short. It's not about superficiality. It's about making it as short as possible whilst maintaining the meaning and the nuance of the message. Every sentence or element should be scrutinised - if it doesn't NEED to be there, get rid of it.

Some elements of your message can be present for the user at the next stage, rather than in this initial contact - for example, if the comms is asking users to go to a website, some of the key information can live on the website without needing to be included in the comms themselves.

For the user, a long message is a) more likely to go unread / unwatched entirely, b) more likely to be abandoned before the end, and c) less likely to leave the key information in their head afterwards.

3) Can we cascade this over more than one message?

Sometimes once you've got rid of what's extraneous you're still left with a LOT. To cut it further would be to leave out essential information. From a marketing point of view, it can be worth marketing one big thing at a time rather than trying to market everything at once - allowing your audience a chance to lock in on one key theme at a time, reducing the risk of getting lost in the detail. This is risky, because it can lead to over-communication (which goes against the first principle above) so you have to make a judgement call. But with something like academic induction, for example, telling everyone everything simply doesn't work. We KNOW people can't remember all that. So in that scenario it can be worth trying to cascade one large message into two or three smaller ones.

4) Can the language be made clearer?

So you've done 1 - 3: yes it's essential and needs to go out; it's as short as it can be without losing nuance; it needs to be just one message. So can anything be done with the language and tone to make it clear and simple to follow? Can it be less formal without losing credibility? Is there any obscure terminology that your users won't all be familiar with? Are there acronyms that need replacing or explaining?

You can't always simplify comms, and it's not always desirable to do so. But if you ask yourself these four questions before disseminating key messages, the chances are you'll get a higher level of engagement from your audience.


BONUS QUESTION: Can we segment our audience?

Segmentation is too complex to get into detail here, but the basic principle is to divide your audience up into smaller groups and tailor the communication to each one. This often presents opportunities for simplification, because you're not having to include all of the information all of the time - you can pick and choose the parts that matter most to each segment.

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window

- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What’s more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they’re the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

·        It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

·        It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

·        It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

·        It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though ‘Truth’ is a simple, singular entity that’s lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to ‘find out the truth’, nor even to find out ‘a’ truth. It’s to test the validity of a hypothesis under certain conditions. Research will never let us say “This is what we know”, but only “This is what we believe - for now”.

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says ‘peer reviewed’ may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.

Lights, camera, Action Plans!

 

In academic libraries we're all seeking ways to deepen our relationships with the Departments we look after, and at York we've found a really valuable tool for doing this. Each year we come up with an Action Plan for each Department, and we discuss and modify this at a meeting with each Head of Department and Library Rep. Then over the following year we carry out the actions we agreed. It doesn't sound like that revelatory an idea, but the point is it's a genuine and meaningful piece of progress we've made - we get a lot done via this method.

For this year's Action Plans we made a change to the format and turned them into more of an annual report. The slides above are an adapted version of a presentation I gave at #BLA15, the Business Librarians Association Conference in Liverpool last month. It's only a brief overview, but it covers a process we've found really valuable, and which the academic Departments have found useful too.

The conference itself was great! As ever. I could only go to the first day but I enjoyed all the presentations I saw, and Jess has put together a really nice write-up of all the presentations which you can read on her blog here.

I particularly got a lot out of Emma Thompson's talk, which is worth checking out on Slideshare. Her idea of offering the library as a 'business' for PGT students doing their Market Research module - letting them do actual market research on us - is one I'm really interested in trying out here. The students get real experience and the library gets useful feedback - brilliant.

 

Digital Scholarship Training at @UniofYork: Facts and Figures

 

Andy Priestner has written about the importance of writing reports, even if no one asks you to, to showcase the value of what the Library is doing.

...it is not enough just to collate this data and wait to be asked for it. It is far better to ensure that the people who need to know this stuff are informed, at least once a year, of these top level statistics, before they ask for them: a pre-emptive strike if you like…
— Andy Priestner in Business School Libraries in the 21st Century, edited by Tim Wales

(You can read a larger excerpt from his chapter here.)

With that in mind, a while ago I produced an internal report on the Digital Scholarship Training I've run at York (and various exciting things happened as a result of doing this) - which I've now expanded into an external version, which includes the Google Apps for Education training run by my colleagues.

My message to you is that if you have any expertise in the area of digital scholarship, scholarly comms, Web 2.0 in HE etc, find a way to offer it to your academic community! As I've mentioned before, we've found they're ready for it, and excited about the opportunities.

Below is a tweaked version of the report to include the message in the paragraph above - the original version (which can be found on our Library slideshare page) is aimed at York staff and asks them to get in touch for information about upcoming events. Putting it on our Slideshare page will hopefully increase the profile of something very positive for the Library and IT - we've both found that there's been some reputational gain from helping people out with things they really value right now, rather than solely focusing on what we've always done. We've also both found that word is starting to spread and we're becoming go-to people within the University when help or advice is required in these areas, which is excellent.

There's more about the nature of the training itself in this blog post on the Networked Researcher suite of workshops, and this later post about how the training is shifting slightly.