VLE minimum standards evaluation – looking around

This second post from our VITAL Baseline evaluation work this year tells part of the story of looking outwards to the sector, seeking to understand best practice around minimum standards for VLEs.

As part of our original work developing the VITAL Baseline, the working group thought it would be useful to see what, if anything, was happening with VLE minimum standards across the HE sector at that point in time. With a pressing timescale, and a clear directive that a) we had to have a standard and b) what it should consist of this time round, the most we could do was look at who had some kind of minimum standard or expectation for their VLE and what was in it. This gave us some simple benchmarks and reassured us that we hadn’t missed anything we could include while we had the opportunity to shape our own set of standards at the development stage. As a low-cost way (for us) of gathering some rough-and-ready data, a call went out to the ALT mailing list asking people to contribute to a Google Sheet recording whether their institution had any kind of minimum standard or expectation, or was looking to develop one, and what was in it. As a group we didn’t engage any further with this work (although one of us, Peter Reed, wrote up and presented some of his own analysis of the original data at a few conferences and user groups, for which you can find an entry point here). I had always hoped to return to this work with some questions about the sector-wide landscape as part of any evaluation, and that what I learnt from the exercise would also inform any recommendations we would make for future development of the standard. The main areas I now want to examine are:

  • Re-examine what we can understand from the original data. I was anxious about how crowdsourced data stands up, and what the implications of this approach were for an evaluation, so I would also like to review this methodology in some detail. 
  • I wondered what results we would get if we defined more closely the group(s) of institutions against which we wanted to benchmark, and approached these actively. I felt at the time that the original data might encourage a view that minimum standards policies were a standard feature of the topography of Technology Enhanced Learning strategies and policies in HEIs, and I wanted to test this. 
  • Then, as well as asking who has a standard and what is in it, the more interesting questions would include: why does one exist for that institution; how is its rationale contextualised (e.g. is it linked to a learning and teaching strategy?); what kinds of processes were involved in its development; is it compulsory; and has any evaluation taken place?
  • For those institutions that don’t have one, what is the story, if there is one?
  • Beyond approaching institutions, we would look at any literature, conference proceedings, blog posts and mailing list discussions around minimum standards as part of the evaluation work more generally, but specifically for information about why institutions do or do not have a VLE baseline.

In picking this work up again for our Baseline evaluation project, I thought an obvious, if simplistic, initial sampling strategy would be to survey the rest of the Russell Group, which would make for a manageable, quick desk-based web search. I looked at the websites of the Russell Group institutions for any indication of an institutional VLE standard in place, and what further detail was available publicly. A first-glance review found that in December 2015:

  • Six definitely have an institutional standard with compulsory/required/expected elements.
  • Three offer institutionally recommended good practice guidance.
  • Fourteen seemed to have no institutional standard requirement or expectation.
  • For one institution I couldn’t find any detail one way or another.

I’ve collated a list with details of what each standard or recommendation above consists of, but one immediate thought is that there might well be standards at faculty or school/departmental level where there is no institutional requirement. It would be useful to uncover this information, as my own feeling is that devolving standards to more local levels is probably the way to move in the future; we’ll discuss this in a later post on the recommendations that emerge from our evaluation work.

The obvious next step is to contact institutions directly to confirm whether what we have found accurately reflects their situation, to get a little more detail on the whys and hows listed above in this post, and to find out whether there are more local standards that don’t appear on central websites. (We’ll also first need to look into ethical approval if we intend to publish our results.) I’ll make the list available when it is as complete as possible.

Another question this simple snapshot of Russell Group institutions raises for me is whether Liverpool is leading in this area, as we appear to be in the minority here, or alternatively whether we were late arrivals to the debate around minimum standards and have taken a different direction to the majority. Obviously the snapshot view is unconfirmed, and looking at a bigger group of institutions would give better data, but as I began to discuss in the first Baseline evaluation post, we want to look at the evidence that informed the decision to make this one of the first initiatives in our institutional TEL strategy, to assess its strengths and its limits. Does this snapshot tell a contrasting story to that of other sources of evidence used to develop the TEL strategy and the Baseline originally? Are there any similarities between Liverpool and the other Russell Group institutions that have a standard in place? I’d be interested to discover the extent to which minimum standards policies feature in Technology Enhanced Learning strategies and policies in HEIs when measured in a larger sample group. Thinking further about the sample group, I wondered whether we should follow this 2014 UCISA report and look at all pre-92 institutions for a greater mix of institution types and, in time, post-92 institutions too. More work, but valuable for developing our understanding.

Just as the original Baseline development represented an opportunity to look at some of the internal evidence of the ways in which the VLE is being used within the institution, this outward-looking work also offered an opportunity to richly inform our thinking and strategic approach by looking to the sector and the experience and evidence there. We want to assess the extent to which these sources of evidence were exploited in the original development, and how they could be in the future, as part of our recommendations for any future institutional TEL initiatives.

As an almost tangential sign-off: when thinking about what I’d write for this post, I realised that I have been using the terms ‘benchmark’ and ‘benchmarking’ unthinkingly. Then I panicked: what are they, and why would you want to do this? The panic didn’t last long, as I found this useful-looking JISC resource, ‘What is benchmarking?’. If anyone knows more about this guide we’d appreciate your thoughts and advice, but I think I am going to use it for writing up this sector scene-setting aspect of our evaluation work.

Dan

Learner experience research: report from our second ELESIG Northwest event (Oct 8 2014)

The early start of the day and autumn cold didn’t deter our ELESIG NW participants! ELESIG is a Special Interest Group for those interested in learner experience research with a focus on technology. There are a number of regional groups now – in London, the Midlands, Scotland and Wales – with an ELESIG South forming in December this year. Thanks to all the speakers, the participant contributions, and Roger Harrison, ELESIG NW co-convenor, University of Manchester, who ensured the smooth running of the day.

ELESIG NW – audience participation

Note: presentations available under ‘Podcasts’ at https://www.softchalkcloud.com/lesson/serve/XNvZFLDt5uzRfI/html

Presentation

Damian Keil & Adam Palin, MMU, presenting on their iBook development for sport science students at the ELESIG NW symposium (photo credit: Sarah Copeland)

Damian Keil and Adam Palin from MMU started off the day talking about their development of e-learning resources for a sport science course using iBooks. Each student worked with the electronic learning materials on an iPad. We got an insight into the development process, the scale of the investment and the benefits for the students, evidenced through exam results, surveys and focus groups. Participants interested in developing quality resources or engaging students in distance learning courses all took an interest in this initiative.

Members’ corner sessions

In the Members’ corner section, a ten-minute appetiser format allowed ELESIG members to talk about their research plans or table ideas for feedback and discussion.

First, we heard from Huw Morgan of Salford Business School, who developed video resources and adopted a flipped classroom approach with his students. We got an insight into students’ patterns of using these videos for their learning. Jim Turner, our #elesig tweeter on the day, even had his tweet about Huw Morgan’s #elesig presentation retweeted by Eric Mazur! This was another example of developing an active learning approach, with some useful learning points from Huw.

Second up was Roger Harrison, who proposed the question ‘What strategies can we use to evaluate PG distance learning programmes?’ Finally, Carol Wakeford (University of Manchester, Life Sciences) also put forward an evaluation challenge from their third-year undergraduate module, in which students design, create and evaluate an e-learning resource. Carol wanted to elicit strategies from ELESIG participants for overcoming the problem of not having enough student volunteers to evaluate these resources. The discussion that followed the appetiser presentations showed that this was a helpful and engaging format. It’s always useful to hear what colleagues are working on and how they are formulating and overcoming the challenges of curriculum design and of evaluating learner experiences.

Just before our tea break, Roger managed to engage us in a vibrant discussion: we had to imagine how we would evaluate our ‘student’ (a dog) if they couldn’t speak. This inspired activity certainly made us think about the repertoire of evaluation strategies available to us!

Professor Allison Littlejohn – keynote

Professor Allison Littlejohn keynote

Professor Allison Littlejohn’s keynote, ‘Seeing the invisible: understanding learner experiences’, challenged us to think about the meaning-making process behind student learning data. Drawing on Zimmerman’s theory of self-regulation, her team of researchers investigated the activities and strategies that adult learners use to self-regulate their learning in the context of a MOOC. Resources arising from a project investigating professional learning in MOOCs are also available on their website – a useful resource for anyone interested in professional learning or in MOOCs (see also References below). Conclusions were drawn by examining the learning behaviour of those who perceived themselves as high or low self-regulators. For instance, high self-regulators focused on learning and performance, while low self-regulators focused on participation in the MOOC! The study also concluded that the learning environment had an effect on the way participants learned, irrespective of whether they perceived themselves as high or low self-regulators. To me, the keynote was an excellent demonstration of how quality insights can be gained from research underpinned by theory.

Professor Allison Littlejohn’s summary of the day:

“The NW ELESIG was an example of a network of practitioners striving to ‘do things better’ by capitalising on and contributing to knowledge of how students can take forward their own learning. Theories and concepts generated in other arenas can inform what we do in higher education, though they have to be tested and (sometimes) reimagined. The key message I hope people take from my presentation on ‘Seeing the Invisible’  is the importance of theory and methodology underpinning data gathering and interpretation. All too often rigour is missing from technology-enhanced learning, yet there are lots of theories, methods and conceptual tools for us to draw from.

For just the One Small Thing: Take a look at the design guide and recommendations for MOOC design from the PL-MOOC project, which was part of the Gates Foundation MOOC Research Initiative http://www.gcu.ac.uk/academy/pl-mooc/outputs/ ” 

Reflections from Jim Turner, ELESIG NW co-convenor, LJMU, on the day:

“The experience of helping to run and attend these events has galvanised my initial reasons for getting involved. There is an incredible amount of innovative practice which could lead to significant understanding and development within this area. However, the problems of time, evaluation expertise and organisation lead to a sporadic release of an interesting yet disjointed body of evidence in this area. Herding cats comes to mind, of course, and there is a limit to trying to over-manage the process. But if at least a few connections are made at these events, I hope it leads to a growth in all our understandings. Perhaps the most radical step taken in this last event was to have presenters actually ask for help and suggestions on how to evaluate quite complex scenarios. I have attended many of these types of events, and welcome a greater openness and direct calls for help, rather than listening to experts present their answers without seeing any of their ‘working out’.”

For your diary: next ELESIG NW event: 25 Feb 2015 – Keynote from Professor Martin Oliver, hosted by Liverpool John Moores University & Jim Turner, co-convenor of ELESIG NW.

If learner experience research is an area you are interested in, do join the ELESIG ning site and come along to one of our events. You can also follow @ELESIG on Twitter!

Tünde Varga-Atkins, co-convenor of ELESIG NW, University of Liverpool

Links & References 

ELESIG NW Mendeley group – we are adding useful references here, please do join the group and contribute to the resources too.

ELESIG – we have a ning site with resources and details of funding, do join and have a browse.

If you ever need to find a research participant: check out ‘Call for Participants.com’

If you are a supervisor, a postgraduate researcher or into research in one way or another, then this website may be of interest. It helps you find a participant for your research, whether it is a survey, interview or something else. The website is entitled ‘Call for Participants’. It’s been developed by students at Nottingham University and was supported by the JISC Summer of Innovation programme.
Call for Participants
The live website will soon be superseded by an improved version which can be customised for institutions or departments. So, for instance, if you are a Psychology department at University X, you will be able to design your own landing page listing all your current research projects’ calls for participants. The site may also be useful if you just want a quick view of the current research going on in your field.
The departmental and institutional customisation is a new offering and is currently being piloted. Matt Terrell, who presented this project at the JISC learning and teaching experts forum, asked whether any institutions are interested in piloting these new features. It is currently free, so why not try it out?
Tünde (Varga-Atkins)
If you are interested in hearing about university related developments in technology enhanced learning, please subscribe to our University of Liverpool eLearning Unit blog (bottom right).

An invite for the North West: Researching the learner experience – Inaugural Symposium

ELESIG North West

Researching the learner experience

Inaugural Symposium organised by ELESIG North West

Thursday 15 May 2014, 12.00 – 16.30

Are you interested in sharing knowledge and practice around
researching the learner experience and technology?

ELESIG North West is a regional group aiming to
promote networking and the sharing of good practice
in learner experience research within a collegial setting.
Come and find out more at our inaugural event.

Speakers include Dr Paul Ashwin, University of Lancaster.
Paul will lead an interactive session on researching the learner experience
to inform teaching and institutional practices and add to the existing literature on this topic. During the session, you will have a chance to
develop your own idea and turn it into an action plan for research.

Venue: G-Flex Room, Central Teaching Laboratories (building number 802 on campus map), University of Liverpool, Liverpool, L69 7ZJ

Register: This is a free event open to staff at North West England HEIs/FE colleges. You can register at http://www.eventbrite.com/e/elesig-north-west-researching-the-learner-experience-tickets-10988965291 (For those outside the North West, we will post details of the event afterwards.)

Twitter: #elesig

Full programme details: ELESIG North West Symposium flyer.

Contact: Tunde Varga-Atkins, tva@liv.ac.uk

An Eye on the Future

The eLearning Unit have been working closely with the Directorate of Orthoptics and Vision Science within the School of Health Sciences over the last few months.

The overall aim of the collaboration is to enhance the student experience within the department with the effective use of technology. We hope this will have an impact on attracting new students to the Orthoptics course and produce a more stimulating and innovative teaching experience for existing undergraduates.

During an Orthoptics post-application visit day event in February, pen drives were distributed which contained a variety of files and resources highlighting the enhanced learning experience offered at Liverpool. Prospective students will be able to view a video advert, an eye test flash animation, a recent range of photographs taken in the Orthoptics lab (some of these images are displayed in this post) and a presentation which contains interviews with current students and alumni from the course.

The interactive animations currently being developed will allow students to practise and simulate the range of tests performed to detect a wide variety of eye defects and conditions. Some of the conditions can be quite rare, so a student may never have an opportunity to experience testing for them in real life during placements. The animations will assist students in refining their core skills and becoming familiar with rarer conditions.

The animations have been designed in consultation with Dr Anna O’Connor, who approached the eLearning Unit for support after seeing a range of oncological surgery animations produced for a postgraduate module. A test version of the Orthoptics animation is available here. The resource is still under development, but this version should give an indication of the range of interaction and functionality aimed for.

Future plans include potential extra funding to support the production of more advanced animations, 3D eye modelling and an NHS bid to fund the purchase of a suite of tablets, which will help to enhance undergraduate student placements.

The resources produced could become commercially viable, as the planned functionality would be unique in the HE sector. At the moment there are only a few resources publicly available, such as this website, which simulates eye motion and demonstrates the effects of disabling one or more of the eye’s muscles, or one or more of the cranial nerves that control eye motion. However, resources like this only highlight a few elements of eye movement disorders and do not completely reflect a true clinical picture. Any online resources produced at Liverpool would be a welcome addition to the material currently available to support academic staff and undergraduate students.

If you are interested in further details about these developments or if you would like to share an idea, request support or ask a related question please get in touch with the eLearning Unit at elearning@liv.ac.uk.

[Photos: ophthalmoscope, digital letter chart, moving lights, eye close-ups]

(Photographs by Phil Walker)

Here are some further images captured by the eLearning Unit, in the Orthoptics lab on the Liverpool campus, which will be used as the background for future interactive animations.

Paul Duvall

Digital pens and multimodality, revisited

At the 6th International Conference on Multimodality I gave a paper on behalf of Muriah Umoquit and myself on using digital pens in a research context for drawing.

Using digital pens – link to prezi

The paper summarised our experiences with using digital pens. Three conference highlights related to educational technology were:

  • Cheryl Ball talked about a multimodal journal, Kairos, which, breaking with the tradition of publishing articles as static writing and images, uses multimodal forms such as hypertext and multimedia: videos, links and web pages. The premise is that each article chooses the ‘right’ modes, those which can best communicate the given argument, rather than the journal’s format restricting the mode of the argument. They work with authors to use various digital tools to produce their multimodal texts. See, for instance, one of the articles organised as a website using the metaphor of Shakespeare’s rose, relating to the theme of definitions.
  • Marthe Burgess from Norway talked about a school project in which students were asked to construct a video narrative (as opposed to a traditional essay). In this case students were producing ‘multimodal’ texts, a bit like when our students in Engineering or in Music are asked to produce wikis (websites), in which the form (site design) of the site is an integral element in the way the content is constructed and understood.
  • One of the articles I read in preparation for the presentation was Luff et al. (2007), “Augmented Paper: Developing Relationships between Digital Content and Paper“, which examines the ‘affordances of paper’ as a form of technology, arguing how versatile it is and why we still don’t have the paperless office. One of its features is that it is very mobile. Luff and colleagues’ project looked, probably before the appearance of QR codes, at how paper can be augmented to link to digital media. I just love the argument, as the digital pen operates the same way: it combines the graphical with the digital in a symbiotic way. (You get a drawing or written note which can be tapped by the pen to evoke a digital audio file. Or you can digitise the written note or drawing in the form of a pencast, which can be played on the computer.)

Tünde

Digital pencasts, or a seminar report on multimodal transcription

Using diagrams in an interview, Varga-Atkins & Mark O’Brien, 2009

I’m interested in using pencasts (drawings or text annotated by synchronous audio comments) for exploring perceptions of learning (and e-learning) experiences, and recently attended a seminar on multimodal transcription given by MODE members Diane Mavers and Jeff Bezemer.

My interest in drawing and diagrams as a research tool came from my involvement in a previous research project, in which we asked participants to draw diagrams or drawings to talk about their experiences of professional development in schools. Mark O’Brien and I wrote up our experiences of using diagrams and drawings in a journal article (IJRME) in 2009. Sparked by this article, Canadian colleagues Muriah Umoquit and Peggy Tso contacted us, and since then we have collaborated fruitfully on the use of diagrams as an interview technique (one output of this is soon to appear in a special issue of IJRME).

Drawings or diagrams, at least in the way I have used them in interviews, allow participants to recount their experiences (whether of e-learning or otherwise) in a different ‘mode’ – the visual (more precisely, graphic) – as opposed to the usual verbal (speech) mode of the interview. This can help participants focus on the topic in question and, unusually, see their experiences in a new light. In any case, the discussion of the produced visual artefact is part of the interview; the interesting bit is not necessarily the drawing (or diagram) itself, but what is said about it! One question for the researcher is therefore what becomes of your data: the drawing/diagram, the transcript of the interview, or both? Your transcript and the drawing are two separate ‘objects’, so how do you manage them both?

The exciting development of digital pens is that this separation ceases to exist: a pencast is the synchronous combination of the pen stroke (of the drawing) and the audio commentary. A bit like this: a video of the drawing (you will need the latest version of Adobe Acrobat, and your headphones, to listen). Surely, I thought, this is a great opportunity for this kind of research? So, having set out to research the use of pencasts, interesting challenges come to the fore. For instance, how do you transcribe data that is multimodal?

This is what Diane and Jeff helped us to discuss so well at their seminar in Sheffield. First, we discussed what multimodal means. Modes can be either ‘embodied’ (speech, gestures, gaze etc.) or appear on a surface (such as drawings, photos or text). Anything can be a mode, even the way a room is arranged! In the case of pencasts, two modes combine: drawing (visual) + audio commentary (speech). We then looked at the issue of transcribing multimodal data. For instance, how do you transcribe a video of a surgical procedure taking place in an operating theatre? What is your unit of analysis, and how do you go about transcribing your data? These issues were fleshed out in an engaging way by Diane and Jeff.

There are lots of interesting research projects happening at MODE, such as one on digital technologies in the operating theatre, or researching embodiment with digital technologies. I was sitting next to Tatjana from Oxford, whose research compared the language (and other modes) lecturers use in podcasts with that of a real live lecturing session, with a view to identifying the aspects that make a live lecture engaging for students, and which of these aspects lecturers could use when making a podcast.

As for pencasts, there are numerous educational applications. Digital pens are great tools for note-taking for both staff and students; they could also be used for creating various teaching resources, especially where it is useful to draw, or to write numbers and equations while explaining them as you write. See an example pencast from Chemistry, for instance.

For more information on multimodal research, check out the 6th International Conference on Multimodality, Aug 2012 at the IOE, London. I hope to find out more and see how we can enrich our research into e-learning using (or researching) multimodal research methods, as well as present on using pencasts as a research method. If anyone is interested in a more detailed chat about either of these research methods or the educational applications of digital pens, feel free to contact me at tva at liv dot ac dot uk.

Tünde (Varga-Atkins)