This second post stemming from our VITAL Baseline evaluation work this year tells some of the story of looking outwards to the sector, seeking to understand best practice in policy around minimum standards for VLEs.
As part of our original work developing the VITAL Baseline, the working group thought it would be useful to see what, if anything, was happening with VLE minimum standards across the HE sector at that point in time. With a pressing timescale and a clear directive that a) we had to have a standard and b) what it should consist of this time round, the most we could do was look at who had some kind of minimum standard or expectation for their VLE and what was in it. This would give us some simple benchmarks and reassure ourselves that we hadn't missed anything we could include while we had this opportunity to shape our own set of standards at the development stage. As a low-cost way (for us) of gathering some rough-and-ready data, a call went out to the ALT mailing list asking people to contribute to a Google Sheet, saying whether their institution had any kind of minimum standard or expectation, or was looking to develop one, and what was in it. As a group we didn't engage with this work at any further level (although one of us, Peter Reed, wrote up and presented some of his own analysis of the original data at a few conferences and user groups, an entry point to which you can find here). I had always hoped to return to this work as part of any evaluation, with some questions I had about the sector-wide landscape, and hoped that what I learnt from the exercise would also inform any recommendations we would make for the future development of the standard. The main areas I now want to examine are:
- We should re-examine what we can understand from the original data gathered. I was anxious about how well crowdsourced data stands up and about the implications of this approach for an evaluation, so I would also like to review this methodology in some detail.
- I wondered what results we would get if we defined more closely the group(s) of institutions against which we wanted to benchmark and approached these actively. I felt at the time that the original data might encourage a view that minimum standards policies were a standard feature of the topography of Technology Enhanced Learning strategies and policies in HEIs, and I wanted to test this.
- As well as asking who has a standard and what is in it, the more interesting questions would include: why does one exist for that institution? How is its rationale contextualised (e.g. is it linked to a learning and teaching strategy)? What kinds of processes were involved in its development? Is it compulsory, and has any evaluation taken place?
- For those institutions that don't have one, what, if anything, is the story here?
- Beyond approaching institutions, we would look at any literature, conference proceedings, blog posts and mailing list discussions around minimum standards as part of the evaluation work more generally, but specifically for information about why institutions do or do not have a VLE baseline.
In picking this work up again for our Baseline evaluation project, I thought an obvious, although simplistic, initial sampling strategy would be to survey the rest of the Russell Group, which would make for a manageable, quick desk-based web search. I looked at the websites of the Russell Group institutions for any indication of an institutional VLE standard in place and for what further detail was publicly available. A first-glance review found that, in December 2015:
- Six definitely have an institutional standard with compulsory/required/expected elements.
- Three offer institutionally recommended good practice guidance.
- Fourteen seemed to have no institutional standard requirement or expectation.
- For one institution I couldn’t find any detail one way or another.
I've collated a list with details of what each standard or recommendation above consists of, but one immediate thought is that there might well be standards at faculty or school/departmental level where there is no institutional requirement. It would be useful to uncover this information, as my own feeling is that handing standards over to more local levels is probably the way to move in the future, which we'll discuss in a later post on any recommendations that emerge from our evaluation work.
What we would want to do next is contact institutions directly to try to confirm whether what we have found accurately reflects their situation, to get a little more detail on the whys and hows listed above, and to find out whether there might be more local standards that aren't visible on central websites. (We'll also first need to look into ethical approval around this if we are intending to publish our results.) I'll make the list available when it is as complete as possible.
Another question this simple snapshot of Russell Group institutions raises for me is whether Liverpool is leading in this area, as we appear to be in the minority here, or whether we were late arrivals to the debate around minimum standards and have taken a different direction to the majority. Obviously the snapshot view is unconfirmed, and looking at a bigger group of institutions would give better data, but as I began to discuss in the first Baseline evaluation post, we want to look at the evidence that informed the decision to make this one of the first initiatives in our institutional TEL strategy, to assess its strengths and its limits. Does this snapshot tell a contrasting story to that of the other sources of evidence used to develop the TEL strategy and the Baseline originally? Are there any similarities between Liverpool and the other Russell Group institutions that have a standard in place? I'd be interested to discover the extent to which minimum standards policies feature in Technology Enhanced Learning strategies and policies in HEIs when measured in a larger sample group. Thinking further about the sample group, I wondered whether we should follow this 2014 UCISA report and look at all pre-92 institutions for a greater mix of institution types and, with time, post-92 institutions too. More work, but valuable for developing our understanding.
Just as the original Baseline development represented an opportunity to look at some of the internal evidence of the ways the VLE is being used within the institution, this outward-looking work also offered the opportunity to richly inform our thinking and strategic approach by looking to the sector and to the experience and evidence there. We want to assess the extent to which these sources of evidence were exploited in the original development, and how they could be in the future, as part of our recommendations for any future institutional TEL initiatives.
As an almost tangential sign-off: when thinking about what I'd write for this post, I realised that I have been using the terms 'benchmark' and 'benchmarking' unthinkingly. Then I panicked: what are they, and why would you want to do this? The panic didn't last long, as I found this useful-looking JISC resource, What is benchmarking? If anyone knows more about this guide we'd appreciate your thoughts and advice, but I think I am going to use it for writing up this sector scene-setting aspect of our evaluation work.