Lorna Campbell » educational content – Cetis Blog
http://blogs.cetis.org.uk/lmc

Innovation, sustainability and community – reflections on #cetis13
Fri, 15 Mar 2013 11:39:48 +0000
http://blogs.cetis.org.uk/lmc/2013/03/15/innovation-sustainability-and-community-reflections-on-cetis13/

The theme of this year's CETIS conference was Open for Education: Technology Innovation in Universities and Colleges. As usual we had a wide and diverse range of sessions, but if there was one theme that underpinned them all it was: how can we sustain innovation in the face of the challenges currently facing the sector?

Sustainability was the explicit theme of the Open Practice and OER Sustainability session Phil and I ran. Three years of HEFCE UKOER funding came to an end last autumn and, while there’s no denying that the programmes produced a significant quantity of open educational resources, did they also succeed in changing practice and embedding open education innovation across the English HE sector? Judging by the number of speakers and participants at the session I think it’s fair to say that the answer is a resounding “Yes”. At least in the short term. Patrick McAndrew, who has been involved in organising this year’s OER13 conference, pointed out that while they expected a drop in numbers this year, as UKOER funding has ended and the event is not running in conjunction with OCWC, in actual fact numbers have risen significantly. Practice has changed and many institutions really are more aware of the potential and benefits of open educational resources and open educational practices. Though as several participants pointed out, MOOCs have rather eclipsed OERs over the last 12 months and the relationship between the two is ambiguous to say the least. As Amber Thomas put it: “MOOCs stole OER’s girlfriend”.

[Image: seesaw]

David Kernohan used the memorable image of a teddy bear lecturer playing happily on a seesaw with his friends, with lots of open educational resources and innovative technologies, until all the money ran out and all that was left was the teddy bear and the resources. However I can’t help thinking that the real threat to OER sustainability is that the next thing to disappear might be the teddy bear, and after all it’s the teddy bears, or rather the people, that sustain communities of innovation and practice. With this in mind, there was some discussion of the importance of subject communities in sustaining innovative educational practice, and Suzanne Hardy of Newcastle reminded us that Humbox, an excellent example of an innovative and sustainable development presented by Yvonne Howard of Southampton, was originally a collaboration between four HEA subject centres. The legacy of the subject centres is certainly still visible in the sector; however, as many talented people have had to move into other roles, and those that have managed to hang on are increasingly under threat, how much longer will the community of open educational innovation be able to sustain itself?

The latter half of Scott Wilson’s session on Open Innovation and Open Development also focused on sustainability and again the discussion circled round to how can we sustain the community of developers that drive innovation forward? It’s more years than I can recall since their demise, but the CETIS SIGs were put forward yet again as a good model for sustaining innovative communities of developers and practitioners. I also suggested that it was still possible to see the legacy of the SHEFC Use of the MANs Initiative in the sector, as a surprising number of people still working in educational technology innovation first cut their teeth on UMI projects.

There was some discussion of the emergence of “boundary spanning people and blended professionals” but also a fear that institutions are increasingly falling back on very traditional and strictly delineated professional roles. At a time when innovation is increasingly important, many institutions are shedding the very people who have been responsible for driving innovation forward in the sector. At the end of the session, Scott asked what is the one thing that organisations such as Cetis and OSSwatch should do over the next six months to help sustain open innovation and open development? The answer that came back was Survive! Just survive, stay alive, keep the innovation going, don’t lose the people. The fact that Scott was wearing a zombie t-shirt while facilitating the session was verging on the poignant :}

Meanwhile, over in Martin Hawksey and David Sherlock’s Analytics and Institutional Capabilities session, Ranjit Sidhu of SiD was laying into all manner of institutional nonsense, including the sector-wide panic that followed clearing, the brutal reality of the competitive education market, the millions spent on Google advertising, the big data projects that are little more than a big waste of money and, last but not least, the KIS. Ranjit showed the following slide, which drew a collective murmur of horror, though not surprise, from the audience.

[Image: Unistats]

If you look carefully you’ll notice that the number of daily requests to Unistats for data is… 9. Yep. 9. It hasn’t even hit double figures. One colleague who was responsible for KIS returns recently estimated that the cost to their institution was in the region of a hundred thousand pounds. Multiply that across the sector… Does anyone know what the total cost of the KIS has been? And the return on investment? As one participant commented in response to Ranjit’s presentation, KIS is not a tool for students, it’s a tool to beat VCs over the head with. I’ll leave you to draw your own conclusions…

I think it’s fair to say that a lot of us went to CETIS13 not knowing quite what to expect and even fewer of us know what the future holds. Despite these uncertainties the conference had a noticeably positive vibe, which more than a few people remarked on over the course of the event. We’re all living in “interesting times” but the brutal reality of the crisis facing HE has done little to dent people’s belief that sustaining open innovation, and the community of open innovators, is a fundamental necessity if the sector is to face these challenges. I certainly felt there was a real spirit of determination at CETIS13, here’s hoping it will see us through the “interesting times”.

Taking up the challenge…
Thu, 28 Feb 2013 19:11:54 +0000
http://blogs.cetis.org.uk/lmc/2013/02/28/taking-up-the-challenge/

Yesterday, David Kernohan challenged the ukoer community on the oer-discuss mailing list to write a blog post in response to a spectacularly wrongheaded Educause post titled:

Ten Years Later: Why Open Educational Resources Have Not Noticeably Affected Higher Education, and Why We Should Care

I had read the post the previous day and had already decided not to respond because tbh I just wouldn’t know where to begin.

However since David is offering “a large drink of the author’s choice” as the prize for the best response, I have been persuaded to take up the challenge. Which just goes to show there’s no better way to motivate folk than by offering drink. (Mine’s a G&T David, or a red wine, possibly both, though not in the same glass.)

I am still at a loss to offer a serious critique of this article so in the best spirit of OER, I am going to recycle what everyone else has already said. Reuse FTW!

The article can basically be summarised as follows:

It’s 10 years since MIT launched OpenCourseWare. Since then OERs have FAILED because they have not transformed and disrupted higher education. List of reasons for their failure: discoverability, quality control, “The Last Mile”, acquisition. The solution to these problems is to build a “global enterprise-level system” aka a “supersized CMS”. And look, here’s one I built earlier! It’s called LON-CAPA.

PS. “The entity that provides the marketplace, the service, and the support and keeps the whole enterprise moving forward is probably best implemented as a traditional company.”

I should point out that I am not familiar with LON-CAPA. I’m sure it’s a very good system as far as it goes, but I don’t think a “global enterprise-level system” is the answer to anything.

David Kernohan himself was quick off the mark when the article first started circulating, after tweeting a couple of its finer points:

“OERs have not noticeably disrupted the traditional business model of higher education”

“It is naïve to believe that OERs can be free for everybody involved.”

He concluded:

So the basic message of that paper is “OER IS BROKEN” and “NEED MOAR USER DATA”. Lovely.

Because, clearly, if we can’t measure the impact of something it is valueless.

Which is indeed a good point. Actually I think there are many ways you can measure the impact of OER but I’m not at all convinced that “disrupting traditional business models” is the only valid measure of success. After all, OER is just content + open licence at the end of the day. And we can’t expect content alone to change the world, can we?

This is the point that Pat Lockley was getting at when he tweeted:

My Blog will be coming soon “Why OER haven’t affected the growth of grass”

Facetious perhaps, but a very pertinent point. There has been so much hyperbole surrounding OER from certain quarters of the media that it’s all too easy to say “Ha! It’s all just a waste of money. OER will never change the world.” Well no, maybe not, but most right-minded people never claimed it would. What we do have, though, is access to a lot more freely available (both gratis and libre), clearly licensed educational resources out there on the open web. Surely that can’t be a bad thing, can it? If nothing else, OER has increased educators’ awareness and understanding of the importance of clearly licensing the content they create and use, and that is definitely a good thing.

Pat also commented:

I’m just tired of OER being about “research into OER”. The cart is so far before the horse.

Which is another very valid point. I probably shouldn’t repeat Pat’s later tweet when he reached the end of the article and discovered that the author was pimping his own system. It involved axes and lumberjacking. Nuff said.

Jim Groom was similarly concise in his criticism:

“For content to be truly reusable and remixable, it needs to be context-free.” Problematic.

What’s the problem with OER ten years on? Metadata. Hmmm, maybe it is actually imagination, or lack thereof. #killoerdead

While I don’t always agree with Mr Groom, I certainly do agree that such a partial analysis lacks imagination.

As is so often the case, it was left to Amber Thomas to see past the superficial bad and wrongness of the article to get at the issues underneath.

“The right questions, patchy evidence base, wrong solutions. And I still think oer is a descriptor not a distinct content type.”

And as is also often the case, I agree with Amber wholeheartedly. There are actually many valid points lurking within this article but, honestly, it’s like the last ten years never happened. For example, discussing discoverability, which I agree can be problematic, the author suggests:

The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.”

Yes! I agree!

Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed.

Erm…impossible? Really? Experimental maybe, difficult even, but impossible? No. Why no mention here of activity data, paradata, analytics? Like I said, it’s like the last ten years never happened.
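Incidentally, the kind of usage-based “paradata” the article dismisses as impossible is straightforward to prototype once usage sequences are logged alongside resources. A minimal sketch of the “people who used this also used that” idea (all names and data below are hypothetical, not from any real repository):

```python
from collections import Counter, defaultdict

def build_cooccurrence(course_sequences):
    """Count how often each pair of resources is used together in the same course."""
    pairs = defaultdict(Counter)
    for seq in course_sequences:
        for a in seq:
            for b in seq:
                if a != b:
                    pairs[a][b] += 1
    return pairs

def recommend(pairs, resource, n=3):
    """'Educators who used this also used...' — the most frequently co-used resources."""
    return [r for r, _ in pairs[resource].most_common(n)]

# Hypothetical usage logs: each list is the resource sequence of one course.
logs = [
    ["intro-video", "quiz-1", "reading-a"],
    ["intro-video", "reading-a", "quiz-2"],
    ["intro-video", "reading-a"],
]
pairs = build_cooccurrence(logs)
print(recommend(pairs, "intro-video"))  # "reading-a" ranks first: it co-occurs in all three courses
```

The point is simply that the metadata accumulates as a side effect of normal use, with no extra work for the educator, which is exactly what activity data and paradata initiatives were already exploring.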

Anyway I had better stop there before I say something unprofessional. One last comment though, Martin Hawksey pointed out this morning that there is not a single comment on the Educause website about this article, and asked:

Censorship? (That’s the danger of CMSs configured this way, someone else controls the information.)

I can’t comment on whether there has been censorship, but there has certainly been control. (Is there a difference? Discuss.) In order to comment on the Educause site you have to register, which I did yesterday afternoon and got a response informing me that it would take “several business hours” to approve my registration. I finally received the approval notification at nine o’clock at night, by which point I had better things to do with my time than comment on “global enterprise-level systems” and “supersized CMS”.

So there you have it David. Do I get that G&T?

ETA: The author of this article, Gerd Kortemeyer, may just have pipped us all to the G&T with a measured and considered defence of his post over at oer-discuss. While his e-mail provides some much needed context to the original article, particularly in terms of clarifying the specific type of educational institutions and usage scenarios he is referring to, many of the criticisms remain. It’s well worth reading Gerd’s response to the challenge here. Andy Lane has also written a very thoughtful and detailed critique of the article here, which I can highly recommend.

#chatopen Open Access and Open Education
Tue, 29 Jan 2013 11:24:58 +0000
http://blogs.cetis.org.uk/lmc/2013/01/29/chatopen-open-access-and-open-education/

Do open access and open education need to work together more? That was the question posed by Pat Lockley and discussed on Twitter on Friday evening by a group of open education folks using the hashtag #chatopen.

Open access in this instance was taken to refer to open access repositories of peer-reviewed papers and other scholarly works and associated open access policies and agendas. There was general agreement that open access and open education proponents should work together but also recognition that it was important to be aware of different agendas, workflows, technical requirements, etc. Suzanne Hardy of the University of Newcastle added that it was equally important to take heed of open research data too.

Although the group acknowledged that open access still faced considerable challenges, there was a general consensus that it was more mature, both in terms of longevity and uptake, and that it was embedded more widely in institutions. Amongst other factors, the relative success of open access was attributed to the fact that most universities already had policies and repositories for publishing and managing scholarly outputs, while few had comparable strategies for managing teaching and learning materials. Phil Barker added that research outputs were always intended for publication whereas teaching and learning materials were generally kept within the institution. Nick Sheppard of Leeds Met also pointed out that most institutional repositories could not handle teaching and learning resources and research data without significant modification. This led to the suggestion that while institutional repositories fit the culture of scholarly works and open access well, research data and OERs are much harder to manage and share.

In terms of uptake and maturity, although there was general agreement that open access was some way ahead of open education, it appears that open data is catching up fast due to institutional drivers such as the REF, high level policy support and initiatives such as opendata.gov. Funding council mandates were also recognised as being an important driver in this regard.

Different interpretations of the term “open” were discussed, as the open in open access and open education were felt to be quite different. The distinction between gratis and libre was felt to be useful, though it is important to recognise more subtle variations of open.

There was some consensus that teaching and learning resources tend to be regarded as being of lesser importance to institutions than scholarly works and research data, and that this was reflected in policy developments, staff appointments and promotion criteria. Furthermore, until impact measures, funding and business models change this is likely to remain the case. Open access and open education both reflect institutional culture, but they are separate processes and this separation reflects university policies, priorities and funding streams.

The group also felt that different communities had emerged around open access and open education, with open access mainly being the concern of librarians and open education the domain of eLearning staff. Phil refined this distinction by suggesting that open access is driven by researchers but managed by librarians. However Nick Sheppard of Leeds Met suggested that the zeitgeist was changing and that open access, open education and open research data are starting to converge.

In response to the question “what could open education learn from open access?” one lesson may be that top down policy can help. Although open education processes are more complex and diverse than open access, the success of open access could aid open education.

Pat wrapped up the session by asking where next for open education? What do we do? Lis Parcell of RSC Wales cautioned against open education becoming the domain of “experts” and emphasised the importance of enabling new audiences to join the open debate, by using plain language where possible, meeting people where they are and providing routes to help them get a step on the ladder. There was also some appetite for open hackdays and codebashes that would bring teachers, researchers and developers together to build OA/OER mashups. Nick put forward the following use case:

“I want to read a research paper, text mined & processed, AI takes me to relevant OER to consolidate learning!”

Finally everyone agreed that it’s important to keep talking, to keep open education on the agenda and try to transform open practice into open policy.

So there you have it! A brief summary of a wide-ranging debate conducted using only 140 characters! Who says you can’t have a proper conversation on Twitter?! If you’re interested in reading the full transcript of the discussion, Martin Hawksey has helpfully set up a TAGS Viewer archive of the #chatopen here.

If you want to follow up any of the points or opinions raised here then feel free to comment below or send a mail to oer-discuss@jiscmail.ac.uk

Many thanks once again to Pat Lockley for setting up the discussion and to all those who participated.

Back to the Future – revisiting the CETIS codebashes
Wed, 05 Dec 2012 15:08:36 +0000
http://blogs.cetis.org.uk/lmc/2012/12/05/codebashes/

As a result of a request from the Cabinet Office to contribute to a paper on the use of hackdays during the procurement process, CETIS have been revisiting the “Codebash” events that we ran between 2002 and 2007. The codebashes were a series of developer events that focused on testing the practical interoperability of implementations of a wide range of content specifications current at the time, including IMS Content Packaging, Question and Test Interoperability, Simple Sequencing (I’d forgotten that even existed!), Learning Design and Learning Resource Meta-data, IEEE LOM, Dublin Core Metadata and ADL SCORM. The term “codebash” was coined to distinguish the CETIS events from the ADL Plugfests, which tested the interoperability and conformance of SCORM implementations. Over a five year period CETIS ran four content codebashes that attracted participants from 45 companies and 8 countries. In addition to the content codebashes, CETIS also ran additional events focused on individual specifications such as IMS QTI, or the outputs of specific JISC programmes, such as the Designbashes and Widgetbash facilitated by Sheila MacNeill.

As there was considerable interest in the codebashes and we were frequently asked for guidance on running events of this kind, I wrote and circulated a Codebash Facilitation document. It’s years since I’ve revisited this document, but I looked it out for Scott Wilson a couple of weeks ago as potential input for the Cabinet Office paper he was in the process of drafting together with a group of independent consultants. The resulting paper, Hackdays – Levelling the Playing Field, can be read and downloaded here.

The CETIS codebashes have been rather eclipsed by hackdays and connectathons in recent years, however it appears that these very practical, focused events still have something to offer the community so I thought it might be worth summarising the Codebash Facilitation document here.

Codebash Aims and Objectives

The primary aim of CETIS codebashes was to test the functional interoperability of systems and applications that implemented open learning technology interoperability standards, specifications and application profiles. In reality that meant bringing together the developers of systems and applications to test whether it was possible to exchange content and data between their products.

A secondary objective of the codebashes was to identify problems, inconsistencies and ambiguities in published standards and specifications. These were then fed back to the appropriate maintenance body in order that they could be rectified in subsequent releases of the standard or specification. In this way codebashes offered developers a channel through which they could contribute to the specification development process.

A tertiary aim of these events was to identify and share common practice in the implementation of standards and specifications and to foster communities of practice where developers could discuss how and why they had taken specific implementation decisions. A subsidiary benefit of the codebashes was that they acted as useful networking events for technical developers from a wide range of backgrounds.

The CETIS codebashes were promoted as closed technical interoperability testing events, though every effort was made to accommodate all developers who wished to participate. The events were aimed specifically at technical developers and we tried to discourage companies from sending marketing or sales representatives, though I should add that we were not always successful! Managers who played a strategic role in overseeing the development and implementation of systems and specifications were encouraged to participate, however.

Capturing the Evidence

Capturing evidence of interoperability during early codebashes proved to be extremely difficult, so Wilbert Kraan developed a dedicated website built on a Zope application server to facilitate the recording process. Participants were able to register the tools and applications that they were testing and to upload content or data generated by these applications. Other participants could then take this content and test it in their own applications, allowing “daisy chains” of interoperability to be recorded. In addition, developers had the option of making their contributions openly available to the general public or visible only to other codebash participants. All participants were encouraged to register their applications prior to the event and to identify specific bugs and issues that they hoped to address. Developers who could not attend in person were able to participate remotely via the codebash website.
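The “daisy chain” record can be modelled very simply: each successful exchange is an edge from the exporting application to the importing one, and a chain is a path through the resulting graph. A minimal sketch of that idea, with hypothetical tool names (this is an illustration of the concept, not the actual Zope implementation):

```python
from collections import defaultdict

class InteropLog:
    """Record which application successfully imported content exported by another."""
    def __init__(self):
        self.edges = defaultdict(set)  # exporter -> set of successful importers

    def record(self, exporter, importer):
        self.edges[exporter].add(importer)

    def chains(self, start, path=None):
        """Enumerate daisy chains of successful exchanges starting at `start`."""
        path = path or [start]
        result = [path]
        for nxt in self.edges[start] - set(path):  # avoid revisiting tools
            result += self.chains(nxt, path + [nxt])
        return result

log = InteropLog()
log.record("ToolA", "ToolB")  # ToolB imported ToolA's content package
log.record("ToolB", "ToolC")  # ToolC then imported ToolB's re-export
print(max(log.chains("ToolA"), key=len))  # longest chain: ToolA -> ToolB -> ToolC
```

Each recorded edge is one demonstrated exchange, so the longest chains give a quick picture of how far a piece of content travelled intact across different implementations.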

IPR, Copyright and Dissemination

The IPR and copyright of all resources produced during the CETIS codebashes remained with the original authors, and developers were neither required nor expected to expose the source code of their tools and applications to other participants.

Although CETIS disseminated the outputs of all the codebashes, and identified all those that had taken part, the specific performance of individual participants was never revealed. Bug reports and technical issues were fed back to relevant standards and specifications bodies and a general overview of the levels of interoperability achieved was disseminated to the developer community. All participants were free to publish their own reports on the codebashes; however, they were strongly discouraged from publicising the performance of other vendors and potential competitors. At the time, we did not require participants to sign non-disclosure agreements, and relied entirely on developers’ sense of fair play not to reveal their competitors’ performance. Thankfully no problems arose in this regard, although one or two of the bigger commercial VLE developers were very protective of their code.

Conformance and Interoperability

It’s important to note that the aim of the CETIS codebashes was to facilitate increased interoperability across the developer community, rather than to evaluate implementations or test conformance. Conformance testing can be difficult and costly to facilitate and govern and does not necessarily guarantee interoperability, particularly if applications implement different profiles of a specification or standard. Events that enable developers to establish and demonstrate practical interoperability are arguably of considerably greater value to the community.

Although CETIS codebashes had a very technical focus they were facilitated as social events and this social interaction proved to be a crucial component in encouraging participants to work closely together to achieve interoperability.

Legacy

These days the value of technical developer events in the domain of education is well established, and a wide range of specialist events have emerged as a result. Some are general in focus, such as the hugely successful DevCSI hackdays; others are more specific, such as the CETIS Widgetbash, the CETIS / DevCSI OER Hackday and the EDINA Will’s World Hack running this week, which aims to build a Shakespeare Registry of metadata about digital resources relating to Shakespeare, covering anything from his work and life to modern performance, interpretation or geographical and historical contextual information. At the time, however, aside from the ADL Plugfests, the CETIS codebashes were unique in offering technical developers an informal forum to test the interoperability of their tools and applications, and I think it’s fair to say that they had a positive impact not just on developers and vendors but also on the specification development process and the education technology community more widely.

Links

Facilitating CETIS CodeBashes paper
Codebash 1-3 Reports, 2002 – 2005
Codebash 4, 2007
Codebash 4 blog post, 2007
Designbash, 2009
Designbash, 2010
Designbash, 2011
Widgetbash, 2011
OER Hackday, 2011
QTI Bash, 2012
Dev8eD Hackday, 2012

NPG adopts Creative Commons licence
Wed, 22 Aug 2012 15:28:35 +0000
http://blogs.cetis.org.uk/lmc/2012/08/22/npg-adopts-creative-commons-licence/

Last month the National Portrait Gallery changed their image licensing policy to allow free downloads for non-commercial and academic purposes.

Writing in Museums Journal today Rebecca Atkinson explained that:

The change means that more than 53,000 low-resolution images are now available free of charge to non-commercial users through a standard Creative Commons licence.

Atkinson quotes Tom Morgan, head of rights and reproductions at the NPG, saying:

“Obviously this is quite complex – on one hand, if people are making money from a museum’s content then it’s right the museum should share that profit but we also want to support academic and education activity. So we took the opportunity to look at the way in which we could deliver this service and automate it.”

A new automated interface on all the NPG’s collection item pages now leads users to a “Use this image” page with links to request three different licences. Each licence is accompanied by clear and concise information on how the image can be used:

Professional licence: can be used in books, films, TV, merchandise, commercial and promotional activities, display and exhibition.

Academic licence: can be used in your research paper, classroom or scholarly publication.

Creative Commons licence: can be used in non-commercial, amateur projects (e.g. blogs, local societies and family history).

In order to apply for a Professional or Academic licence users must register to use the NPG’s lightbox and then apply for the appropriate licence. For print works, the academic licence covers images for non-commercial publications with a print run of less than 4,000; images must also be used inside the publication.

To access the lower resolution Creative Commons licensed image, users are not required to register, but they must submit a valid e-mail address before they can download the image in the form of a zip file. The images themselves do not appear to carry any embedded licence information or watermarks, but they are accompanied by the following text file:

Please find, attached, a copy of the image, which I am happy to supply to you with permission to use solely according to your licence, detailed at http://creativecommons.org/licenses/by-nc-nd/3.0/

It is essential that you ensure images are captioned and credited as they are on the Gallery’s own website (search/find each item by NPG number at http://www.npg.org.uk/collections/search/advanced-search.php).

This has been supplied to you free of charge. I would be grateful if you would please consider making a donation at http://www.npg.org.uk/support/donation/general-donation.php in support of our work and the service we provide.

Now I should probably point out that I have a personal interest in this change of policy as I recently contacted the NPG to request permission to use some of their images in an academic publication. I was delighted when they pointed me to the new automated licence interface and confirmed that the images in question could be used free of charge. What really struck me at the time though was what a valuable resource this could prove to be for open education, as the NPG has effectively released 53,000 free and clearly licensed potential open educational resources into the public domain. The CC license chosen by the gallery may be on the restrictive side, but it certainly demonstrates a growing and very welcome commitment to openness from the cultural heritage sector that could be of direct benefit to education.

#cam12 Keynotes, backchannels and undercurrents
Mon, 23 Apr 2012 15:24:57 +0000
http://blogs.cetis.org.uk/lmc/2012/04/23/cam12-keynotes-backchannels-and-undercurrents/

A few thoughts from the OER 12 conference held in Cambridge last week. Sadly I wasn’t able to stay for the whole conference but the first two days left me with plenty of food for thought. This year the event was held in conjunction with the annual OCWC conference and as a result one of the themes to emerge was the convergence, or not, of top down and bottom up approaches to open educational resources and practices.

In the opening keynote Richardus Eko Indrajit, of the ABFI Institute Perbanas, Jakarta, outlined Indonesia’s impressively coordinated top down approach to opening access to education and the adoption of open education practice. One of Indonesia’s more radical policies in this space is the use of Google ranking to measure the academic impact of scholarly works and research outputs.

The second keynote of the event by Sir John Daniel, President of the Commonwealth of Learning, also focused on strategic top down initiatives and in particular UNESCO’s current Fostering Governmental Support for OER Internationally project. Sir John reported that 17 European nations have already responded to UNESCO’s survey about national policies and intentions regarding OER. Given three years of HEFCE funding for open educational resources, and the undoubted success of the JISC / HEA OER Programmes, it was disappointing to see that the UK did not appear on the list of respondents. This prompted some discussion on the backchannel about who would be the appropriate agency to formally respond to this questionnaire on behalf of the UK. Joe Wilson of SQA suggested that ALT might be an appropriate body to lobby for a response and wasted no time in contacting them directly. I’ll be interested to see if they, or anyone else, manage to provoke a response.

In the interests of practising what we preach, kudos goes to David Kernohan of JISC and Simon Thompson of Leeds Metropolitan University for politely challenging Sir John for not having any license or attribution information on the images he used in his presentation.

One hoary old issue that came up several times was whether there is any real evidence that open educational resources are actually being used, reused and re-shared. Despite some delegates providing pretty compelling evidence that open educational resources are indeed being used, others cited concerns over quality, sustainability and even potential loss of revenue as barriers to the release and adoption of open educational resources. When these objections were raised in the “Embed, don’t Bolt-on: promoting OER use in UK universities” panel on Tuesday afternoon I tweeted, slightly cynically:

@LornaMCampbell Quality, control, walled gardens, potential loss of income. #redherrings? #cam12

It appears that the issue was raised again the following day and was challenged by Patrick McAndrew (@openpad) of the Open University.

@dkernohan: quality, sustainabilty, reuse? Red herrings, says @openpad – challenges for all content not just OER #cam12

One very pertinent comment that was widely re-tweeted was that “many people are doing OER but they don’t call it that”. This prompted some discussion about the benefits of supporting and mainstreaming open practice rather than highlighting open practice as being different and distinct. During a very engaging chat over coffee, Emily Puckett Rodgers of the Open Michigan initiative mentioned that she tends not to use the term “OER” with faculty as this suggests something new and different that they need to engage with; instead she prefers to use the more familiar term “learning materials”. Personally I’m inclined to agree with Emily; like all buzzwords, the term “OER” has accumulated an awful lot of baggage over the years, which may be less than helpful going forwards.

Another back channel conversation worth noting was the disappointing failure of the Scottish Government and Funding Council to engage with the open education agenda and open educational resource initiatives at a strategic level. This lack of engagement was highlighted again this week when Martin Hawksey noted that the Scottish Government’s new Professional Standards for Lecturers in Scotland’s Colleges document fails to make any mention of open educational resources or practices.

@mhawksey Shame open education/oer don’t get a mention in new Professional Standards for Lecturers in Scotland’s Colleges #ukoer

While there are certainly a few grass roots open education initiatives across the country, such as Edinburgh University’s eLearning@Ed Conference, Scotland clearly has some way to go when it comes to embedding the concept of openness in education.

From the back channel to the undercurrent….David Kernohan tweeted:

@dkernohan: MOOCs and DIYU – very much an undercurrent of nervousness at #cam12

Sadly I missed the final plenary session where I believe some of these undercurrents started to surface. However Laura Czerniewicz of the University of Cape Town concluded her post-conference blog post, Open Education: part of the broader open scholarship terrain, with:

“I do think that there is beginning to be some fluidity and cross over, (such as the focus on open practices and the interest in the open education landscape at JISC), and this is great. Let’s consciously do more of this.”

An admirable goal for us all to aim for.

And lastly, two final highlights worthy of mention….

Nick Pearce’s Famous Monkeys

@drnickpearce Slides from my talk about developing students and staff as scavengers (hyenas!) http://slidesha.re/HNEiEb #cam12 #famousmonkeys

And Guy Barrett and Jenny Gray’s fabulous poster :)

by Guy Barrett and Jenny Gray

Come to Dev8D and tell JISC what you think! http://blogs.cetis.org.uk/lmc/2012/02/09/come-to-dev8d-and-tell-jisc-what-you-think/ http://blogs.cetis.org.uk/lmc/2012/02/09/come-to-dev8d-and-tell-jisc-what-you-think/#comments Thu, 09 Feb 2012 15:21:04 +0000 http://blogs.cetis.org.uk/lmc/?p=551 Are you going to Dev8D next week? Would you like to give JISC a piece of your mind?

On Wednesday 15th there will be an opportunity to tell JISC what you think the key opportunities and challenges are in supporting the creation, sharing and management of learning materials. Lorna Campbell (JISC CETIS) and Amber Thomas (JISC Programme Manager) will be circulating on the day to gather views from delegates.

We want to know from you:

  • What are the most common requests you get from the people you develop for and support?
  • What are their greatest needs?
  • What software and formats would you relegate to Room 101?
  • What would be the killer app for learning content?

Stop us for a chat anytime throughout the day, or pop along to see us at the Digital Infrastructure Directions for Educational Content drop-in from 2-4 on Wednesday 15th February and help to shape JISC’s priorities for the future.

Alternatively, if you are brimful of thoughts and ideas, you can post them in the comments below or blog them with the tag #deved.

Look forward to seeing you at Dev8D!

JLeRN Hackday – Issues Identified http://blogs.cetis.org.uk/lmc/2012/02/01/jlern-hackday-issues-identified/ http://blogs.cetis.org.uk/lmc/2012/02/01/jlern-hackday-issues-identified/#comments Wed, 01 Feb 2012 19:38:09 +0000 http://blogs.cetis.org.uk/lmc/?p=533 Last week I went to the hackday organised by the JLeRN team and CETIS to kick off Mimas’ JLeRN Experiment. If you haven’t come across JLeRN before, it’s a JISC funded exploratory project to build an experimental Learning Registry node. The event, which was organised by JLeRN’s Sarah Currier and CETIS’ dear departed John Robertson, brought a small but enthusiastic bunch of developers together to discuss how they might use and interact with the JLeRN test node and the Learning Registry more generally.

One of the aims of the day was to scope some use cases for the JLeRN Experiment while the technical developers discussed the implementation of the node and explored potential development projects. We didn’t exactly come up with use cases per se, but we did discuss a wide range of issues. JLeRN are limited in what they can do by the relatively short timescale of the project, so the list below represents issues we would like to see addressed in the longer term.

Accessibility

The Learning Registry (LR) could provide a valuable opportunity to gather accessibility stories. For example it could enable a partially-sighted user to find resources that had been used by other partially-sighted users. But accessibility information is complex: how could it be captured and fed into the LR? Is this really a user profiling issue? If so, what are the implications for data privacy? If you are recording usage data you need to notify users of what you are doing.

Capturing, Inputting and Accessing Paradata

We need to consider how systems generate paradata, how that information can be captured and fed back to the LR. The Dynamic Learning Maps curricular mapping system generates huge amounts of data from each course; this could be a valuable source of paradata. Course blogs can also generate more subjective paradata.

A desktop widget or browser plugin with a simple interface that captures information about users, resources, content, context of use, etc. would be very useful. Users need simplified services to get data in and out of the LR.

Once systems can input paradata, what will they get back from the LR? We need to produce concrete use cases that demonstrate what users can do with the paradata they generate and input. And we need to start defining the structure of the paradata for various use cases.
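To make the discussion concrete, here is a minimal sketch of what a paradata assertion might look like, loosely following the Learning Registry’s actor/verb/object activity conventions. Everything in it (the resource URL, the actor description, the course code) is an invented placeholder, not real programme data:

```python
import json

# A minimal paradata "activity" assertion, loosely following Learning
# Registry conventions: who (actor) did what (verb) to which resource
# (object), and when. All values below are invented placeholders.
paradata = {
    "activity": {
        "actor": {
            "objectType": "educator",          # hypothetical actor type
            "description": ["biology", "HE"],  # free-text descriptors
        },
        "verb": {
            "action": "used",                  # e.g. used / rated / downloaded
            "date": "2012-01-25",
            "context": {"objectType": "course", "description": "BIO101"},
        },
        "object": "http://example.ac.uk/oer/cell-division",  # the resource
        "content": "An educator used this OER in an HE biology course.",
    }
}

# Serialise the assertion ready for submission to a node.
doc = json.dumps(paradata, indent=2)
print(doc)
```

Defining even a handful of structures like this, one per use case, would go a long way towards answering the “what do we get back?” question.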

There are good reasons why the concept of “actor” has been kept simple in the LR spec but we may need to have a closer look at the relationship between actors and paradata.

De-duplication is going to become a serious issue and it’s unclear how this will be addressed. Data will need to be normalised. Will the Learning Registry team in the US deal with the big global problems of de-duplication and identifiers? This would leave developers to deal with smaller issues. If the de-duplication issue was sorted it would be easy to write server-side JavaScript.
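As a small illustration of the normalisation involved, the sketch below canonicalises resource URLs (lowercased host, default ports and trailing slashes dropped) before comparing records, so trivially different locators collapse together. The record shape and URLs are invented for the example:

```python
from urllib.parse import urlsplit, urlunsplit

def normalise_url(url):
    """Canonicalise a resource URL so trivially-different forms compare
    equal: lowercase scheme and host, drop default ports and trailing
    slashes."""
    parts = urlsplit(url.strip())
    host = parts.hostname or ""
    if parts.port and parts.port not in (80, 443):
        host = "%s:%d" % (host, parts.port)
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))

def deduplicate(records):
    """Keep the first record seen for each normalised resource URL."""
    seen = {}
    for record in records:
        key = normalise_url(record["resource_locator"])
        seen.setdefault(key, record)
    return list(seen.values())

# Invented example records; the first two point at the same resource
# in different forms and collapse into one.
records = [
    {"resource_locator": "HTTP://Example.ac.uk:80/oer/1/"},
    {"resource_locator": "http://example.ac.uk/oer/1"},
    {"resource_locator": "http://example.ac.uk/oer/2"},
]
print(len(deduplicate(records)))
```

Real de-duplication would of course need fuzzier matching than this, but even simple canonicalisation removes a lot of noise.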

Setting Up and Running a Node

It’s difficult for developers to find the information they need in order to set up a node as it tends to be buried in the LR mailing lists. The relevant information isn’t easily accessible at present. The “20 minute” guides are simple to read but complex to implement. It’s also difficult to find the tools that already exist. Developers and users need simple tools and services and simplified APIs for brokerage services.

Is it likely that HE users will want to build their own nodes? What is the business model for running a node? Running a node is a cost. Institutions are unlikely to be able to capitalise on running a node, however they could capitalise by building services on top of the node. Nodes run as services are likely to be a more attractive option.

Suggestions for JISC

It would be very useful if JISC funded a series of simple tools to get data into and out of JLeRN. Something similar to the SWORD demonstrators would be helpful.

Fund a tool aimed at learning technologists and launch it at ALT-C for delegates to take back to their institutions and use.

A simple “accessibility like” button would be a good idea. This could possibly be a challenge for the forthcoming DevEd event.

Nodes essentially have to be sustainable services but the current funding model doesn’t allow for that. Funding tends to focus on innovation rather than sustainable services. Six months is not really long enough for JLeRN to show what can really be done. Three years would be better.

With thanks to…

Sarah Currier (MIMAS), Suzanne Hardy (University of Newcastle), Terry McAndrew (University of Leeds), Julian Tenney (University of Nottingham), Scott Wilson (University of Bolton).

CETIS OER Visualisation Project http://blogs.cetis.org.uk/lmc/2011/12/06/cetis-oer-visualisation-project/ http://blogs.cetis.org.uk/lmc/2011/12/06/cetis-oer-visualisation-project/#comments Tue, 06 Dec 2011 15:52:19 +0000 http://blogs.cetis.org.uk/lmc/?p=480 As part of our work in the areas of open educational resources and data analysis, CETIS are undertaking a new project to visualise the outputs of the JISC / HEA Open Educational Resource Programmes and we are very lucky to have recruited data wrangler extraordinaire Martin Hawksey to undertake this work. Martin’s job will be, firstly, to develop examples and workflows for visualising OER project data stored in the JISC CETIS PROD database, and secondly to produce visualisations around OER content and collections produced by the JISC / HEA programmes. Oh, and he’s only got 40 days to do it! You can read Martin’s thoughts on the task ahead over at his own blog MASHe:

40 days to let you see the impact of the OER Programme #ukoer

PROD Data Analysis

A core aspect of CETIS support for the OER Phase 1 and 2 Programmes has been the technical analysis of tools and systems used by the projects. The primary data collection tool used for this purpose is the PROD database. An initial synthesis of this data has already been completed by R. John Robertson, however there is potential for further analysis to uncover potentially richer information sets around the technologies used to create and share OERs.
This part of the project will aim to deliver:

  • Examples of enhanced data visualisations from OER Phase 1 and 2.
  • Recommendations on use and applicability of visualisation libraries with PROD data to enhance the existing OER dataset.
  • Recommendations and example workflows including sample database queries used to create the enhanced visualisations.

And we also hope this work will uncover some general issues including:

  • Issues around potential workflows for mirroring data from our PROD database and linking it to other datasets in our Kasabi triple store.
  • Identification of other datasets that would enhance PROD queries, and some exploration of how to transform and upload them.
  • General recommendations on wider issues of data, and observed data maintenance issues within PROD.

Visualising OER Content Outputs

The first two phases of the OER Programme produced a significant volume of content, however the programme requirements were deliberately agnostic about where that content should be stored, aside from a requirement to deposit or reference it in Jorum. This has enabled a range of authentic practices to surface regarding the management and hosting of open educational content; but it also means that there is no central directory of UKOER content, and no quick way to visualise the programme outputs. For example, the content in Jorum varies from a single record for a whole collection, to a record per item. Jorum is working on improved ways to surface content and JISC has funded the creation of a prototype UKOER showcase, in the meantime though it would be useful to be able to visualise the outputs of the Programmes in a compelling way. For example:

  • Collections mapped by geographical location of the host institution.
  • Collections mapped by subject focus.
  • Visualisations of the volume of collections.
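Whichever visualisation library the project settles on, the step behind each of these images is the same: aggregating collection records by institution or subject. A minimal sketch with invented sample records (the real data would come from PROD and the Jorum deposit records):

```python
from collections import Counter

# Invented sample of OER collection records; real data would come
# from PROD and Jorum.
collections = [
    {"institution": "University A", "subject": "Medicine"},
    {"institution": "University A", "subject": "Engineering"},
    {"institution": "College B",    "subject": "Medicine"},
    {"institution": "University C", "subject": "Medicine"},
]

by_institution = Counter(c["institution"] for c in collections)
by_subject = Counter(c["subject"] for c in collections)

# These counts are what a map or chart would then visualise.
for subject, n in by_subject.most_common():
    print(subject, n)
```

Mapping by geographical location then only needs one extra lookup, from institution name to coordinates.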

We realise that the data that can be surfaced in such a limited period will be incomplete, and that as a result these visualisations will not be comprehensive; however we hope that the project will be able to produce compelling, attractive images that can be used to represent the work of the programme.

The deliverables of this part of the project will be:

  • Blog posts on the experience of capturing and using the data.
  • A set of static or dynamic images that can be viewed without specialist software, with the raw data also available.
  • Documentation/recipes on the visualisations produced.
  • Recommendations to JISC and JISC CETIS on visualising content outputs.
UKOER 3 Technical Reflections http://blogs.cetis.org.uk/lmc/2011/11/24/ukoer-3-technical-reflections/ http://blogs.cetis.org.uk/lmc/2011/11/24/ukoer-3-technical-reflections/#comments Thu, 24 Nov 2011 16:28:43 +0000 http://blogs.cetis.org.uk/lmc/?p=474 The Technical Requirements for the JISC / HEA OER 3 Programme remain unchanged from those established for UKOER 2. These requirements can be found here: OER 2 Technical Requirements. However, many projects now have considerable experience and we would anticipate that they will engage with some of the ongoing technical challenges in the resource sharing and description domains.

We still don’t mandate content standards, however given the number of projects in this phase that are releasing ebooks we would anticipate seeing a number of projects using ePub. We would be interested in:

  1. Your experiences of putting dynamic content into ePub format (e.g. animations, videos)
  2. Your investigations of workflows to create/ publish multiple ebook formats at once, and of content management systems that support this.

Points to reflect on:

  • Resources should be self-described, i.e. should have something like a title page (at the front) or credits page (at the back) that clearly states information such as author, origin, licence and title. For some purposes (e.g. search engine optimisation) this is preferable to encoded metadata hidden away in the file (e.g. EXIF metadata embedded in an image) or as a detached record on a repository page or separate XML file. Note that such a human-readable title or credits page could be marked up as machine-readable metadata using RDFa, microformats or microdata (see schema.org).
  • Feeds. We also encourage projects to disseminate metadata through, e.g., RSS, ATOM or OAI-PMH. “The RSS / ATOM feed should list and describe the resources produced by the project, and should itself be easy to find.” It would be useful to provide descriptions of all the OERs released by a project in this way, not just the most recent. Projects should consider how this can be achieved even if they release large numbers of OERs.
  • For auditing and showcasing reasons it is really useful to be able to identify the resources that have been released through the projects in this programme. The required project tag is an element in this, but platforms that allow the creation of collections can also be used.
  • Activity data and paradata. Projects should consider what ‘secondary’ information they have access to about the use of / or interest in their resources, and how they can capture and share such information.
  • Tracking OER use and reuse. We don’t understand why projects aren’t worrying more about this.
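As a concrete illustration of the feeds point above, the sketch below builds a minimal RSS 2.0 feed describing two invented OERs using only the Python standard library. A real project feed would describe every resource released, with the licence stated per item:

```python
import xml.etree.ElementTree as ET

# Invented example OERs; a real feed would list every resource the
# project has released, not just the most recent.
oers = [
    {"title": "Cell Division OER", "link": "http://example.ac.uk/oer/1",
     "description": "Slides and notes. Licence: CC BY 3.0."},
    {"title": "Circuit Basics OER", "link": "http://example.ac.uk/oer/2",
     "description": "Lab workbook. Licence: CC BY-SA 3.0."},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example UKOER Project Feed"
ET.SubElement(channel, "link").text = "http://example.ac.uk/oer/"
ET.SubElement(channel, "description").text = "All OERs released by the project."

# One <item> per resource, each carrying title, link and a
# description that includes the licence.
for oer in oers:
    item = ET.SubElement(channel, "item")
    for field in ("title", "link", "description"):
        ET.SubElement(item, field).text = oer[field]

feed_xml = ET.tostring(rss, encoding="unicode")
print(feed_xml[:80])
```

Keeping the feed generated from the same records as the project website makes the “describe everything, not just recent items” requirement essentially free.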