Badges at the CETIS conference 2012

Mozilla open badges that is.

Simon and I organised a session “Are open badges the future for recognition of skills?” for the CETIS conference last week, with more than a little help from Doug Belshaw. As described in more detail on the session’s wiki page, the programme was simple: presentations from Doug and Simon followed by discussion structured around a SWOT analysis for the use of badges in two scenarios.

Doug’s conference blog has his slides, audio recording and his own reflections. One of the highlights for me was almost incidental to badges: I hadn’t come across the idea of “stealth assessment” before. Simply put, stealth assessment involves monitoring what people achieve and then telling them what it qualifies them for. So a young child might be told that they have just swum 10m and now qualify for an achievement badge (kinder than putting children through the stress of pre-arranged assessments).

If Doug’s presentation was about “why?”, Simon’s was about “how?”. He presented some requirements for badge systems, and also considered how close the Mozilla open badge framework comes to fulfilling them.

The second half of the session was spent in group discussion structured around a SWOT analysis of two scenarios outlined by the groups:

Scenario 1, formative assessment in a high stakes field (medical education)

Strengths:

  • Assessment can be continuous, accreditation expires if not renewed.
  • Badges are machine processable as well as human readable.
  • Cumulative: can show progress being made towards a degree

Weaknesses:

  • New, and therefore not trusted

Opportunities:

  • Works well with highly competitive students (e.g. medics)
  • Could be transferred between institutions

Threats:

  • Perception of being trivial
  • Unwelcome addition to current systems

Scenario 2, within a community of practice

Strengths:

  • Recognition by community of practice
  • Transferability to other communities

Weaknesses:

  • Lack of context (range, scope etc.)
  • People unwilling to dig into detail provided
  • Unclear governance
  • Proliferation

Opportunities:

  • Currency outside the community
  • Could include qualifications
  • Branding opportunities
  • Invitation to examine evidence in detail

Threats:

  • Over simplification
  • Brand recognition dominates quality
  • Issued by inappropriate bodies

There was a lot of discussion, and I can’t really do it justice here; I shall mention only a couple of comments. The first was reported on Doug’s blog:

“We’re sick to death of hearing that X, Y or Z is going to change the world. Accept that it isn’t and move on.”

Hmm, yes, that may be a fair point. OTOH sometimes something does change the world, or at least parts of it, and it is important to keep a watch for the things that might, so I’ve no regrets about being involved in this session on that count.

The other comment came in the form of tweets from Lawrie Phipps:

Edging toward believing that "badges" may be a solution to a problem we've almost fixed with other things #cetis12

thinking about open badges, surely once they're accepted by everyone, they bcome institutionalised, and look like something we have now?

Sentiments that I have some sympathy with; however, it has happened before that you think you have solved a problem locally, only for some external solution to come along and be adopted widely enough to be significant. So, while we don’t have an answer to the question we posed in the session title, I think open badges are looking relevant enough for it to be important that CETIS keeps a watching brief on them.

A lesson in tagging for UKOER

We’ve been encouraging projects in the HE Academy / JISC OER programmes to use platforms that help get the resources out onto the open web and into the places where people look, rather than expecting people to come to them. YouTube is one such place. However, we also wanted to be able to find all the outputs from the various projects wherever they had been put, without relying on a central registry, so one of the technical recommendations for the programme was that resources are tagged UKOER.

So, I went to YouTube and searched for UKOER, and this was the top hit. Well, it’s a lesson in tagging I suppose. I don’t think it invalidates the approach: we never expected 100% fidelity, and this seems to be a one-off among the first 100 or so of the 500+ results. And it’s great to see results from Chemistry.FM and CoreMaterials topping 10,000 views.

Text and Data Mining workshop, London 21 Oct 2011

There were two themes running through this workshop organised by the Strategic Content Alliance: technical potential and legal barriers. An important piece of background is the Hargreaves report.

The potential of text and data mining is probably well understood in technical circles, and was well articulated by John McNaught of NaCTeM. Briefly, the potential lies in the extraction of new knowledge from old, through the ability to surface implicit knowledge and show semantic relationships. This is something that could not be done by humans, not even crowds, because of the volume of information involved. Full text access is crucial: John cited a finding that only 7% of the subject information extracted from research papers was mentioned in the abstract. There was a strong emphasis, from, for example, Jeff Lynn of the Coalition for a Digital Economy and Philip Ditchfield of GSK, on the need for business and enterprise to be able to realise this potential if they were to remain competitive.

While these speakers touched on the legal barriers, it was Naomi Korn who gave them a full airing. They start in the process of publishing (or before), when publishers acquire copyright, or a licence to publish with enough restrictions to be equivalent. The problem is that the first step of text mining is to make a copy of the work in a suitable format. Even for works licensed under the most liberal open access licence academic authors are likely to use, CC-BY, this requires attribution. Naomi spoke of attribution stacking, a problem John had also mentioned: when a result is found by mining thousands of papers, do you have to attribute all of them? This sort of problem occurs at every step of the text mining process. In UK law there are no copyright exceptions that can apply: text mining is not covered by fair dealing (though it is covered by fair use in the US, and by similar exceptions in Norwegian and Japanese law, nowhere else); the exceptions for transient copies (such as those in a computer’s memory when reading online) only apply if the copy has no intrinsic value.

The Hargreaves report seeks to redress this situation. Copyright and other IP law is meant to promote innovation, not stifle it, and copyright is meant to cover creative expressions, not the sort of raw factual information that data mining processes. Ben White of the British Library suggested an extension of fair dealing to permit data mining of legally obtained publications. The important thing is that, as parliament acts on the Hargreaves review, people who understand text mining and care about legal issues make sure that any legislation is sufficient to allow innovation; otherwise innovators will have to move to jurisdictions like the US, Japan and Norway where the legal barriers are lower (I’ll call them ‘text havens’).

Thanks to JISC and the SCA for organising this event; there’s obviously plenty more for them to do.

LRMI: after the meeting

Last week I was at the first face-to-face meeting of the Learning Resource Metadata Initiative technical working group; here are my reflections on it. In short, what I said in a previous post was about right, and the discussion went the way I hoped. One addition, though, that I didn’t cover in that post, was some discussion of accessibility conditions. That was one of a number of issues set aside as being of more general importance than learning resources and best dealt with that wider scope in mind, the resources of the LRMI project being better spent on issues that are specific to learning materials.

An interesting take on the scope of the project, which someone (I forget who) raised during the meeting, concerns working within the constraints of the search engine interface and results page. Yes, Google, Bing and Yahoo have advanced search interfaces that allow check-box selection of conditions such as licence requirements, and they also provide and support specialist search, e.g. Google Scholar and custom search engines; however, real success will come if the information that can be marked up as a result of LRMI is effective for people using the default search engine. What this means is that the actions that result from a use case or scenario should be condensed down to a few key words typed into a search box,

[Image: Bing search box]

and the information displayed as an outcome should fit into an inch or two of screen space on a search engine results page.
[Image: Bing search result]

That’s quite useful in terms of focussing on what is really important, but of course it won’t meet everyone’s ambitions for learning resource metadata. The question this raises is: to what extent should the schema.org vocabulary attempt to meet these requirements? That, I think, is still an open question, but I am sure that embedded metadata markup such as schema.org has limitations, and that external metadata such as that provided by the IEEE LOM, Dublin Core and ISO MLR is a complementary approach that may be necessary to meet some of the more extensive use cases for learning resource metadata. Indeed, one requirement of LRMI which was raised during the meeting is to provide a means of linking to external metadata. One more observation on this line: at least on the basis of this meeting, it seems that the penetration of standards for educational metadata into the commercial educational publishing world (both online and more conventional) is not great.

A final issue concerning the scope of LRMI, and of schema.org more generally, with respect to other approaches to handling metadata is relevant to the idea of linking to external metadata, but is better illustrated by the question of how to convey licence information. At the moment there is no schema.org term for indicating licence terms; however, there is a perfectly good approach advocated by Creative Commons and recognised by Google and many other search and content providers, i.e. a link or anchor with the attributes rel="license" href="licenceURL", optionally spanning a textual description of the licence (no prizes for guessing how I think this could be extended to links to external metadata). Is it helpful to reproduce this in schema.org? On the one hand, one of the aims of schema.org is to offer web masters a unified approach and a single coherent set of recommendations for embedding metadata; on the other hand, this approach is in accord with HTML in general and is already widespread, so perhaps any clarification or coherence in terms of the schema.org offering would come at the expense of muddying and fragmenting practice with respect to how licence information is embedded in HTML more generally.
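
By way of illustration, the Creative Commons style markup looks roughly like this (a sketch only: the licence URL and link text stand in for whatever licence actually applies, and the commented-out line is my hypothetical extension of the same pattern for linking to an external metadata record, not anything currently recognised):

<!-- rel="license" carries the licence URI; the anchor text is the human-readable description -->
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
  Licensed under a Creative Commons Attribution 3.0 Licence
</a>
<!-- hypothetical extension pointing at an external metadata record:
     <a rel="metadata" href="http://example.org/metadata/record.xml">full description</a> -->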

Testing Caprét

I’ve been testing the alpha release of CaPRéT, a tool that aids attribution and tracking of openly licensed content from web sites. According to the CaPRéT website:

When a user cuts and pastes text from a CaPRéT-enabled site:

  • The user gets the text as originally cut, and if their application supports it, the pasted text will also automatically include attribution and licensing information.
  • The OER site can also track what text was cut, allowing them to better understand how users are using their site.

I tested CaPRéT on a single page (my institutional home page) and on this blog. To enable CaPRéT for material on a website you need to include links to four JavaScript files in your web pages. I went with the files hosted on the CaPRéT site, so all I had to do was put this into my home page’s <head>. (Testing on my home page is easier to describe, since the options for WordPress depend on the theme you have installed.)


<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script>
<script src="http://capret.mitoeit.org/js/jquery.plugin.clipboard.js" type="text/javascript"></script>
<script src="http://capret.mitoeit.org/js/oer_license_parser.js" type="text/javascript"></script>
<script src="http://capret.mitoeit.org/js/capret.js" type="text/javascript"></script>

Then you need to put the relevant information, properly marked up, into the web page. Currently CaPRéT cites the Title, source URL, Author, and Licence URI of the page from which the text was copied. The easiest way to get this information into your page is to use a platform that generates it automatically, e.g. WordPress or Drupal with the OpenAttribute plug-in installed. The next easiest way is to fill out the form at the Creative Commons License generator; be sure to supply the additional information if you use that form.

If you prefer to do the markup manually, this is what does the work (a combined sketch follows below):

Title is picked up from any text marked up as
<span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type"></span> or, if that’s not found, from the page <title> in the <head>.

Source URL comes from the page URL.

Author name is picked up from the contents of <a xmlns:cc="http://creativecommons.org/ns#" href="http://jisc.cetis.org.uk/contact/philb" property="cc:attributionName" rel="cc:attributionURL"></a> (actually, the author attribution URL in the href attribute isn’t currently used, so this could just as well be a span).

Licence URI is picked up from the href attribute of <a rel="license" href="http://creativecommons.org/licenses/by/3.0/">.
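
Putting those pieces together, a minimal hand-marked-up attribution block might look something like the following (a sketch only: the title, author name and URLs are placeholders, and I’m assuming a CC BY 3.0 licence):

<p xmlns:dct="http://purl.org/dc/terms/" xmlns:cc="http://creativecommons.org/ns#">
  <!-- Title: dct:title marks the text used as the resource title -->
  <span href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">An Example Resource</span>
  by
  <!-- Author: cc:attributionName gives the name; the attribution URL isn't currently used by CaPRéT -->
  <a href="http://example.org/author" property="cc:attributionName" rel="cc:attributionURL">A. N. Author</a>
  is licensed under a
  <!-- Licence: rel="license" with the licence URI in the href attribute -->
  <a rel="license" href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution 3.0 Licence</a>.
</p>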

You might want to suggest other things that could be in the attribution/citation.

Reflections
As far as attribution goes it seems to work. Copy something from my home page or this blog and paste it elsewhere and the attribution information should magically appear. What’s also there is an embedded tracking gif, but I haven’t tested whether that is working.

What I like about this approach is that it converts self-description into embedded metadata. Self-description is the practice of including within a resource the information that is important for describing it: the title, author, date etc. Putting this information into the resource isn’t rocket science, it’s just good practice. To convert this information into metadata it needs to be encoded in such a way that a machine can read it; that’s where the RDFa comes in. What I like about RDFa (and microformats and microdata) as a way of publishing metadata is that it builds on the actual descriptions: the very same ones that it’s just good practice to include in the resource. Having them on view in the resource is likely to help with quality assurance, and, while the markup is fiddly (and best dealt with by the content management system in use, not created by hand), creating the metadata should be no extra effort over what you should do anyway.

Caprét is being developed by MIT OEIT and Tatemae (OERGlue) as part of the JISC CETIS mini projects initiative; it builds on the browser plug-in developed independently by the OpenAttribute team.

Amazon kindle and textbooks

Amazon are renting textbooks for the Kindle. Over the last couple of months I’ve been using a Kindle. We bought it with the idea of seeing how it might be useful for educational content, eTextbooks at the most basic level, though I’ve already written about my misgivings on that score. Well, we quickly came to the conclusion that the Kindle device wasn’t much good for eTextbooks: no colour, screen refresh too slow for dynamic content, not good for non-linear content (breakout boxes, footnotes, even multiple columns). Sure, it displays PDFs and HTML, but it’s difficult to get a magnification that works well, navigating around the page is clunky, and it doesn’t do ePub. But it’s fine for novels, and there may be some educational utility in the bookmarking, note making and the sharing of these that is possible (though making notes using the Kindle isn’t a great user experience). Anyway, given all that, it’s interesting to note that the textbooks shown in the Amazon ad I mention above rely on colour, are non-linear, and both of them would be really engaging if the diagrams were interactive or even just animated. Neither of them is being displayed on a Kindle.

[Aside: textbook rental might be an attractive idea for some, but pricing based on a typical rental of 30 days!?]

The hunting of the OER

“As internet resources are being moved, they can no longer be traced,” I read in a press release from Knowledge Exchange. This struck me as important for OERs, since part of their “openness” is the licence to copy them, and I have recently been on something of an OER hunt which highlights the importance of using identifiers correctly and of “curatorial responsibility”.

The OER I was hunting was an “Interactive timeline on Anglo-Dutch relations (50 BC to 1830)” from the UKOER Open Dutch project. It was recommended to me a year or so ago as a great output whose utility pretty much anyone could see, one that used the MIT SIMILE timeline software to create a really engaging interface. I liked it, but more importantly for what I’m considering now, I used it as an example when investigating whether putting resources into a repository enhanced their visibility on Google (in this case it did).

Well, that was a year or more ago. The other week I wanted to find it again, so I went to Google and searched for “anglo dutch timeline” (without the quotes). Sure enough, I got three results for the one I was looking for on the first page (of course, your results may vary; Google’s like that nowadays). These were, from the bottom up:

  1. A link to a record in the NDLR (the Irish National Digital Learning Resources Repository) which gave the link URL as http://open.jorum.ac.uk:80/xmlui/handle/123456789/517 (see below)
  2. A link to a resource page in HumBox, where the resource turned out to be a manifest-only content package (i.e. metadata in a zip file). Looking into it, there’s no resource location given in the metadata, and the pointer to the content (which should be the resource being described) actually points to the Open Dutch home page.
  3. Finally, a link to a resource page in Jorum. This also describes the resource I was looking for but actually points to the Open Dutch project page. The URL of the Jorum page describing the resource is given as the persistent link; I believe that the NDLR harvests metadata from Jorum, so my guess is that that is why the NDLR lists this as the location of the resource.

Finding descriptions of a resource isn’t really helpful to many people. OK, I now know the full name and the author of the resource, which might help me track down the resource, but at this point I couldn’t. Furthermore, nobody wants to find a description of a resource that links to a description of the resource. I think one lesson concerns the importance of identifiers: “describe the thing you identify; identify the thing you describe.”

This story (and I very much suspect it is not an isolated case) has significance for debates about whether repositories should accept metadata-only “representations” of resources. Whether or not it is a good idea to deposit resources you are releasing as OERs in a third-party repository will depend on what you want to achieve by releasing them; whether or not it is a good idea for a repository to take and store resources from third parties will depend on what the repository’s sponsors wish to facilitate. Either way, someone needs to take some curatorial responsibility for the resource and for the metadata about it. That means on the one hand making sure that the resource stays on the web and on the other hand making sure that the metadata record continues to point to the right resource (automatic link checking for HTTP 404 responses etc. helps but, as this post on link rot notes, it’s not always that simple).

By the way, thanks to the incomparable David Kernohan, I now know that the timeline is currently at http://www.ucl.ac.uk/alternative-languages/OER/timeline/.

What I didn’t tweet from #OpenN11

For various reasons I didn’t get around to tweeting from the Open Nottingham 2011 seminar last Thursday, but that just gives me the excuse to record my impressions of it here, perhaps not in 140 chars per thought but certainly without much by way of discursive narrative.

Lack of travel options meant that I arrived late and missed the first couple of presentations.

Prof. Wyn Morgan (Director of Learning and Teaching, University of Nottingham): Nottingham started its OER initiative in 2006-7; it launched U-Now about the same time as the OU’s OpenLearn, and well before HEFCE funding. Before that they were “hiding behind passwords and VLEs”. Motivations included corporate social responsibility, widening participation, marketing/promotion, sharing materials with overseas campuses, and cost savings.

Wayne Mackintosh (WikiEducator, OERU). Like many, Wayne went into education in order to share knowledge: OER aligns with the core values of those in HE. He compared the model of OERU to the University of London external examinations ca. 1850: decoupling learning from accreditation. While the open content is there, open curriculum development is something that needs working on.

Steve Stapleton (Open Learning Support Officer, The University of Nottingham): case studies on re-use. One case study showed that reuse happens at the micro-level, is routine, and is not distinguished from other material on the web, hence it is difficult to see what is being used. The other involved students remixing OERs for the next year of students to use: an example of open licensing enabling pedagogy.

Greg DeKoenigsberg (founder and first chairman of the Fedora Project Board at RedHat, now CTO of ISKME): What I learnt from open source. “People are the way we filter information” but we are interested in “tiny niche domains, the micro communities” which leads to the question “How do I find people like me?” Argued that the driver for open content may be the same as the driver for open source software: it allows you to stop competing on “non-differentiated value” and focus on what you do that is different.

Andy Lane (Director of OpenLearn, Open University): SCORE. Mentioned an interesting idea in passing, ‘born open’: after 5 years open content is becoming mainstream at the OU; they no longer think of releasing existing content as open, rather they are developing open content.

Rob Pearce (HE Academy Engineering Subject Centre) spoke about simple tags (date-of-birth codes) that can be used to track resources, by providing text within the resource that can be searched for on Google.

Nathan Yergler gave his last presentation as CTO of Creative Commons. Key points: discovery on the web works best when it aligns with the structure of the web, and the structure of the web is its links. Nathan suggested that the most important link for online learning is attribution. The next step for discovery (he says) is the use of structured data to support search, e.g. RDFa in Creative Commons licences, and we need to develop practices of linking, attribution and annotation to support this.

Finally, in response to a question from Amber Thomas, “what could we do to mess this up?”, Nathan answered “check-box openness”, that is, stuff that is open just because it is a grant requirement, but with no real commitment. Which aligns nicely with an observation from Wyn’s presentation: although there is support from the top for Open Nottingham, there is no mandate. Individuals get involved if they think it is worthwhile, which many of them do.

Many thanks to those who presented and organised this seminar.


RDFa Rich snippets for educational content

Prompted by a comment from Andy Powell that

It would be interesting to think about how much of required resource description for UKOER can be carried in the RDFa vocabularies currently understood by Google. Probably quite a lot.

I had a look at Google’s webmaster advice on Marking up products for rich snippets.

My straw man mapping from the UKOER description requirements to Rich Snippets was:

Mandated Metadata:

  • Programme tag = Brand?
  • Project tag = Brand?
  • Title = name
  • Author / owner / contributor = seller?
  • Date = (no obvious mapping)
  • URL = offerURL (but not on the OER page itself)
  • Licence information [use CC code] = price=0

Suggested Metadata:

  • Language = (no obvious mapping)
  • Subject = category
  • Keywords = category?
  • Additional tags = category?
  • Comments = a review
  • Description = description

I put this into a quick example, and you can see what Google makes of it using the rich snippet testing tool. [I’m not sure I’ve got the nesting of a Person as the seller right.]
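
For what it’s worth, the quick example was along the following lines (a rough sketch only: I’m assuming the data-vocabulary.org Product vocabulary that Google’s rich snippets documentation described at the time, and the title, tags, description and author name are invented for illustration):

<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Product">
  <!-- Title maps to name -->
  <span property="v:name">An Example Open Educational Resource</span>
  <!-- Programme and project tags shoe-horned into brand -->
  <span property="v:brand">ukoer</span>
  <!-- Subject, keywords and additional tags as category -->
  <span property="v:category">Chemistry</span>
  <!-- Description maps straight across -->
  <span property="v:description">A short description of the resource.</span>
  <!-- Licence information expressed as a zero price, with the author/owner as the seller -->
  <span rel="v:offerDetails">
    <span typeof="v:Offer">
      <span property="v:price">0</span>
      <span rel="v:seller"><span typeof="v:Person">
        <span property="v:name">A. N. Author</span>
      </span></span>
    </span>
  </span>
</div>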

So, interesting? I’m not sure that this example shows much that is interesting. Trying to shoe-horn information about an OER into a schema that was basically designed for adverts isn’t ideal, but they have already done recipes as well, and once they’ve got the important stuff like that done they might have a go at educational resources. But it is kind-of interesting that Google are using RDFa; there seems to be a slow increase in the number of tools/sites that are parsing and using RDFa.