CETIS at OER13
http://blogs.cetis.org.uk/lmc/2013/03/21/cetis-at-oer13/
Thu, 21 Mar 2013 10:59:18 +0000

I was really encouraged to hear from our CETIS13 keynote speaker Patrick McAndrew that next week’s OER13 conference in Nottingham is shaping up to be the biggest yet. In our Open Practice and OER Sustainability session Patrick mentioned that the organising committee had expected numbers to be down from last year, as the 2012 conference had been run in conjunction with OCWC and attracted a considerable number of international delegates, and because UKOER funding has come to an end. In actual fact numbers have risen significantly. I can’t remember the exact figure Patrick quoted but I’m sure he said that over 200 delegates were expected to attend this year. This is good news as it does rather suggest that the UKOER programmes have had some success in developing and embedding open educational practice. It’s also good news for us because CETIS are presenting three (count ‘em!) presentations at this year’s conference :)

The Learning Registry: social networking for open educational resources?
Authors: Lorna M. Campbell, Phil Barker, CETIS; Sarah Currier, Nick Syrotiuk, Mimas,
Presenters: Lorna M. Campbell, Sarah Currier
Tuesday 26 March, 14:00-14:30, Room: B52
Full abstract here.

This presentation will reflect on CETIS’ involvement with the Learning Registry, JISC’s Learning Registry Node Experiment at Mimas (The JLeRN Experiment), and their potential application to OER initiatives. Initially funded by the US Departments of Education and Defense, the Learning Registry (LR) is an open source network for storing and distributing metadata and curriculum, activity and social usage data about learning resources across diverse educational systems. The JLeRN Experiment was commissioned by JISC to explore the affordances of the Learning Registry for the UK F/HE community within the context of the HEFCE funded UKOER programmes.
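
As a rough illustration of how the Learning Registry stores and distributes resource descriptions, the sketch below builds an LR-style document envelope of the kind a project might publish to a node. The field names follow my reading of the 0.23-era spec, and the node URL, resource URL and metadata values are all invented for illustration; treat this as an assumption-laden sketch, not a definitive client.

```python
import json

# Hypothetical node endpoint; a real client would POST the body to a node's
# /publish service (assumed endpoint name, per the 0.23-era spec).
NODE_PUBLISH_URL = "http://node.example.org/publish"

def make_envelope(resource_url, metadata, submitter):
    """Wrap a resource description in a Learning Registry-style envelope."""
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",
        "resource_data_type": "metadata",
        "active": True,
        "TOS": {"submission_TOS": "http://www.learningregistry.org/tos"},
        "identity": {"submitter": submitter, "submitter_type": "agent"},
        "resource_locator": resource_url,
        "payload_placement": "inline",
        "payload_schema": ["LRMI"],
        "resource_data": metadata,
    }

envelope = make_envelope(
    "http://example.ac.uk/oer/course-1",
    {"name": "Intro to Metadata", "learningResourceType": "lecture"},
    "Example University",
)
# The publish request body wraps one or more envelopes:
body = json.dumps({"documents": [envelope]})
```

The point of the envelope is that `resource_data` can hold metadata, curriculum alignment or usage data in any schema; the surrounding fields are just routing and provenance.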

An overview of approaches to the description and discovery of Open Educational Resources
Authors: Phil Barker, Lorna M. Campbell and Martin Hawksey, CETIS
Presenter: Phil Barker
Tuesday 26 March, 14:30-15:00, Room: B52
Full abstract here.

This presentation will report and reflect on the innovative technical approaches adopted by UKOER projects to resource description, search engine optimisation and resource discovery. The HEFCE UKOER programmes ran for three years from 2009 to 2012 and funded a large number and variety of projects focused on releasing OERs and embedding open practice. The CETIS Innovation Support Centre was tasked by JISC with providing strategic advice, technical support and direction throughout the programme. One constant across the diverse UKOER projects was their desire to ensure the resources they released could be discovered by people who might benefit from them: if no one can find an OER, no one will use it. This presentation will focus on three specific approaches with potential to achieve this aim: search engine optimisation, embedding metadata in the form of schema.org microdata, and sharing “paradata” information about how resources are used.
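
To make the schema.org microdata approach concrete, here is a sketch of LRMI properties embedded in an OER landing page and pulled back out with Python’s standard-library HTML parser. The page fragment and property values are invented for illustration; real pages would carry whatever LRMI properties the project chose to expose.

```python
from html.parser import HTMLParser

# Invented OER landing-page fragment with LRMI properties as schema.org
# microdata (itemscope/itemprop attributes on ordinary HTML elements).
PAGE = """
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="name">Introduction to Open Licensing</h1>
  <span itemprop="learningResourceType">presentation</span>
  <link itemprop="useRightsUrl"
        href="http://creativecommons.org/licenses/by/4.0/" />
</div>
"""

class ItempropCollector(HTMLParser):
    """Collect itemprop names with their text or href values."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.props = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("itemprop")
        if prop and "href" in attrs:      # link-valued property
            self.props[prop] = attrs["href"]
        elif prop:                        # text-valued property: grab next text
            self._current = prop

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

collector = ItempropCollector()
collector.feed(PAGE)
```

Because the properties live in the page itself, a search engine (or any crawler) can recover the description without a separate metadata record, which is the attraction of this approach for SEO-led discovery.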

Writing in Book Sprints
Authors: Phil Barker, Lorna M Campbell, Martin Hawksey, CETIS; Amber Thomas, University of Warwick.
Presenter: Phil Barker
Wednesday 27 March, 11:00-11:15, Room: A25
Full abstract here.

This lightning talk will outline a novel approach taken by JISC and CETIS to synthesise and disseminate the technical outputs and findings of three years of HEFCE funded UK OER Programmes. Rather than employing a consultant to produce a final synthesis report, the authors decided to undertake the task themselves by participating in a three-day book sprint facilitated by Adam Hyde of booksprints.net. Over the course of the three days the authors wrote and edited a complete draft of a 21,000 word book titled “Technology for Open Educational Resources: Into the Wild – Reflections of three years of the UK OER programmes”. While the authors all had considerable experience of the technical issues and challenges surfaced by the UK OER programmes, and had blogged extensively about these topics, it was a challenge to write a large coherent volume of text in such a short period. By employing the book sprint methodology and the Booktype open source book authoring platform the editorial team were able to rise to this challenge.

Another perspective on inBloom
http://blogs.cetis.org.uk/lmc/2013/03/05/another-perspective-on-inbloom/
Tue, 05 Mar 2013 15:20:30 +0000

Thanks to Pat Lockley for drawing my attention to Reuters’ interesting take on inBloom, the US K-12 development that I blogged about a couple of weeks ago. You can find the article here: K-12 student database jazzes tech startups, spooks parents. Just in case you missed it, inBloom is a new technology integration initiative for the US schools’ sector launched by the Shared Learning Collective and funded by the Carnegie Corporation and the Bill and Melinda Gates Foundation. One of the aims of inBloom is to create a:

Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.

I should confess that my interest in inBloom is purely on the technical side, as it builds on two core technologies that CETIS has had some involvement with: the Learning Registry and the Learning Resource Metadata Initiative. The Reuters article provides a rather different perspective on the development however, describing the initiative as:

a $100 million database built to chart the academic paths of public school students from kindergarten through high school.

In operation just three months, the database already holds files on millions of children identified by name, address and sometimes social security number. Learning disabilities are documented, test scores recorded, attendance noted. In some cases, the database tracks student hobbies, career goals, attitudes toward school – even homework completion.

Local education officials retain legal control over their students’ information. But federal law allows them to share files in their portion of the database with private companies selling educational products and services.

When reported in these terms, it’s easy to understand why some parents have raised concerns about the initiative. The report goes on to say

Federal officials say the database project complies with privacy laws. Schools do not need parental consent to share student records with any “school official” who has a “legitimate educational interest,” according to the Department of Education. The department defines “school official” to include private companies hired by the school, so long as they use the data only for the purposes spelled out in their contracts.

The database also gives school administrators full control over student files, so they could choose to share test scores with a vendor but withhold social security numbers or disability records.

That’s hardly reassuring to many parents.

And for good measure they then quote a concerned parent saying

“Once this information gets out there, it’s going to be abused. There’s no doubt in my mind.”

Parents from New York, Louisiana, the Massachusetts chapters of the American Civil Liberties Union and Parent-Teacher Association have also written to state officials “in protest” with the help of a civil liberties attorney in New York.

To be fair to Reuters it’s not all Fear, Uncertainty and Doubt, the article also puts forward some of the potential benefits of the development as well as expressing the drawbacks and concerns. I certainly felt it was quite a balanced article that raised some valid issues.

It also clarified one issue that had rather puzzled me about the TechCrunch’s original report on inBloom which quoted Rupert Murdoch as saying:

“When it comes to K-12 education, we see a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.”

At the time I couldn’t see the connection between inBloom and Rupert Murdoch, and TechCrunch didn’t make it explicit, however Reuters explains that the inBloom technical infrastructure was built by Amplify Education, a division of Rupert Murdoch’s News Corp. That explains that then.

Those of you who have been following the CETIS Analytics Series will be aware that such concerns about privacy, anonymity and large scale data integration and analysis initiatives are nothing new, however I thought this was an interesting example of the phenomenon.

It’s also worth adding that, as the parent of a primary school age child, it has never once occurred to me to enquire what kind of data the school records, who that data is shared with and in what form. To be honest I am pretty philosophical about these things. However it is interesting that people have a tendency not to ask questions about their data until a big / new / evil / transformative (delete according to preference) technology development like this comes along. So what do you think? Is it all FUD? Or is it time to get our tin hats out?

I’m still very interested to see if inBloom’s technical infrastructure and core technologies are up to the job, so I’ll continue to watch these developments with interest. And you never know, if my itchy nose gets the better of me I might even ask around to find out what happens to pupil data on this side of the pond.

Taking up the challenge…
http://blogs.cetis.org.uk/lmc/2013/02/28/taking-up-the-challenge/
Thu, 28 Feb 2013 19:11:54 +0000

Yesterday, David Kernohan challenged the ukoer community on the oer-discuss mailing list to write a blog post in response to a spectacularly wrongheaded Educause post titled:

Ten Years Later: Why Open Educational Resources Have Not Noticeably Affected Higher Education, and Why We Should Care

I had read the post the previous day and had already decided not to respond because tbh I just wouldn’t know where to begin.

However since David is offering “a large drink of the author’s choice” as the prize for the best response, I have been persuaded to take up the challenge. Which just goes to show there’s no better way to motivate folk than by offering drink. (Mine’s a G&T David, or a red wine, possibly both, though not in the same glass.)

I am still at a loss to offer a serious critique of this article so in the best spirit of OER, I am going to recycle what everyone else has already said. Reuse FTW!

The article can basically be summarised as follows:

It’s 10 years since MIT launched OpenCourseWare. Since then OERs have FAILED because they have not transformed and disrupted higher education. List of reasons for their failure: discoverability, quality control, “The Last Mile”, acquisition. The solution to these problems is to build a “global enterprise-level system” aka a “supersized CMS”. And look, here’s one I built earlier! It’s called LON-CAPA.

PS. “The entity that provides the marketplace, the service, and the support and keeps the whole enterprise moving forward is probably best implemented as a traditional company.”

I should point out that I am not familiar with LON-CAPA. I’m sure it’s a very good system as far as it goes, but I don’t think a “global enterprise-level system” is the answer to anything.

David Kernohan himself was quick off the mark when the article first started circulating, after tweeting a couple of its finer points:

“OERs have not noticeably disrupted the traditional business model of higher education”

“It is naïve to believe that OERs can be free for everybody involved.”

He concluded:

So the basic message of that paper is “OER IS BROKEN” and “NEED MOAR USER DATA”. Lovely.

Because, clearly, if we can’t measure the impact of something it is valueless.

Which is indeed a good point. Actually I think there are many ways you can measure the impact of OER but I’m not at all convinced that “disrupting traditional business models” is the only valid measure of success. After all, OER is just content + open licence at the end of the day. And we can’t expect content alone to change the world, can we?

This is the point that Pat Lockley was getting at when he tweeted:

My Blog will be coming soon “Why OER haven’t affected the growth of grass”

Facetious perhaps, but a very pertinent point. There has been so much hyperbole surrounding OER from certain quarters of the media that it’s all too easy to say “Ha! It’s all just a waste of money. OER will never change the world.” Well no, maybe not, but most right minded people never claimed it would. What we do have though, is access to a lot more freely available (both gratis and libre) clearly licenced educational resources out there on the open web. Surely that can’t be a bad thing, can it? If nothing else, OER has increased educators’ awareness and understanding of the importance of clearly licencing the content they create and use, and that is definitely a good thing.

Pat also commented:

I’m just tired of OER being about “research into OER”. The cart is so far before the horse.

Which is another very valid point. I probably shouldn’t repeat Pat’s later tweet when he reached the end of the article and discovered that the author was pimping his own system. It involved axes and lumberjacking. Nuff said.

Jim Groom was similarly concise in his criticism:

“For content to be truly reusable and remixable, it needs to be context-free.” Problematic.

What’s the problem with OER ten years on? Metadata. Hmmm, maybe it is actually imagination, or lack thereof. #killoerdead

While I don’t always agree with Mr Groom, I certainly do agree that such a partial analysis lacks imagination.

As is so often the case, it was left to Amber Thomas to see past the superficial bad and wrongness of the article to get at the issues underneath.

“The right questions, patchy evidence base, wrong solutions. And I still think oer is a descriptor not a distinct content type.”

And as is also often the case, I agree with Amber wholeheartedly. There are actually many valid points lurking within this article but, honestly, it’s like the last ten years never happened. For example, discussing discoverability, which I agree can be problematic, the author suggests:

The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.”

Yes! I agree!

Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed.

Erm…impossible? Really? Experimental maybe, difficult even, but impossible? No. Why no mention here of activity data, paradata, analytics? Like I said, it’s like the last ten years never happened.
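
In case it helps to show just how un-impossible the idea is, here is a toy sketch of the crowdsourced “used alongside” recommendation the author describes, derived from paradata-style records of which resources educators sequenced together. The course data is invented; a real service would harvest these sequences as activity data rather than hard-code them.

```python
from collections import Counter
from itertools import combinations

# Invented usage records: which resources each course sequenced together.
courses = [
    ["oer:intro", "oer:metadata", "oer:licensing"],
    ["oer:intro", "oer:licensing"],
    ["oer:metadata", "oer:paradata"],
]

# Count how often each pair of resources co-occurs in a course.
co_use = Counter()
for resources in courses:
    for a, b in combinations(sorted(set(resources)), 2):
        co_use[(a, b)] += 1

def recommend(resource, top=3):
    """Resources most often used in the same course as `resource`."""
    scores = Counter()
    for (a, b), n in co_use.items():
        if a == resource:
            scores[b] += n
        elif b == resource:
            scores[a] += n
    return [r for r, _ in scores.most_common(top)]
```

Calling `recommend("oer:intro")` surfaces the resources most often sequenced alongside it: exactly the “people who used this also used that” behaviour the quote asks for, needing nothing more exotic than co-occurrence counts over activity data.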

Anyway I had better stop there before I say something unprofessional. One last comment though, Martin Hawksey pointed out this morning that there is not a single comment on the Educause website about this article, and asked:

Censorship? (That’s the danger of CMSs configured this way, someone else controls the information.)

I can’t comment on whether there has been censorship, but there has certainly been control. (Is there a difference? Discuss.) In order to comment on the Educause site you have to register, which I did yesterday afternoon and got a response informing me that it would take “several business hours” to approve my registration. I finally received the approval notification at nine o’clock at night, by which point I had better things to do with my time than comment on “global enterprise-level systems” and “supersized CMS”.

So there you have it David. Do I get that G&T?

ETA: The author of this article, Gerd Kortemeyer, may just have pipped us all to the G&T with a measured and considered defence of his post over at oer-discuss. While his e-mail provides some much needed context to the original article, particularly in terms of clarifying the specific type of educational institutions and usage scenarios he is referring to, many of the criticisms remain. It’s well worth reading Gerd’s response to the challenge here. Andy Lane has also written a very thoughtful and detailed critique of the article here which I can highly recommend.

inBloom to implement Learning Registry and LRMI
http://blogs.cetis.org.uk/lmc/2013/02/08/inbloom-to-implement-learning-registry-and-lrmi/
Fri, 08 Feb 2013 10:36:09 +0000

There have been a number of reports in the tech press this week about inBloom, a new technology integration initiative for the US schools’ sector launched by the Shared Learning Collective. inBloom is “a nonprofit provider of technology services aimed at connecting data, applications and people that work together to create better opportunities for students and educators,” and it’s backed by a cool $100 million of funding from the Carnegie Corporation and the Bill and Melinda Gates Foundation. In the press release, Iwan Streichenberger, CEO of inBloom Inc, is quoted as saying:

“Education technology and data need to work better together to fulfill their potential for students and teachers. Until now, tackling this problem has often been too expensive for states and districts, but inBloom is easing that burden and ushering in a new era of personalized learning.”

This initiative first came to my attention when Sheila circulated a TechCrunch article earlier in the week. Normally any article that quotes both Jeb Bush and Rupert Murdoch would have me running for the hills but Sheila is made of sterner stuff and dug a bit deeper to find the inBloom Learning Standards Alignment whitepaper. And this is where things get interesting, because inBloom incorporates two core technologies that CETIS has had considerable involvement with over the last while: the Learning Registry, and the Learning Resource Metadata Initiative, which Phil Barker has contributed to as co-author and Technical Working Group member.

I’m not going to attempt to summarise the entire technical architecture of inBloom, however the core components are:

  • Data Store: Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.
  • APIs: Provide authorized applications and school data systems with access to the Data Store.
  • Sandbox: A publicly-available testing version of the inBloom service where developers can test new applications with dummy data.
  • inBloom Index: Provides valuable data about learning resources and learning objectives to inBloom-compatible applications.
  • Optional Starter Apps: A handful of apps to get educators, content developers and system administrators started with inBloom, including a basic dashboard and data and content management tools.

Of the above components, it’s the inBloom index that is of most interest to me, as it appears to be a service built on top of a dedicated inBloom Learning Registry node, which in turn connects to the Learning Registry more widely as illustrated below.

[Figure: inBloom Learning Resource Advertisement and Discovery]

According to the Standards Alignment whitepaper, the inBloom index will work as follows (Apologies for long techy quote, it’s interesting, I promise you!):

The inBloom Index establishes a link between applications and learning resources by storing and cataloging resource descriptions, allowing the described resources to be located quickly by the users who seek them, based in part on the resources’ alignment with learning standards. (Note, in this context, learning standards refers to curriculum standards such as the Common Core.)

inBloom’s Learning Registry participant node listens to assertions published to the Learning Registry network, consolidating them in the inBloom Index for easy access by applications. The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources. This information enables applications to discover the content most relevant to their users.

Content descriptions are introduced into the Learning Registry via “announcement” messages sent through a publishing node. Learning Registry nodes, including inBloom’s Learning Registry participant node, may keep the published learning resource descriptions in local data stores, for later recall. The registry will include metadata such as resource locations, LRMI-specified classification tags, and activity-related tags, as described in Section 3.1.

The inBloom Index has an API, called the Learning Object Dereferencing Service, which is used by inBloom technology-compatible applications to search for and retrieve learning object descriptions (of both objectives and resources). This interface provides a powerful vocabulary that supports expression of either precise or broad search parameters. It allows applications, and therefore users, to find resources that are most appropriate within a given context or expected usage.

inBloom’s Learning Registry participant node is peered with other Learning Registry nodes so that it can receive resource description publications, and filters out announcements received from the network that are not relevant.

In addition, it is expected that some inBloom technology-compatible applications, depending on their intended functionality, will contribute information to the Learning Registry network as a whole, and therefore indirectly feed useful data back into the inBloom Index. In this capacity, such applications would require the use of the Learning Registry participant node.

One reason that this is so interesting is that this is exactly the way that the Learning Registry was designed to work. It was always intended that the Learning Registry would provide a layer of “plumbing” to allow the data to flow, education providers would push any kind of data into the Learning Registry network and developers would create services built on top of it to process and expose the data in ways that are meaningful to their stakeholders. Phil and I have both written a number of blog posts on the potential of this approach for dealing with messy educational content data, but one of our reservations has been that this approach has never been tested at scale. If inBloom succeeds in implementing their proposed technical architecture it should address these reservations, however I can’t help noticing that, to some extent, this model is predicated on there being an existing network of Learning Registry nodes populated with a considerable volume of educational content data, and as far as I’m aware, that isn’t yet the case.
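
The division of labour described above, dumb plumbing underneath, smart services on top, can be sketched in a few lines. The envelope shapes below are invented illustrations of the heterogeneity a service has to cope with: an LRMI record, a Dublin Core record and a paradata statement all arriving through the same pipe, with the service doing its own filtering.

```python
# Invented sample of harvested envelopes in three different payload schemas,
# mimicking what a service consuming a Learning Registry node might receive.
harvested = [
    {"payload_schema": ["LRMI"],
     "resource_data": {"name": "Tides", "learningResourceType": "animation"}},
    {"payload_schema": ["oai_dc"],
     "resource_data": {"dc:title": "Tides (DC record)"}},
    {"payload_schema": ["LR Paradata 1.0"],
     "resource_data": {"activity": {"verb": {"action": "favorited"}}}},
]

def lrmi_titles(envelopes):
    """Pull a display title only from envelopes that claim an LRMI payload."""
    return [
        env["resource_data"].get("name", "(untitled)")
        for env in envelopes
        if "LRMI" in env.get("payload_schema", [])
    ]

titles = lrmi_titles(harvested)
```

Every schema the service wants to understand needs handling code like this, which is exactly the processing burden JLeRN flagged, and exactly what inBloom is trying to shift onto publishers by mandating properly tagged LRMI descriptions up front.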

I’m also rather curious about the whitepaper’s assertion that:

“The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources.”

While this is certainly true, it’s also rather contrary to one of the original goals of the Learning Registry, which was to be able to ingest data in any format, regardless of schema. Of course the result of this “anything goes” approach to data aggregation is that the bulk of the processing is pushed up to the services and applications layer. So any service built on top of the Learning Registry will have to do the bulk of the data processing to spit out meaningful information. The JLeRN Experiment at Mimas highlighted this as one of their concerns about the Learning Registry approach, so it’s interesting to note that inBloom appears to be pushing some of that processing, not down to the node level, but out to the data providers. I can understand why they are doing this, but it potentially means that they will lose some of the flexibility that the Learning Registry was designed to accommodate.

Another interesting aspect of the inBloom implementation is that the more detailed technical architecture in the voluminous Developer Documentation indicates that at least one component of the Data Store, the Persistent Database, will be running on MongoDB, as opposed to CouchDB which is used by the Learning Registry. Both are schema free databases but tbh I don’t know how their functionality varies.

[Figure: inBloom Technical Architecture]

In terms of the metadata, inBloom appears to be mandating the adoption of LRMI as their primary metadata schema.

When scaling up teams and tools to tag or re-tag content for alignment to the Common Core, state and local education agencies should require that LRMI-compatible tagging tools and structures be used, to ensure compatibility with the data and applications made available through the inBloom technology.

A profile of the Learning Registry paradata specification will also be adopted but as far as I can make out this has not yet been developed.

It is important to note that while the Paradata Specification provides a framework for expressing usage information, it may not specify a standardized set of actors or verbs, or inBloom.org may produce a set that falls short of enabling inBloom’s most compelling use cases. inBloom will produce guidelines for expression of additional properties, or tags, which fulfill its users’ needs, and will specify how such metadata and paradata will conform to the LRMI and Learning Registry standards, as well as to other relevant or necessary content description standards.

All very interesting. I suspect with the volume of Gates and Carnegie funding backing inBloom, we’ll be hearing a lot more about this development and, although it may have no direct impact on the UK F/HE sector, it is going to be very interesting to see whether the technologies inBloom adopts, and the Learning Registry in particular, can really work at scale.

PS I haven’t had a look at the parts of the inBloom spec that cover assessment but Wilbert has noted that it seems to be “a straight competitor to the Assessment Interoperability Framework that the Obama administration Race To The Top projects are supposed to be building now…”

The great UKOER tag debate
http://blogs.cetis.org.uk/lmc/2012/11/14/the-great-ukoer-tag-debate/
Wed, 14 Nov 2012 17:37:41 +0000

After three years of innovation focused on the sustainable release of open educational resources, the JISC HEA UK OER Programme is drawing to a close and yesterday Martin and I went along to the final programme meeting in London. Phil wasn’t able to attend the meeting and instead posted the following e-mail to the oer-discuss mailing list:

Hello all, I can’t be in London today, so I’m kind of joining the end of programme discussion from afar. The last three years have been great. At one of the early planning meetings someone (Andy Powell, I think) said that one measure of whether the programme was successful could be the widespread recognition of UKOER / OER as an idea within UK F&HE and the existence of a community around it. I’m pretty sure that has happened, not just because of UKOER but we were there and helped. So well done all of us :)

But what now? The programme has always aimed at sustainable release of resources, change of culture and practice, not just a short burst of activity leading to a one-off dumping of resources. What will happen over the next few years by way of sustained release and which practices are sustainable? Also, of course, from a CETIS point of view, what technologies can help?

Happy diwali, keep the OER light shining.

Phil’s mail prompted Nick Sheppard to ask the apparently innocent question:

Possibly a silly question…but I should stop tagging new resources ukoer?!

This seemingly innocuous enquiry prompted the kind of mailing list explosion normally only seen on a Friday afternoon, and it wasn’t long before the discussion had its own Twitter tag: #oergate. I haven’t counted the number of replies but if the thread has reached double figures it wouldn’t surprise me. If you’re feeling brave, you can read the whole thread here.

Some colleagues were all in favour of continuing to use the ukoer tag, arguing that it now represents an active community, which is powerful evidence of the sustainability of the funded programmes’ legacy. Others argued that continued use of the tag would muddy the waters for collection managers and make it difficult to identify resources produced through the funded phase of the programme.

Amber has now managed to capture the discussion in an excellent blog post UKOER: What’s in a tag?*. Although there is no conclusive consensus as to how to answer Nick’s original question, one thing that this discussion has clearly demonstrated is that there does appear to be a lively and active community that has grown up around the funded programmes and the ukoer tag, and that definitely has to be a good thing!

*Amber’s blog post was written with input from Sarah Currier (Jorum), David Kernohan (JISC), Martin Hawksey (CETIS), Lorna Campbell (CETIS), Jackie Carter (Jorum).

ETA It now appears that the #oergate debate borked JISCmail! It seems that the list exceeded posting limits or some such, and no further comments were posted to the list after 15.10 on Wednesday afternoon. I’m delighted to say that I got the last word in ;)

The Learning Registry at #cetis12
http://blogs.cetis.org.uk/lmc/2012/03/09/the-learning-registry-at-cetis12/
Fri, 09 Mar 2012 09:18:29 +0000

Usually after our annual CETIS conference we each write a blog post that attempts to summarise each session and distil three hours of wide ranging discussion into a succinct synthesis and analysis. This year however Phil and I have been extremely fortunate as Sarah Currier of the JLeRN Experiment has done the job for us! Over at the JLeRN Experiment blog Sarah has written a detailed and thought provoking summary of the Learning Registry: Capturing Conversations About Learning Resources session. Rather than attempting to replicate Sarah’s excellent write up we’re just going to point you over there, so here it is: The Learning Registry and JLeRN at the CETIS Conference: Report and Reflections. Job done!

Well, not quite. Phil and I do have one or two thoughts and reflections on the session. There still seems to be growing interest and enthusiasm in the UK ed tech community (if such a thing exists) for both the Learning Registry development in the US and the JLeRN Experiment at Mimas. However in some instances the interest and expectations are a little way ahead of the actual projects themselves. So it perhaps bears repeating at this stage that the Learning Registry is still very much under development. As a result the technical documentation may be a little raw, and although tools are starting to be developed, it may not be immediately obvious where to find them or how they fit together. Having said that, there is a small but growing pool of keen developers working and experimenting with the Learning Registry, so expertise is growing.

That cautionary note aside, one of the really interesting things about the Learning Registry is that people are already coming up with a wide range of potential use cases. As Sarah’s conference summary shows, we had Terry McAndrew of TechDis suggesting that Learning Registry nodes could be used for capturing accessibility data about resources; Scott Wilson of CETIS and the University of Bolton thought the LR would be useful for sharing user ratings between distributed widget stores; a group from the Open University of Catalunya were interested in the possibility of using the LR as a decentralised way of sharing LTI information; and Suzanne Hardy of the University of Newcastle was keen to see what might happen if Dynamic Learning Maps data was fed into an LR node.

Paradata is a topic that also appears to get people rather overexcited. Some people, me included, are enthusiastic about the potential ability to capture all kinds of activity data about how teachers and learners use and interact with resources. Others seem inclined to write paradata off as unnecessary coinage. “Why bother to develop yet another metadata standard?” is a question I’ve already heard a couple of times. Bearing this in mind it was very useful to have Learning Registry developer Walt Grata over from the US to remind us that although there is indeed a Learning Registry paradata specification, it is not mandated, and that users can express their data any way they want, as long as it’s a string and as long as it’s JSON.
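To illustrate that flexibility, here is a small Python sketch contrasting two equally acceptable shapes of paradata; the field names and values are hypothetical examples of ours, not anything mandated by the specification:

```python
import json

# Activity Streams-style paradata (hypothetical values): actor / verb / object
structured = {
    "activity": {
        "actor": {"objectType": "teacher"},
        "verb": {"action": "taught"},
        "object": {"id": "http://example.org/oer/orbital-mechanics"},
        "content": "Used with a first-year physics class",
    }
}

# Free-form paradata: any JSON the contributor finds useful is equally acceptable
freeform = {
    "resource": "http://example.org/oer/orbital-mechanics",
    "note": "worked well on an interactive whiteboard",
}

# Both round-trip cleanly through a JSON string, which is all the network asks for
for doc in (structured, freeform):
    assert json.loads(json.dumps(doc)) == doc
```

The point is that the structured form buys interoperability between services, while the free-form one lowers the barrier to contributing anything at all.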

We’re aware that the JLeRN Experiment were hoping to get a strong steer from the conference session as to where they should go next and I had hoped to round off this post with a few ideas that Phil and I had prioritised out of the many discussed. However Phil and I have completely failed to come to any kind of agreement on this so that will have to be another blog post for another day!

Finally we’d like to thank all those who contributed to the Learning Registry session at CETIS12 and in particular our speakers; Stephen Cook, Sarah Currier, Walt Grata, Bharti Gupta, Pat Lockley, Terry McAndrew, Nick Syrotiuk and Scott Wilson. Many thanks also to Dan Rehak for providing his slides and for allowing Phil to impersonate him!

JLeRN Hackday – Issues Identified http://blogs.cetis.org.uk/lmc/2012/02/01/jlern-hackday-issues-identified/ http://blogs.cetis.org.uk/lmc/2012/02/01/jlern-hackday-issues-identified/#comments Wed, 01 Feb 2012 19:38:09 +0000 http://blogs.cetis.org.uk/lmc/?p=533 Last week I went to the hackday organised by the JLeRN team and CETIS to kick off Mimas’ JLeRN Experiment. If you haven’t come across JLeRN before, it’s a JISC funded exploratory project to build an experimental Learning Registry node. The event, which was organised by JLeRN’s Sarah Currier and CETIS’ dear departed John Robertson, brought a small but enthusiastic bunch of developers together to discuss how they might use and interact with the JLeRN test node and the Learning Registry more generally.

One of the aims of the day was to attempt to scope some use cases for the JLeRN Experiment, while the technical developers discussed the implementation of the node and explored potential development projects. We didn’t exactly come up with use cases per se, but we did discuss a wide range of issues. JLeRN are limited in what they can do by the relatively short timescale of the project, so the list below represents issues we would like to see addressed in the longer term.

Accessibility

The Learning Registry (LR) could provide a valuable opportunity to gather accessibility stories. For example it could enable a partially-sighted user to find resources that had been used by other partially-sighted users. But accessibility information is complex: how could it be captured and fed into the LR? Is this really a user profiling issue? If so, what are the implications for data privacy? If you are recording usage data you need to notify users what you are doing.

Capturing, Inputting and Accessing Paradata

We need to consider how systems generate paradata, how that information can be captured and fed back to the LR. The Dynamic Learning Maps curricular mapping system generates huge amounts of data from each course; this could be a valuable source of paradata. Course blogs can also generate more subjective paradata.

A desktop widget or browser plugin with a simple interface, that captures information about users, resources, content, context of use, etc would be very useful. Users need simplified services to get data in and out of the LR.

Once systems can input paradata, what will they get back from the LR? We need to produce concrete use cases that demonstrate what users can do with the paradata they generate and input. And we need to start defining the structure of the paradata for various use cases.

There are good reasons why the concept of “actor” has been kept simple in the LR spec but we may need to have a closer look at the relationship between actors and paradata.

De-duplication is going to become a serious issue and it’s unclear how this will be addressed. Data will need to be normalised. Will the Learning Registry team in the US deal with the big global problems of de-duplication and identifiers? This would leave developers to deal with smaller issues. If the de-duplication issue were sorted it would be easy to write server-side JavaScript.
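As a sketch of what that normalisation might involve (the rules below are our own assumptions, not anything the Learning Registry mandates), trivially different URLs for the same resource can be collapsed to a single key before comparison:

```python
from urllib.parse import urlsplit

def normalise(url: str) -> str:
    """Reduce a resource URL to a canonical key: lower-case the scheme and
    host, drop fragments, default ports and trailing slashes."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower().removesuffix(":80")
    path = parts.path.rstrip("/") or "/"
    return f"{parts.scheme.lower()}://{host}{path}"

def dedupe(urls):
    """Keep one representative URL per normalised key, preserving order."""
    seen = {}
    for url in urls:
        seen.setdefault(normalise(url), url)
    return list(seen.values())

urls = [
    "http://Example.org/oer/1/",
    "http://example.org:80/oer/1#top",
    "http://example.org/oer/2",
]
print(dedupe(urls))  # two distinct resources survive
```

Real de-duplication would of course also have to cope with mirrored copies and alternative identifiers, which is exactly the harder, global problem flagged above.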

Setting Up and Running a Node

It’s difficult for developers to find the information they need in order to set up a node as it tends to be buried in the LR mailing lists. The relevant information isn’t easily accessible at present. The “20 minute” guides are simple to read but complex to implement. It’s also difficult to find the tools that already exist. Developers and users need simple tools and services and simplified APIs for brokerage services.

Is it likely that HE users will want to build their own nodes? What is the business model for running a node? Running a node is a cost. Institutions are unlikely to be able to capitalise on running a node, however they could capitalise by building services on top of the node. Nodes run as services are likely to be a more attractive option.

Suggestions for JISC

It would be very useful if JISC funded a series of simple tools to get data into and out of JLeRN. Something similar to the SWORD demonstrators would be helpful.

Fund a tool aimed at learning technologists and launch it at ALT-C for delegates to take back to their institutions and use.

A simple “accessibility like” button would be a good idea. This could possibly be a challenge for the forthcoming DevEd event.

Nodes essentially have to be sustainable services but the current funding model doesn’t allow for that. Funding tends to focus on innovation rather than sustainable services. Six months is not really long enough for JLeRN to show what can really be done. Three years would be better.

With thanks to…

Sarah Currier (MIMAS), Suzanne Hardy (University of Newcastle), Terry McAndrew (University of Leeds), Julian Tenney (University of Nottingham), Scott Wilson (University of Bolton).

The JLeRN Experiment http://blogs.cetis.org.uk/lmc/2012/01/13/the-jlern-experiment/ http://blogs.cetis.org.uk/lmc/2012/01/13/the-jlern-experiment/#comments Fri, 13 Jan 2012 17:00:36 +0000 http://blogs.cetis.org.uk/lmc/?p=487 Towards the end of last year we reported that JISC had approved funding for the development of an experimental Learning Registry node here in the UK, the first node of its kind to be developed outwith the US. The JLeRN Experiment, which is being undertaken by Mimas at the University of Manchester, with input from CETIS and JISC, launched in early December. The JLeRN team is being led by Sarah Currier with the technical development being undertaken by Nick Syrotiuk and Bharti Gupta.

JLeRN / UK Contributors Learning Registry Hackday

The aim of this proof of concept project is to explore the practicalities of configuring and running a Learning Registry node and of getting data in and out of the network. The team are actively seeking any technical developers who would like to experiment with the node and, in order to facilitate this collaboration, CETIS and JLeRN are hosting a technical development day in Manchester on the 23rd of January. This event is aimed at developers contributing (or intending to contribute) data to the Learning Registry or hoping to build services based on the data it provides access to.

If you are interested in attending this event, you can register here. If you’re hoping to come along please also add a note to this Google Doc about what you’re doing, or hoping to do, and any of the issues you’ve encountered so far. If you can’t come along but are interested, please comment / leave a note as well.

JLeRN Blog

The JLeRN Experiment team have a blog (jlernexperiment.wordpress.com) up and running which they will use to disseminate regular progress reports, or as Sarah explained:

“to share all of our adventures, mis-steps, solutions, and creative ideas while working on the Learning Registry. It’s open notebook science in action!”

And the team have already been as good as their word. Nick has written a post on the Node of Mimas, a test node he installed on “a spare machine (he) had lying around” along with samples of the JSON documents the node outputs to illustrate what Learning Registry data looks like. And Bharti has posted a note on Some more exploring… which mentions the challenges of establishing a test node on a Windows Server 2008 machine and issues with getting Nginx set up correctly.
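If you want to poke at a node yourself, a minimal Python sketch along these lines may help; the `/status` endpoint path is recalled from the early node documentation and the sample fields are illustrative, so check both against your node:

```python
import json
from urllib.request import urlopen

def node_status(base_url: str) -> dict:
    """Fetch the JSON status document a Learning Registry node exposes.
    The /status path is an assumption from the early docs; adjust as needed."""
    with urlopen(f"{base_url.rstrip('/')}/status") as resp:
        return json.load(resp)

# Offline illustration of the kind of JSON envelope a node emits (sample values)
sample = json.loads(
    '{"doc_type": "resource_data",'
    ' "resource_locator": "http://example.org/oer/1",'
    ' "payload_placement": "inline"}'
)
print(sample["resource_locator"])  # → http://example.org/oer/1
```

Pointing `node_status` at a running test node such as Nick’s would return the same sort of JSON that his blog post reproduces.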

In parallel with the JLeRN experiment, CETIS will also continue to maintain a watching brief on the Learning Registry initiative in the US and will post updates of relevant developments on the CETIS blogs, so watch this space!

UKOER 3 Technical Reflections http://blogs.cetis.org.uk/lmc/2011/11/24/ukoer-3-technical-reflections/ http://blogs.cetis.org.uk/lmc/2011/11/24/ukoer-3-technical-reflections/#comments Thu, 24 Nov 2011 16:28:43 +0000 http://blogs.cetis.org.uk/lmc/?p=474 The Technical Requirements for the JISC / HEA OER 3 Programme remain unchanged from those established for UKOER 2. These requirements can be referred to here: OER 2 Technical Requirements. However, many projects now have considerable experience and we would anticipate that they will engage with some of the ongoing technical challenges in the resource sharing and description domains.

We still don’t mandate content standards; however, given the number of projects in this phase that are releasing ebooks we would anticipate seeing a number of projects using ePub. We would be interested in:

  1. Your experiences of putting dynamic content into ePub format (e.g. animations, videos)
  2. Your investigations of workflows to create/ publish multiple ebook formats at once, and of content management systems that support this.

Points to reflect on:

  • Resources should be self-described, i.e. should have something like a title page (at the front) or credits page (at the back) that clearly states information such as author, origin, licence, title. For some purposes (e.g. search engine optimisation) this is preferable to encoded metadata hidden away in the file (e.g. EXIF metadata embedded in an image) or as a detached record on a repository page or separate XML file. Note such a human-readable title or credits page could be marked up as machine-readable metadata using RDFa / microformats / microdata (see schema.org).
  • Feeds. We also encourage projects to disseminate metadata through, e.g. RSS, ATOM or OAI-PMH. “The RSS / ATOM feed should list and describe the resources produced by the project, and should itself be easy to find.” It would be useful to provide descriptions of all the OERs released by a project in this way, not just the most recent. Projects should consider how this can be achieved even if they release large numbers of OERs.
  • For auditing and showcasing reasons it is really useful to be able to identify the resources that have been released through the projects in this programme. The required project tag is an element in this, but platforms that allow the creation of collections can also be used.
  • Activity data and paradata. Projects should consider what ‘secondary’ information they have access to about the use of / or interest in their resources, and how they can capture and share such information.
  • Tracking OER use and reuse. We don’t understand why projects aren’t worrying more about this.
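By way of illustration of the self-description point above, a human-readable credits block for a hypothetical resource could carry schema.org microdata like this (the resource details are invented and the property names should be checked against the current schema.org vocabulary):

```html
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="name">Introduction to Orbital Mechanics</h1>
  <p>By <span itemprop="author">A. N. Example</span>,
     <span itemprop="sourceOrganization">Example University</span>.</p>
  <p>Licensed under <a itemprop="license"
     href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a>.</p>
</div>
```

The same page remains perfectly readable to a human, while search engines and aggregators can extract the author, origin and licence mechanically.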
JISC Learning Registry Node Experiment http://blogs.cetis.org.uk/lmc/2011/11/07/jisc-learningreg-node/ http://blogs.cetis.org.uk/lmc/2011/11/07/jisc-learningreg-node/#comments Mon, 07 Nov 2011 09:32:47 +0000 http://blogs.cetis.org.uk/lmc/?p=463 Over the last decade the volume and range of educational content available on the Internet has grown exponentially, boosted by the recent proliferation of open educational resources. While search engines such as Google have made it easier to discover all kinds of content, one critical factor is missing where educational resources are concerned – context. Whether you are a teacher, learner or content provider, when it comes to discovering and using educational resources, context is key. Search engines may help you to find educational resources but they will tell you little of how those resources have been used, by whom, in what context and with which outcome.

Formal educational metadata standards have gone some way to addressing this problem, but it has proved to be extremely difficult to capture the educational characteristics of resources and the nuances of educational context within the constraints of a formal metadata standard. Indeed it is notoriously difficult to formally describe what a learning resource is, never mind how and by whom it may be used. Despite the not inconsiderable effort that has gone into the development of formal metadata standards, data models, bindings, application profiles and crosswalks the ability to quickly and easily find educational resources that match a specific educational context, competency level or pedagogic style has remained something of a holy grail.

A new approach to this problem is currently being explored by the Learning Registry, an innovative project being led and funded by the U.S. Department of Education and U.S. Department of Defense. In a guest blog post for CETIS in March this year ADL Senior Technical Advisor Dan Rehak explained that the Learning Registry intends to offer an alternative approach to learning resource discovery, sharing and usage tracking by prioritising sharing of second-party usage data and analytics over first party metadata.

Dan set out the Learning Registry’s use case as follows:

“Let’s assume you found several animations on orbital mechanics. Can you tell which of these are right for your students (without having to preview each)? Is there any information about who else has used them and how effective they were? How can you provide your feedback about the resources you used, both to other teachers and to the organizations that published or curated them? Is there any way to aggregate this feedback to improve discoverability?

The Learning Registry is defining and building an infrastructure to help answer these questions. It provides a means for anyone to ‘publish’ information about learning resources. Beyond metadata and descriptions, this information includes usage data, feedback, rankings, likes, etc.; we call this ‘paradata’.”

Paradata is essentially a stream of activity data about a learning resource that effectively provides a dynamic timeline of how that resource has been used. As more usage data is collaboratively gathered and published the paradata timeline grows and evolves, amplifying the available knowledge about what educational resources are effective in which learning contexts. The Learning Registry team refer to this approach as “social networking for metadata”.
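To make ‘publishing’ information about a resource concrete, here is a minimal sketch in Python of the kind of JSON envelope a client might submit to a node’s publish service. The field names and version string follow our recollection of the draft spec and are illustrative only; check them against the current Learning Registry documentation.

```python
import json

# Hypothetical resource-data document carrying a fragment of paradata
doc = {
    "doc_type": "resource_data",
    "doc_version": "0.23.0",  # illustrative version string
    "resource_data_type": "paradata",
    "resource_locator": "http://example.org/oer/orbital-mechanics",
    "payload_placement": "inline",
    "resource_data": {
        "activity": {"verb": {"action": "rated"}, "measure": {"value": 4}}
    },
}

# The publish service accepts a batch of documents in a single envelope
envelope = json.dumps({"documents": [doc]})
print(len(json.loads(envelope)["documents"]))  # → 1
```

Each such document adds one event to the resource’s timeline; the aggregate of many of them is the “social networking for metadata” described above.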

The Learning Registry itself is not a search engine, a repository, or a registry in the conventional sense. Instead the project aims to produce a core transport network infrastructure and will rely on the community to develop their own discovery tools and services, such as search engines, community portals and recommender systems, on top of this infrastructure. Dan commented: “We assume some smart people will do some interesting (and unanticipated) things with the timeline data stream.”

The Learning Registry infrastructure is built on CouchDB, a NoSQL-style “document-oriented database” providing a RESTful JSON API. The initial Learning Registry development implementation, or node, is available as an Amazon Machine Instance, hosted on Amazon EC2. This enables anyone to set up their own node on the Amazon cloud quickly and easily. As CouchDB is a cross-platform application, nodes can be run on most systems (e.g. Windows, Mac, Linux). The Learning Registry team plan to produce zero-config installers to simplify the process of adding nodes to the network with the aim that developers should be able to set up their own node within a day. These nodes will form a decentralised network with each participant configuring their own rules regarding access permissions and what data they gather and share.

Although the Learning Registry will encourage users to produce their own tools and services on top of the network of nodes, the development team have defined a small set of non-core APIs for integration with existing edge services, e.g. SWORD for repository publishing and OAI-PMH for harvesting from the network to local stores.

A key feature of the Learning Registry is that it is metadata agnostic; it will accept legacy metadata in any format and will not attempt to harmonise the metadata it consumes. The team have also developed a specification for sharing and exchanging paradata which is inspired by the Activity Streams format.

As a leading innovator in digital infrastructure for resource discovery JISC have followed the development of the Learning Registry with interest, and in keeping with our remit as a JISC Innovation Support Centre CETIS have fostered a strategic working relationship with the Learning Registry team. In addition to maintaining a watching brief on the project, participating in the technical development working group, and submitting position papers to the Learning Registry summit, CETIS have also liaised directly with the project’s developers and technical advisor and communicated relevant strategic and technical developments back to JISC and the community. The Learning Registry team have also engaged closely with the JISC, CETIS and the UK technical development community by participating in two DevCSI hackdays, contributing to several CETIS events, and attending a number of JISC strategic planning meetings.

JISC have now extended this innovative collaboration with the announcement that they will fund the development of a Learning Registry test node, the first to be developed outwith the US. The node will be developed at MIMAS with input and support from JISC CETIS.

In a press release JISC’s Amber Thomas commented,

“This international collaboration will see us contributing the UK’s expertise to the Learning Registry. We are working with Mimas and JISC Cetis to support the Registry’s vision of gathering together the conversations, ratings, recommendations and usage data around digital content.”

And Steve Midgley, Deputy Director, Office of Education Technology at the US Department of Education added,

“I am greatly encouraged by the collaboration and opportunity presented by our work with JISC on the Learning Registry.”

The Learning Registry project has already generated considerable interest in the UK. We believe that technical developers, infrastructure managers and resource providers will have much to learn from the JISC Learning Registry test node development and we hope that ultimately educational communities in both the US and the UK will benefit from this innovative project.

