Lorna Campbell » learning registry | Cetis Blog | http://blogs.cetis.org.uk/lmc

inBloom to implement Learning Registry and LRMI
Fri, 08 Feb 2013
http://blogs.cetis.org.uk/lmc/2013/02/08/inbloom-to-implement-learning-registry-and-lrmi/

There have been a number of reports in the tech press this week about inBloom, a new technology integration initiative for the US schools sector launched by the Shared Learning Collective. inBloom is “a nonprofit provider of technology services aimed at connecting data, applications and people that work together to create better opportunities for students and educators,” and it’s backed by a cool $100 million of funding from the Carnegie Corporation and the Bill and Melinda Gates Foundation. In the press release, Iwan Streichenberger, CEO of inBloom Inc, is quoted as saying:

“Education technology and data need to work better together to fulfill their potential for students and teachers. Until now, tackling this problem has often been too expensive for states and districts, but inBloom is easing that burden and ushering in a new era of personalized learning.”

This initiative first came to my attention when Sheila circulated a TechCrunch article earlier in the week. Normally any article that quotes both Jeb Bush and Rupert Murdoch would have me running for the hills, but Sheila is made of sterner stuff and dug a bit deeper to find the inBloom Learning Standards Alignment whitepaper. And this is where things get interesting, because inBloom incorporates two core technologies that CETIS has had considerable involvement with over the last few years: the Learning Registry, and the Learning Resource Metadata Initiative (LRMI), which Phil Barker has contributed to as co-author and Technical Working Group member.

I’m not going to attempt to summarise the entire technical architecture of inBloom; however, the core components are:

  • Data Store: Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.
  • APIs: Provide authorized applications and school data systems with access to the Data Store.
  • Sandbox: A publicly-available testing version of the inBloom service where developers can test new applications with dummy data.
  • inBloom Index: Provides valuable data about learning resources and learning objectives to inBloom-compatible applications.
  • Optional Starter Apps: A handful of apps to get educators, content developers and system administrators started with inBloom, including a basic dashboard and data and content management tools.

Of the above components, it’s the inBloom Index that is of most interest to me, as it appears to be a service built on top of a dedicated inBloom Learning Registry node, which in turn connects to the wider Learning Registry network, as illustrated below.

inBloom Learning Resource Advertisement and Discovery


According to the Standards Alignment whitepaper, the inBloom Index will work as follows (apologies for the long techy quote, it’s interesting, I promise you!):

The inBloom Index establishes a link between applications and learning resources by storing and cataloging resource descriptions, allowing the described resources to be located quickly by the users who seek them, based in part on the resources’ alignment with learning standards. (Note, in this context, learning standards refers to curriculum standards such as the Common Core.)

inBloom’s Learning Registry participant node listens to assertions published to the Learning Registry network, consolidating them in the inBloom Index for easy access by applications. The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources. This information enables applications to discover the content most relevant to their users.

Content descriptions are introduced into the Learning Registry via “announcement” messages sent through a publishing node. Learning Registry nodes, including inBloom’s Learning Registry participant node, may keep the published learning resource descriptions in local data stores, for later recall. The registry will include metadata such as resource locations, LRMI-specified classification tags, and activity-related tags, as described in Section 3.1.

The inBloom Index has an API, called the Learning Object Dereferencing Service, which is used by inBloom technology-compatible applications to search for and retrieve learning object descriptions (of both objectives and resources). This interface provides a powerful vocabulary that supports expression of either precise or broad search parameters. It allows applications, and therefore users, to find resources that are most appropriate within a given context or expected usage.

inBloom’s Learning Registry participant node is peered with other Learning Registry nodes so that it can receive resource description publications, and filters out announcements received from the network that are not relevant.

In addition, it is expected that some inBloom technology-compatible applications, depending on their intended functionality, will contribute information to the Learning Registry network as a whole, and therefore indirectly feed useful data back into the inBloom Index. In this capacity, such applications would require the use of the Learning Registry participant node.
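For readers who haven’t seen the Learning Registry publish mechanism in action, here’s a rough sketch of what one of those “announcement” messages might look like. This is a minimal Python sketch based on my reading of the Learning Registry resource data envelope spec; the node URL, resource and submitter are entirely made up.

```python
import json
import urllib.request

# A minimal Learning Registry "resource data" envelope, roughly following
# the 0.23.0 document spec; all values here are illustrative only.
envelope = {
    "doc_type": "resource_data",
    "doc_version": "0.23.0",
    "resource_data_type": "metadata",
    "active": True,
    "TOS": {"submission_TOS": "http://www.learningregistry.org/tos/cc0/v0-5/"},
    "identity": {"submitter": "Example Publisher", "submitter_type": "agent"},
    "resource_locator": "http://example.org/resources/fractions-lesson",
    "keys": ["mathematics", "fractions", "common-core"],
    "payload_placement": "inline",
    "payload_schema": ["LRMI"],
    "resource_data": {"name": "Introducing Fractions",
                      "learningResourceType": "lesson plan"},
}

# Announcements are POSTed to a publishing node's /publish endpoint,
# wrapped in a "documents" list. (The node URL is hypothetical.)
body = json.dumps({"documents": [envelope]}).encode("utf-8")
request = urllib.request.Request(
    "http://node.example.org/publish", data=body,
    headers={"Content-Type": "application/json"})
# urllib.request.urlopen(request)  # uncomment to actually publish to a node
```

The point being that publishing is deliberately lightweight: any provider who can build a JSON envelope and POST it can get descriptions into the network.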

One reason that this is so interesting is that this is exactly the way the Learning Registry was designed to work. It was always intended that the Learning Registry would provide a layer of “plumbing” to allow the data to flow: education providers would push any kind of data into the Learning Registry network, and developers would create services built on top of it to process and expose the data in ways that are meaningful to their stakeholders. Phil and I have both written a number of blog posts on the potential of this approach for dealing with messy educational content data, but one of our reservations has been that it has never been tested at scale. If inBloom succeeds in implementing its proposed technical architecture it should address these reservations. However, I can’t help noticing that, to some extent, this model is predicated on there being an existing network of Learning Registry nodes populated with a considerable volume of educational content data, and as far as I’m aware, that isn’t yet the case.

I’m also rather curious about the whitepaper’s assertion that:

“The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources.”

While this is certainly true, it’s also rather contrary to one of the original goals of the Learning Registry, which was to be able to ingest data in any format, regardless of schema. Of course, the result of this “anything goes” approach to data aggregation is that the bulk of the processing is pushed up to the services and applications layer: any service built on top of the Learning Registry has to do most of the data processing to spit out meaningful information. The JLeRN Experiment at Mimas highlighted this as one of their concerns about the Learning Registry approach, so it’s interesting to note that inBloom appears to be pushing some of that processing, not down to the node level, but out to the data providers. I can understand why they are doing this, but it potentially means that they will lose some of the flexibility that the Learning Registry was designed to accommodate.

Another interesting aspect of the inBloom implementation is that the more detailed technical architecture in the voluminous Developer Documentation indicates that at least one component of the Data Store, the Persistent Database, will run on MongoDB, as opposed to the CouchDB used by the Learning Registry. Both are schema-free document databases, but to be honest I don’t know how their functionality compares.
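For what it’s worth, both stores happily hold the same JSON documents; the visible difference is mostly in how you query them. A toy sketch, no server required and with entirely made-up data:

```python
# Both CouchDB and MongoDB store schema-free JSON documents, so the same
# resource description fits either. The practical difference is in querying:
# CouchDB uses HTTP plus precomputed map/reduce views, MongoDB a richer
# ad-hoc query language. Purely illustrative.
doc = {"_id": "resource-123",
       "resource_locator": "http://example.org/resources/fractions-lesson",
       "keys": ["mathematics", "fractions"]}

# CouchDB-style view: a map function (normally JavaScript) emitting keys.
couchdb_view = """
function (doc) {
  for (var i = 0; i < doc.keys.length; i++) emit(doc.keys[i], doc._id);
}
"""

# MongoDB-style ad-hoc query for the same lookup.
mongodb_query = {"keys": "fractions"}

# A toy in-memory stand-in for either database's matching behaviour:
def matches(document, query):
    return all(v in document.get(k, []) or document.get(k) == v
               for k, v in query.items())

print(matches(doc, mongodb_query))  # True
```

Either way, the schema-free model is what lets a node accept whatever envelopes arrive without migrations.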

inBloom Technical Architecture


In terms of the metadata, inBloom appears to be mandating the adoption of LRMI as their primary metadata schema.

When scaling up teams and tools to tag or re-tag content for alignment to the Common Core, state and local education agencies should require that LRMI-compatible tagging tools and structures be used, to ensure compatibility with the data and applications made available through the inBloom technology.
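To make that concrete, here’s a rough sketch of what an LRMI description with a Common Core alignment might look like, expressed as schema.org JSON-LD built in Python. The resource name and URL are made up; the property names come from the LRMI specification.

```python
import json

# A sketch of an LRMI resource description with a Common Core alignment.
# LRMI adds properties such as learningResourceType, typicalAgeRange and
# educationalAlignment to schema.org's CreativeWork.
lrmi_record = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Introducing Fractions",
    "url": "http://example.org/resources/fractions-lesson",
    "learningResourceType": "lesson plan",
    "typicalAgeRange": "8-10",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "teaches",
        "educationalFramework": "Common Core State Standards",
        "targetName": "CCSS.Math.Content.3.NF.A.1",
    },
}

print(json.dumps(lrmi_record, indent=2))
```

It’s the AlignmentObject, pointing at a named node in a curriculum framework, that lets inBloom applications retrieve resources by learning standard.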

A profile of the Learning Registry paradata specification will also be adopted, but as far as I can make out this has not yet been developed.

It is important to note that while the Paradata Specification provides a framework for expressing usage information, it may not specify a standardized set of actors or verbs, or inBloom.org may produce a set that falls short of enabling inBloom’s most compelling use cases. inBloom will produce guidelines for expression of additional properties, or tags, which fulfill its users’ needs, and will specify how such metadata and paradata will conform to the LRMI and Learning Registry standards, as well as to other relevant or necessary content description standards.
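For illustration, a paradata “activity” statement of the actor/verb/object shape alluded to above might look something like the following sketch. The vocabulary here (“teacher”, “taught”) is made up, and is exactly the sort of thing an inBloom profile would need to standardise.

```python
import json

# A sketch of a Learning Registry paradata "activity" statement: who did
# what with which resource, how often. All values are illustrative.
paradata = {
    "activity": {
        "actor": {"objectType": "teacher",
                  "description": ["3rd grade", "mathematics"]},
        "verb": {"action": "taught",
                 "date": "2013-01-15",
                 "measure": {"measureType": "count", "value": 1}},
        "object": {"id": "http://example.org/resources/fractions-lesson"},
    }
}

print(json.dumps(paradata, indent=2))
```

Without an agreed set of actors and verbs, two applications can emit statements like this that are syntactically valid but mutually unintelligible, which is presumably what the whitepaper is worrying about.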

All very interesting. I suspect that with the volume of Gates and Carnegie funding backing inBloom, we’ll be hearing a lot more about this development, and although it may have no direct impact on the UK F/HE sector, it is going to be very interesting to see whether the technologies inBloom adopts, and the Learning Registry in particular, can really work at scale.

PS I haven’t had a look at the parts of the inBloom spec that cover assessment but Wilbert has noted that it seems to be “a straight competitor to the Assessment Interoperability Framework that the Obama administration Race To The Top projects are supposed to be building now…”

JLeRN Experiment Final Meeting
Wed, 24 Oct 2012
http://blogs.cetis.org.uk/lmc/2012/10/24/jlern-experiment-final-meeting/

Earlier this week I went to the final meeting of the JLeRN Experiment project, which CETIS has been supporting over the last year. The aim of the event was to reflect on the project and to provide project partners with an opportunity to present and discuss their engagement with JLeRN and the Learning Registry.

JLeRN project manager Sarah Currier and developer Nick Syrotiuk opened the meeting by recapping the project’s progress and some of the issues they encountered. Nick explained that setting up a Learning Registry node had been relatively straightforward and that publishing data to the node was quite easy. The project had been unable to experiment with setting up a node in the cloud due to limitations within the university’s funding and procurement structures (Amber Thomas noted that this was a common finding of other JISC-funded cloud service projects); however, all the JLeRN node data is synchronised with iriscouch.com, a free CouchDB service in the cloud. Although getting data into the node is simple, there was no easy way to see what was in the node, so Nick built a Node Explorer tool, based on the LR slice API, which is now available on GitHub.
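For the curious, a tool like the Node Explorer essentially builds requests against a node’s slice endpoint, which returns documents filtered by tag and date range. A minimal sketch, assuming the documented slice parameters (any_tags, from, until); the node URL is made up.

```python
import json
import urllib.parse
import urllib.request

def slice_url(node, any_tags=None, from_date=None, until_date=None):
    """Build a slice request URL for a Learning Registry node."""
    params = {}
    if any_tags:
        params["any_tags"] = ",".join(any_tags)
    if from_date:
        params["from"] = from_date
    if until_date:
        params["until"] = until_date
    return node.rstrip("/") + "/slice?" + urllib.parse.urlencode(params)

url = slice_url("http://node.example.org", any_tags=["mathematics"],
                from_date="2012-01-01")
print(url)
# To actually fetch (requires a live node):
# with urllib.request.urlopen(url) as response:
#     documents = json.loads(response.read())["documents"]
```

Which rather underlines Nick’s point: the hard part isn’t talking to the node, it’s making sense of whatever heterogeneous documents come back.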

Sarah also explained that the project had been unable to explore moving data between nodes and exploiting node networks and communities as there are currently very few Learning Registry nodes in existence. Sarah noted that while there had been considerable initial interest in both the Learning Registry and JLeRN, and quite a few projects and institutions had expressed an interest in getting involved, very few had actually engaged, apart from the JISC funded OER Rapid Innovation projects. Sarah attributed this lack of engagement to limited capacity and resources across the sector and also to the steep learning curve required to get involved. There had also been relatively little interest from the development community, beyond one or two enthusiastic and innovative individuals, such as Pat Lockley, and again Sarah attributed this to lack of skills and capacity. However she noted that although the Learning Registry is still relatively immature and remains to be tried and tested, there is still considerable interest in the technology and approaches adopted by the project to solve the problems of educational resource description and discovery.

“If we are to close the gap between the strategic enthusiasm for the potential wins of the Learning Registry, and the small-scale use case and prototype testing phase we are in, we will need a big push backed by a clear understanding that we will be walking into some of the same minefields we’ve trodden in, cyclically, for the past however many decades. And it is by no means clear yet that the will is there, in the community or at the strategic level.”

In order to gauge the appetite for further work in this area, JLeRN have commissioned a short report from David Kay of Sero Consulting to explore the potential affordances of JLeRN and the Learning Registry architecture and conceptual approach, within the broader information environment.

Following Sarah and Nick’s introduction, Phil Barker presented an update on the status and future of the Learning Registry initiative in the US, which I’ll leave him to blog about :) The rest of the meeting was taken up with presentations from a range of projects and individuals that had engaged with JLeRN and the Learning Registry. I’m not even going to attempt to summarise the afternoon’s discussions, which were lively and wide ranging and covered everything from triple stores to the Tin Can API to chocolate coloured mini dresses and back again! You can read about some of these projects on the JLeRN blog.

It’s worth highlighting a few points though…

Pat Lockley’s Pgogy tools gave a glimpse of the kind of innovative Learning Registry tools that can be built by a creative developer with a commitment to openness. Pat also gave a thought-provoking presentation on how the nature of the Learning Registry offers a greater role for developers than most current repository ecosystems do, as the scope of the services that can be built is considerably richer. In his own blog post on the meeting Pat suggested:

“Also, perhaps, it is a developer’s repository as it is more “open”, and sharing and openness are now a more explicit part of developer culture than they are with repositories?”

Reflecting on the experience of the Sharing Paradata Across Widget Stores (SPAWS) project, Scott Wilson reported that using the LR node had worked well for them. SPAWS had a fairly straightforward remit: build a system for syndicating data between widget stores. In this particular use case the data in question was relatively simple and standardised. The project team liked the fact that the node was designed for high-volume use, though they did foresee longer-term issues with scaling up and download size; the APIs were fairly good, and the Activity Streams approach was a good fit for the project. Scott acknowledged that there were other solutions the project could have adopted, but they would have been more time consuming and costly; after all, “What’s not to like about a free archival database?!” Scott also added that the Learning Registry could have potential application to sharing data between software forges.

Another area where the Learning Registry approach is likely to be of particular benefit is the medicine, dentistry and veterinary medicine domains, where curricula and learning outcomes are clearly mapped. Suzanne Hardy and James Outterside from the University of Newcastle presented a comprehensive use case from the RIDLR project, which built on the work of the Dynamic Learning Maps and FavOERites projects. Suzanne noted that there is huge appetite in the medical education sector for the idea of JLeRN-type services.

Owen Stephens made a valuable contribution to discussions throughout the day by asking particularly insightful and incisive questions about what projects had really gained by working with the Learning Registry rather than adopting other approaches, such as those employed in the wider information management sector. I’m not sure how effectively we managed to answer Owen’s questions, but there was a general feeling that the Learning Registry’s open approach to dealing with messy educational data somehow fitted better with the ethos of the teaching and learning sector.

One issue that surfaced repeatedly throughout the day was the fact that Learning Registry nodes are still rather thin on the ground: although there are several development nodes in existence, of which JLeRN is one, there is still only one production node, maintained by the Learning Registry development team in the US. As a result it has not been possible to test the capabilities and affordances of networked nodes, and the potential network-scale benefits of the Learning Registry approach remain unproven.

Regardless of these reservations, it was clear from the breadth and depth of the discussions at the meeting that there is indeed a will in some sectors of the HE community to continue exploring the Learning Registry and the technical approaches it has adopted. Personally, while I can see the real benefit of the Learning Registry to the US schools sector, I am unsure how much traction it is likely to gain in the UK F/HE domain at this point in time. Having said that, I think the technical approaches developed by the Learning Registry will have considerable impact on our thinking and approach to the messy problem of learning resource description and management.

For further thinky thoughts on the Learning Registry and the JLeRN experiment, I can highly recommend Amber Thomas’s blog post: Applying a new approach to an old problem.
