Last week I went to the hackday organised by the JLeRN team and CETIS to kick off Mimas’ JLeRN Experiment. If you haven’t come across JLeRN before, it’s a JISC-funded exploratory project to build an experimental Learning Registry node. The event, which was organised by JLeRN’s Sarah Currier and CETIS’ dear departed John Robertson, brought a small but enthusiastic bunch of developers together to discuss how they might use and interact with the JLeRN test node and the Learning Registry more generally.
One of the aims of the day was to scope some use cases for the JLeRN Experiment while the technical developers discussed the implementation of the node and explored potential development projects. We didn’t come up with use cases per se, but we did discuss a wide range of issues. JLeRN are limited in what they can do by the relatively short timescale of the project, so the list below represents issues we would like to see addressed in the longer term.
Accessibility
The Learning Registry (LR) could provide a valuable opportunity to gather accessibility stories. For example, it could enable a partially-sighted user to find resources that had been used by other partially-sighted users. But accessibility information is complex: how could it be captured and fed into the LR? Is this really a user profiling issue? If so, what are the implications for data privacy? If you are recording usage data, you need to notify users of what you are doing.
Capturing, Inputting and Accessing Paradata
We need to consider how systems generate paradata, how that information can be captured and fed back to the LR. The Dynamic Learning Maps curricular mapping system generates huge amounts of data from each course; this could be a valuable source of paradata. Course blogs can also generate more subjective paradata.
A desktop widget or browser plugin with a simple interface that captures information about users, resources, content, context of use, etc. would be very useful. Users need simplified services to get data into and out of the LR.
Once systems can input paradata, what will they get back from the LR? We need to produce concrete use cases that demonstrate what users can do with the paradata they generate and input. And we need to start defining the structure of the paradata for various use cases.
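To make the “structure of the paradata” question concrete, here is a minimal sketch of building and serialising a Learning Registry-style document with an inline Activity Streams-like paradata payload. The field names follow the published LR resource-data envelope and paradata examples as I understand them; the submitter, actor and resource values are purely illustrative, and a real submission would need a live node.

```python
import json

def build_paradata_doc(resource_url, verb, actor_desc):
    """Build a Learning Registry-style envelope carrying an inline
    paradata payload. Field names follow published LR examples;
    all values here are illustrative."""
    activity = {
        "activity": {
            "actor": {"objectType": "agent", "description": [actor_desc]},
            "verb": {"action": verb},
            "object": {"id": resource_url},
        }
    }
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",
        "resource_data_type": "paradata",
        "active": True,
        "identity": {"submitter": "example submitter",
                     "submitter_type": "agent"},
        "resource_locator": resource_url,
        "payload_placement": "inline",
        "payload_schema": ["LR Paradata 1.0"],
        "resource_data": activity,
    }

doc = build_paradata_doc("http://example.org/resource/1", "used", "teacher")
publish_body = json.dumps({"documents": [doc]})
# A real client would POST publish_body to a node's /publish service;
# here we just show the serialised document.
print(publish_body[:80])
```

A widget or plugin of the kind described above would essentially be a friendly front end to a function like this: the user supplies the resource and the context of use, and the tool handles the envelope.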
There are good reasons why the concept of “actor” has been kept simple in the LR spec but we may need to have a closer look at the relationship between actors and paradata.
De-duplication is going to become a serious issue, and it’s unclear how it will be addressed. Data will need to be normalised. Will the Learning Registry team in the US deal with the big global problems of de-duplication and identifiers? That would leave developers free to deal with smaller issues. If de-duplication were sorted, it would be easy to write server-side JavaScript.
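As a small illustration of why normalisation matters here, the sketch below reduces resource URLs to a naive comparison key so that trivially different identifiers collapse together. The normalisation rules are my own assumptions for illustration, not anything the LR specifies, and real de-duplication would need far more than this.

```python
from urllib.parse import urlsplit

def dedup_key(url):
    """Reduce a resource URL to a naive de-duplication key:
    lower-case the scheme and host, drop default ports, and strip
    any trailing slash. Deliberately simplistic."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower()
    for default in (":80", ":443"):
        if host.endswith(default):
            host = host[: -len(default)]
    path = parts.path.rstrip("/") or "/"
    return (parts.scheme.lower(), host, path, parts.query)

a = dedup_key("HTTP://Example.org:80/resource/1/")
b = dedup_key("http://example.org/resource/1")
print(a == b)  # the two variants collapse to the same key
```

Even this toy version shows why the problem is hard: deciding which variations are “the same resource” is a policy question as much as a technical one.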
Setting Up and Running a Node
It’s difficult for developers to find the information they need to set up a node, as it tends to be buried in the LR mailing lists and isn’t easily accessible at present. The “20 minute” guides are simple to read but complex to implement. It’s also difficult to find the tools that already exist. Developers and users need simple tools and services, and simplified APIs for brokerage services.
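As an example of the kind of simplified tooling meant here, a thin wrapper can hide a node’s raw read API behind one function call. This minimal sketch builds a request URL for the LR “obtain” service using the `request_ID` and `by_doc_ID` parameters as I recall them from the spec; the node address is a placeholder, and the actual fetch is left commented out since the test node may not be reachable.

```python
from urllib.parse import urlencode

NODE = "http://lr-node.example.org"  # placeholder node address

def obtain_url(node, resource_locator=None):
    """Build a URL for the LR 'obtain' service, which returns the
    documents a node holds (optionally filtered to one resource)."""
    params = {"by_doc_ID": "false"}
    if resource_locator:
        params["request_ID"] = resource_locator
    return "%s/obtain?%s" % (node, urlencode(params))

url = obtain_url(NODE, "http://example.org/resource/1")
print(url)
# Fetching it against a live node would be, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(url).read()
```

The point is less the three lines of code than the packaging: a learning technologist should never have to read the mailing list to discover that this endpoint exists.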
Is it likely that HE users will want to build their own nodes? What is the business model for running a node? Running a node is a cost: institutions are unlikely to be able to capitalise on running a node itself, but they could capitalise by building services on top of it. Nodes run as services are likely to be a more attractive option.
Suggestions for JISC
It would be very useful if JISC funded a series of simple tools to get data into and out of JLeRN. Something similar to the SWORD demonstrators would be helpful.
Fund a tool aimed at learning technologists and launch it at ALT-C for delegates to take back to their institutions and use.
A simple accessibility “like” button would be a good idea. This could possibly be a challenge for the forthcoming DevEd event.
Nodes essentially have to be sustainable services but the current funding model doesn’t allow for that. Funding tends to focus on innovation rather than sustainable services. Six months is not really long enough for JLeRN to show what can really be done. Three years would be better.
With thanks to…
Sarah Currier (MIMAS), Suzanne Hardy (University of Newcastle), Terry McAndrew (University of Leeds), Julian Tenney (University of Nottingham), Scott Wilson (University of Bolton).
If it would be very useful to have “a series of simple tools to get data into and out of JLeRN”, why should JISC fund it? Why shouldn’t the people who would find those tools useful just go and build them, or, even better, the people who would see value realised from operating JLeRN?
Valid point Tony. Perhaps some people might have good ideas for tools and services but lack the resources to build them without JISC support? For the record, I hope people will just go ahead and start building tools, with or without JISC funding!
I think the issue is not one of funding tools, but of funding infrastructure.
JISC could fund the infrastructure directly, in which case developers will likely produce tools that take advantage of it. This would probably be the most effective approach. However, if the infrastructure is unproven in terms of the benefits to the sector, then why should JISC take the risk?
Conversely, if the infrastructure isn’t likely to be sustained beyond the short term, why would you risk building tools for it?
So the purpose of funding tools would, I think, be to build the business case for supporting the shared infrastructure, reducing the risk on both sides.
Personally I’d much rather JISC took a risk on investing for the longer term in the infrastructure, but that’s not my call to make.
Very neatly summarised Scott. JISC are understandably wary of funding unproven infrastructure technologies and services. But what incentive is there for developers and users to invest time and resources in infrastructure that is unlikely to be sustained? There’s no easy solution. I think you’re right though, any approach that helps to mitigate the risk on either side has to be a positive step in the right direction.
Good stuff here – seems you had an interesting and useful event.
Some thoughts:
For accessibility, can this just be paradata — someone states that a resource is good for a certain type of accessibility? Likewise, someone states that a resource was used successfully in a learning situation that required a certain type of accessibility. The goal here would be to have someone other than the individual students make the statements, which would avoid some of the privacy issues and make the data more anonymous.
The useful experiment, simple enough that someone in the community could just do it, would be to explore how (or whether) accessibility information can be captured in paradata.
I’d be interested to see how the dynamic learning maps can be used.
And also what desktop widgets would look like. Pat Lockley has been working on a WordPress plugin with others, and the CaPReT work might be examples.
And lastly, data normalization is something that the LR does NOT plan to do — just the opposite. We believe that different forms need to exist, and that normalization is too hard. We’ve tried to normalize and harmonize for years with no success — let’s just try to deal with the messy data, duplicates, no consistent ids, …
I generally find the LR guides easy to follow, but they are time consuming.
Pingback: The Hackday: Report and Reflections « The JLeRN Experiment