Lorna Campbell » metadata
Cetis Blog – http://blogs.cetis.org.uk/lmc

New Activity Data and Paradata Briefing Paper
http://blogs.cetis.org.uk/lmc/2013/05/01/new-activity-data-and-paradata-briefing-paper/
Wed, 01 May 2013

Cetis have published a new briefing paper on Activity Data and Paradata. The paper presents a concise overview of a range of approaches and specifications for recording and exchanging data generated by the interactions of users with resources.

Such data is a form of Activity Data, which can be defined as “the record of any user action that can be logged on a computer”. Meaning can be derived from Activity Data by querying it to reveal patterns and context; this process is often referred to as Analytics. Activity Data can be shared as an Activity Stream, a list of recent activities performed by an individual. Activity Streams are often specific to a particular platform or application, e.g. Facebook; however, initiatives such as OpenSocial, ActivityStreams and the Tin Can API have produced specifications and APIs for sharing Activity Data across platforms and applications.
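
To give a flavour of what an Activity Stream entry looks like, here is a minimal sketch in the style of the Activity Streams 1.0 JSON serialisation: a single activity with an actor, a verb and an object. The identifiers and display names are invented for illustration.

```python
import json

# A minimal activity in the Activity Streams 1.0 style: someone shares
# a resource. All identifiers below are invented for illustration.
activity = {
    "published": "2013-05-01T14:53:08Z",
    "actor": {
        "objectType": "person",
        "id": "http://example.org/users/lmc",
        "displayName": "Lorna",
    },
    "verb": "share",
    "object": {
        "objectType": "article",
        "id": "http://example.org/resources/briefing-paper",
        "displayName": "Activity Data and Paradata Briefing Paper",
    },
}

print(json.dumps(activity, indent=2))
```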

While Activity Streams record the actions of individual users and their interactions with multiple resources and services, other specifications have been developed to record the actions of multiple users on individual resources. This data about how and in what context resources are used is often referred to as Paradata. Paradata complements formal metadata by providing an additional layer of contextual information about how resources are being used. A specification for recording and exchanging paradata has been developed by the Learning Registry, an open source content-distribution network for storing and sharing information about learning resources.
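
For comparison with the activity example above, here is a sketch of a paradata assertion: an aggregate statement about how one resource has been used. The shape (actor / verb / object with a measure) follows the Learning Registry's published paradata examples, but the values are invented and the exact vocabulary should be checked against the specification itself.

```python
import json

# A paradata-style assertion: an aggregate statement about how one
# resource has been used, rather than a log of a single user's action.
# Field names follow the Learning Registry's draft paradata examples;
# all values are invented for illustration.
paradata = {
    "activity": {
        "actor": {"objectType": "teacher", "description": ["biology"]},
        "verb": {
            "action": "taught",
            "date": "2013-01-01/2013-03-31",
            "measure": {"measureType": "count", "value": 14},
        },
        "object": "http://example.org/resources/cell-division",
    }
}

print(json.dumps(paradata, indent=2))
```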

The briefing paper provides an overview of each of these approaches and specifications along with examples of implementations and links to further information.

The Cetis Activity Data and Paradata briefing paper written by Lorna M. Campbell and Phil Barker can be downloaded from the Cetis website here: http://publications.cetis.org.uk/2013/808

Another perspective on inBloom
http://blogs.cetis.org.uk/lmc/2013/03/05/another-perspective-on-inbloom/
Tue, 05 Mar 2013

Thanks to Pat Lockley for drawing my attention to Reuters' interesting take on inBloom, the US K-12 development that I blogged about a couple of weeks ago. You can find the article here: K-12 student database jazzes tech startups, spooks parents. Just in case you missed it, inBloom is a new technology integration initiative for the US schools' sector launched by the Shared Learning Collective and funded by the Carnegie Corporation and the Bill and Melinda Gates Foundation. One of the aims of inBloom is to create a:

Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.

I should confess that my interest in inBloom is purely on the technical side, as it builds on two core technologies that CETIS has had some involvement with: the Learning Registry and the Learning Resource Metadata Initiative. The Reuters article provides a rather different perspective on the development, however, describing the initiative as:

a $100 million database built to chart the academic paths of public school students from kindergarten through high school.

In operation just three months, the database already holds files on millions of children identified by name, address and sometimes social security number. Learning disabilities are documented, test scores recorded, attendance noted. In some cases, the database tracks student hobbies, career goals, attitudes toward school – even homework completion.

Local education officials retain legal control over their students’ information. But federal law allows them to share files in their portion of the database with private companies selling educational products and services.

When reported in these terms, it's easy to understand why some parents have raised concerns about the initiative. The report goes on to say:

Federal officials say the database project complies with privacy laws. Schools do not need parental consent to share student records with any “school official” who has a “legitimate educational interest,” according to the Department of Education. The department defines “school official” to include private companies hired by the school, so long as they use the data only for the purposes spelled out in their contracts.

The database also gives school administrators full control over student files, so they could choose to share test scores with a vendor but withhold social security numbers or disability records.

That’s hardly reassuring to many parents.

And for good measure they then quote a concerned parent saying:

“Once this information gets out there, it’s going to be abused. There’s no doubt in my mind.”

Parents from New York and Louisiana, and the Massachusetts chapters of the American Civil Liberties Union and the Parent-Teacher Association, have also written to state officials “in protest”, with the help of a civil liberties attorney in New York.

To be fair to Reuters, it's not all Fear, Uncertainty and Doubt; the article also puts forward some of the potential benefits of the development as well as the drawbacks and concerns. I certainly felt it was quite a balanced article that raised some valid issues.

It also clarified one issue that had rather puzzled me about TechCrunch's original report on inBloom, which quoted Rupert Murdoch as saying:

“When it comes to K-12 education, we see a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.”

At the time I couldn’t see the connection between inBloom and Rupert Murdoch, and TechCrunch didn’t make it explicit, however Reuters explains that the inBloom technical infrastructure was built by Amplify Education, a division of Rupert Murdoch’s News Corps. That explains that then.

Those of you who have been following the CETIS Analytics Series will be aware that such concerns about privacy, anonymity and large-scale data integration and analysis initiatives are nothing new; however, I thought this was an interesting example of the phenomenon.

It’s also worth adding that, as the parent of a primary school age child, it has never once occurred to me to enquire what kind of data the school records, who that data is shared with and in what form. To be honest I am pretty philosophical about these things. However it is interesting that people have a tendency not to ask questions about their data until a big / new / evil / transformative (delete according to preference) technology development like this comes along. So what do you think? Is it all FUD? Or is it time to get our tin hats out?

I’m still very interested to see if inBloom’s technical infrastructure and core technologies are up to the job, so I’ll continue to watch these developments with interest. And you never know, if my itchy nose gets the better of me I might even ask around to find out what happens to pupil data on this side of the pond.

Taking up the challenge…
http://blogs.cetis.org.uk/lmc/2013/02/28/taking-up-the-challenge/
Thu, 28 Feb 2013

Yesterday, David Kernohan challenged the ukoer community on the oer-discuss mailing list to write a blog post in response to a spectacularly wrongheaded Educause post titled:

Ten Years Later: Why Open Educational Resources Have Not Noticeably Affected Higher Education, and Why We Should Care

I had read the post the previous day and had already decided not to respond because tbh I just wouldn’t know where to begin.

However since David is offering “a large drink of the author's choice” as the prize for the best response, I have been persuaded to take up the challenge. Which just goes to show there's no better way to motivate folk than by offering drink. (Mine's a G&T David, or a red wine, possibly both, though not in the same glass.)

I am still at a loss to offer a serious critique of this article so in the best spirit of OER, I am going to recycle what everyone else has already said. Reuse FTW!

The article can basically be summarised as follows:

It’s 10 years since MIT launched OpenCourseWare. Since then OERs have FAILED because they have not transformed and disrupted higher education. List of reasons for their failure: discoverability, quality control, “The Last Mile”, acquisition. The solution to these problems is to build a “global enterprise-level system” aka a “supersized CMS”. And look, here’s one I built earlier! It’s called LON-CAPA.

PS. “The entity that provides the marketplace, the service, and the support and keeps the whole enterprise moving forward is probably best implemented as a traditional company.”

I should point out that I am not familiar with LON-CAPA. I’m sure it’s a very good system as far as it goes, but I don’t think a “global enterprise-level system” is the answer to anything.

David Kernohan himself was quick off the mark when the article first started circulating, after tweeting a couple of its finer points:

“OERs have not noticeably disrupted the traditional business model of higher education”

“It is naïve to believe that OERs can be free for everybody involved.”

He concluded:

So the basic message of that paper is “OER IS BROKEN” and “NEED MOAR USER DATA”. Lovely.

Because, clearly, if we can’t measure the impact of something it is valueless.

Which is indeed a good point. Actually I think there are many ways you can measure the impact of OER but I’m not at all convinced that “disrupting traditional business models” is the only valid measure of success. After all, OER is just content + open licence at the end of the day. And we can’t expect content alone to change the world, can we?

This is the point that Pat Lockley was getting at when he tweeted:

My Blog will be coming soon “Why OER haven’t affected the growth of grass”

Facetious perhaps, but a very pertinent point. There has been so much hyperbole surrounding OER from certain quarters of the media that it’s all too easy to say “Ha! It’s all just a waste of money. OER will never change the world.” Well no, maybe not, but most right-minded people never claimed it would. What we do have, though, is access to a lot more freely available (both gratis and libre), clearly licensed educational resources out there on the open web. Surely that can’t be a bad thing, can it? If nothing else, OER has increased educators’ awareness and understanding of the importance of clearly licensing the content they create and use, and that is definitely a good thing.

Pat also commented:

I’m just tired of OER being about “research into OER”. The cart is so far before the horse.

Which is another very valid point. I probably shouldn’t repeat Pat’s later tweet when he reached the end of the article and discovered that the author was pimping his own system. It involved axes and lumberjacking. Nuff said.

Jim Groom was similarly concise in his criticism:

“For content to be truly reusable and remixable, it needs to be context-free.” Problematic.

What’s the problem with OER ten years on? Metadata. Hmmm, maybe it is actually imagination, or lack thereof. #killoerdead

While I don’t always agree with Mr Groom, I certainly do agree that such a partial analysis lacks imagination.

As is so often the case, it was left to Amber Thomas to see past the superficial badness and wrongness of the article to get at the issues underneath.

“The right questions, patchy evidence base, wrong solutions. And I still think oer is a descriptor not a distinct content type.”

And as is also often the case, I agree with Amber wholeheartedly. There are actually many valid points lurking within this article but, honestly, it’s like the last ten years never happened. For example, discussing discoverability, which I agree can be problematic, the author suggests:

The solution for this problem could be surprisingly simple: dynamic metadata based on crowdsourcing. As educators identify and sequence content resources for their teaching venues, this information is stored alongside the resources, e.g., “this resource was used before this other resource in this context and in this course.” This usage-based dynamic metadata is gathered without any additional work for the educator or the author. The repository “learns” its content, and the next educator using the system gets recommendations based on other educators’ choices: “people who bought this also bought that.”

Yes! I agree!

Simple? No, currently impossible, because the deployment of a resource is usually disconnected from the repository: content is downloaded from a repository and uploaded into a course management system (CMS), where it is sequenced and deployed.

Erm…impossible? Really? Experimental maybe, difficult even, but impossible? No. Why no mention here of activity data, paradata, analytics? Like I said, it’s like the last ten years never happened.

Anyway I had better stop there before I say something unprofessional. One last comment though: Martin Hawksey pointed out this morning that there is not a single comment on the Educause website about this article, and asked:

Censorship? (That’s the danger of CMSs configured this way, someone else controls the information.)

I can’t comment on whether there has been censorship, but there has certainly been control. (Is there a difference? Discuss.) In order to comment on the Educause site you have to register, which I did yesterday afternoon and got a response informing me that it would take “several business hours” to approve my registration. I finally received the approval notification at nine o’clock at night, by which point I had better things to do with my time than comment on “global enterprise-level systems” and “supersized CMS”.

So there you have it David. Do I get that G&T?

ETA: The author of this article, Gerd Kortemeyer, may just have pipped us all to the G&T with a measured and considered defence of his post over at oer-discuss. While his e-mail provides some much needed context to the original article, particularly in terms of clarifying the specific type of educational institutions and usage scenarios he is referring to, many of the criticisms remain. It’s well worth reading Gerd’s response to the challenge here. Andy Lane has also written a very thoughtful and detailed critique of the article here, which I can highly recommend.

inBloom to implement Learning Registry and LRMI
http://blogs.cetis.org.uk/lmc/2013/02/08/inbloom-to-implement-learning-registry-and-lrmi/
Fri, 08 Feb 2013

There have been a number of reports in the tech press this week about inBloom, a new technology integration initiative for the US schools’ sector launched by the Shared Learning Collective. inBloom is “a nonprofit provider of technology services aimed at connecting data, applications and people that work together to create better opportunities for students and educators,” and it’s backed by a cool $100 million of funding from the Carnegie Corporation and the Bill and Melinda Gates Foundation. In the press release, Iwan Streichenberger, CEO of inBloom Inc, is quoted as saying:

“Education technology and data need to work better together to fulfill their potential for students and teachers. Until now, tackling this problem has often been too expensive for states and districts, but inBloom is easing that burden and ushering in a new era of personalized learning.”

This initiative first came to my attention when Sheila circulated a TechCrunch article earlier in the week. Normally any article that quotes both Jeb Bush and Rupert Murdoch would have me running for the hills, but Sheila is made of sterner stuff and dug a bit deeper to find the inBloom Learning Standards Alignment whitepaper. And this is where things get interesting, because inBloom incorporates two core technologies that CETIS has had considerable involvement with over the last while: the Learning Registry, and the Learning Resource Metadata Initiative, which Phil Barker has contributed to as co-author and Technical Working Group member.

I’m not going to attempt to summarise the entire technical architecture of inBloom; however, the core components are:

  • Data Store: Secure data management service that allows states and districts to bring together and manage student and school data and connect it to learning tools used in classrooms.
  • APIs: Provide authorized applications and school data systems with access to the Data Store.
  • Sandbox: A publicly-available testing version of the inBloom service where developers can test new applications with dummy data.
  • inBloom Index: Provides valuable data about learning resources and learning objectives to inBloom-compatible applications.
  • Optional Starter Apps: A handful of apps to get educators, content developers and system administrators started with inBloom, including a basic dashboard and data and content management tools.

Of the above components, it’s the inBloom Index that is of most interest to me, as it appears to be a service built on top of a dedicated inBloom Learning Registry node, which in turn connects to the Learning Registry more widely, as illustrated below.

[Figure: inBloom Learning Resource Advertisement and Discovery]

According to the Standards Alignment whitepaper, the inBloom Index will work as follows (apologies for the long techy quote; it’s interesting, I promise you!):

The inBloom Index establishes a link between applications and learning resources by storing and cataloging resource descriptions, allowing the described resources to be located quickly by the users who seek them, based in part on the resources’ alignment with learning standards. (Note, in this context, learning standards refers to curriculum standards such as the Common Core.)

inBloom’s Learning Registry participant node listens to assertions published to the Learning Registry network, consolidating them in the inBloom Index for easy access by applications. The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources. This information enables applications to discover the content most relevant to their users.

Content descriptions are introduced into the Learning Registry via “announcement” messages sent through a publishing node. Learning Registry nodes, including inBloom’s Learning Registry participant node, may keep the published learning resource descriptions in local data stores, for later recall. The registry will include metadata such as resource locations, LRMI-specified classification tags, and activity-related tags, as described in Section 3.1.

The inBloom Index has an API, called the Learning Object Dereferencing Service, which is used by inBloom technology-compatible applications to search for and retrieve learning object descriptions (of both objectives and resources). This interface provides a powerful vocabulary that supports expression of either precise or broad search parameters. It allows applications, and therefore users, to find resources that are most appropriate within a given context or expected usage.

inBloom’s Learning Registry participant node is peered with other Learning Registry nodes so that it can receive resource description publications, and filters out announcements received from the network that are not relevant.

In addition, it is expected that some inBloom technology-compatible applications, depending on their intended functionality, will contribute information to the Learning Registry network as a whole, and therefore indirectly feed useful data back into the inBloom Index. In this capacity, such applications would require the use of the Learning Registry participant node.

One reason that this is so interesting is that this is exactly the way that the Learning Registry was designed to work. It was always intended that the Learning Registry would provide a layer of “plumbing” to allow the data to flow: education providers would push any kind of data into the Learning Registry network, and developers would create services built on top of it to process and expose the data in ways that are meaningful to their stakeholders. Phil and I have both written a number of blog posts on the potential of this approach for dealing with messy educational content data, but one of our reservations has been that this approach has never been tested at scale. If inBloom succeeds in implementing their proposed technical architecture it should address these reservations; however, I can’t help noticing that, to some extent, this model is predicated on there being an existing network of Learning Registry nodes populated with a considerable volume of educational content data, and as far as I’m aware, that isn’t yet the case.
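
For readers who haven’t seen the “plumbing” in action, here is a minimal sketch of what pushing data into the network involves: posting a resource description envelope to a node’s publish service. The node URL, submitter identity and payload are all invented, and the envelope fields follow the public Learning Registry specification as I understand it, so treat the details as assumptions to verify.

```python
import requests

# Hypothetical Learning Registry node; a real deployment would use its own.
NODE = "http://node.example.org"

# A resource data description "envelope" of the general shape accepted
# by a node's publish service. Field names follow the Learning Registry
# spec as I understand it; all values are invented for illustration.
envelope = {
    "doc_type": "resource_data",
    "doc_version": "0.23.0",
    "resource_data_type": "metadata",
    "active": True,
    "TOS": {"submission_TOS": "http://www.learningregistry.org/tos"},
    "identity": {"submitter": "Example Project", "submitter_type": "agent"},
    "resource_locator": "http://example.org/resources/cell-division",
    "payload_placement": "inline",
    "payload_schema": ["LRMI"],
    "resource_data": {"name": "Cell Division",
                      "learningResourceType": "lesson"},
}

# The publish service takes a JSON body listing one or more documents.
response = requests.post(NODE + "/publish", json={"documents": [envelope]})
response.raise_for_status()
print(response.json())
```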

I’m also rather curious about the whitepaper’s assertion that:

“The usefulness of the information collected depends upon content publishers, who must populate the Learning Registry with properly formatted and accurately “tagged” descriptions of their available resources.”

While this is certainly true, it’s also rather contrary to one of the original goals of the Learning Registry, which was to be able to ingest data in any format, regardless of schema. Of course the result of this “anything goes” approach to data aggregation is that the bulk of the processing is pushed up to the services and applications layer: any service built on top of the Learning Registry has to do most of the data processing to spit out meaningful information. The JLeRN Experiment at Mimas highlighted this as one of their concerns about the Learning Registry approach, so it’s interesting to note that inBloom appears to be pushing some of that processing, not down to the node level, but out to the data providers. I can understand why they are doing this, but it potentially means that they will lose some of the flexibility that the Learning Registry was designed to accommodate.
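
To make that division of labour concrete, here is a sketch of the kind of triage an application-layer service has to perform over harvested documents when the registry itself guarantees nothing about format. The sample documents and the set of “known” schemas are invented for illustration.

```python
# Application-layer triage over harvested Learning Registry documents.
# Because the registry ingests data in any format, a consuming service
# must decide which documents it can actually interpret. The sample
# documents below are invented for illustration.
harvested = [
    {"resource_locator": "http://example.org/a",
     "payload_schema": ["LRMI"],
     "resource_data": {"name": "Resource A"}},
    {"resource_locator": "http://example.org/b",
     "payload_schema": ["oai_dc"],
     "resource_data": "<oai_dc:dc>...</oai_dc:dc>"},
    {"resource_locator": "http://example.org/c"},  # no schema declared
]

KNOWN_SCHEMAS = {"LRMI", "LOM", "oai_dc"}

def triage(docs):
    """Split documents into those we can process and those needing
    fallback handling (heuristics, manual review, or discard)."""
    usable, unknown = [], []
    for doc in docs:
        declared = set(doc.get("payload_schema", []))
        (usable if declared & KNOWN_SCHEMAS else unknown).append(doc)
    return usable, unknown

usable, unknown = triage(harvested)
print(len(usable), "usable;", len(unknown), "need fallback handling")
```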

Another interesting aspect of the inBloom implementation is that the more detailed technical architecture in the voluminous Developer Documentation indicates that at least one component of the Data Store, the Persistent Database, will be running on MongoDB, as opposed to CouchDB, which is used by the Learning Registry. Both are schema-free databases, but tbh I don’t know how their functionality compares.

[Figure: inBloom Technical Architecture]

In terms of the metadata, inBloom appears to be mandating the adoption of LRMI as its primary metadata schema:

When scaling up teams and tools to tag or re-tag content for alignment to the Common Core, state and local education agencies should require that LRMI-compatible tagging tools and structures be used, to ensure compatibility with the data and applications made available through the inBloom technology.
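
As a rough illustration of what LRMI-compatible tagging looks like in practice, the snippet below prints an HTML fragment marked up with schema.org microdata. The properties used (learningResourceType, typicalAgeRange and educationalAlignment with an AlignmentObject) are part of the LRMI vocabulary on schema.org’s CreativeWork; the resource itself and all the values are invented.

```python
# Prints an HTML fragment tagged with LRMI properties via schema.org
# microdata. The resource and its values are invented for illustration.
fragment = """\
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="name">Cell Division</h1>
  <meta itemprop="learningResourceType" content="lesson plan">
  <meta itemprop="typicalAgeRange" content="13-15">
  <div itemprop="educationalAlignment" itemscope
       itemtype="http://schema.org/AlignmentObject">
    <meta itemprop="alignmentType" content="teaches">
    <meta itemprop="targetName" content="Example.Curriculum.Standard">
  </div>
</div>
"""
print(fragment)
```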

A profile of the Learning Registry paradata specification will also be adopted but as far as I can make out this has not yet been developed.

It is important to note that while the Paradata Specification provides a framework for expressing usage information, it may not specify a standardized set of actors or verbs, or inBloom.org may produce a set that falls short of enabling inBloom’s most compelling use cases. inBloom will produce guidelines for expression of additional properties, or tags, which fulfill its users’ needs, and will specify how such metadata and paradata will conform to the LRMI and Learning Registry standards, as well as to other relevant or necessary content description standards.

All very interesting. I suspect that with the volume of Gates and Carnegie funding backing inBloom, we’ll be hearing a lot more about this development and, although it may have no direct impact on the UK FE/HE sector, it is going to be very interesting to see whether the technologies inBloom adopts, and the Learning Registry in particular, can really work at scale.

PS I haven’t had a look at the parts of the inBloom spec that cover assessment but Wilbert has noted that it seems to be “a straight competitor to the Assessment Interoperability Framework that the Obama administration Race To The Top projects are supposed to be building now…”

CC UK Guest Blog: Learning Resource Metadata Initiative
http://blogs.cetis.org.uk/lmc/2012/05/22/cc-uk-guest-blog-learning-resource-metadata-initiative/
Tue, 22 May 2012

CETIS’ very own Phil Barker has been guest blogging over at Creative Commons UK about the Learning Resource Metadata Initiative. Phil explains how:

For the last six months or so the Learning Resource Metadata Initiative has been working to help teachers and learners find educationally useful resources by creating a standard metadata framework for tagging resources and resource descriptions on the web. The initiative, which is jointly run by Creative Commons and the US Association of Educational Publishers, with funding from the Gates Foundation, is a response to an unprecedented opportunity. In June 2011 Microsoft Bing, Google and Yahoo announced their intent to publish a common format, schema.org, for marking-up web pages so that they may be efficiently and accurately indexed by their search engines.

You can read the rest of Phil’s guest blog here: Learning Resource Metadata Initiative by Phil Barker, JISC CETIS.

#cam12 Keynotes, backchannels and undercurrents
http://blogs.cetis.org.uk/lmc/2012/04/23/cam12-keynotes-backchannels-and-undercurrents/
Mon, 23 Apr 2012

A few thoughts from the OER 12 conference held in Cambridge last week. Sadly I wasn’t able to stay for the whole conference, but the first two days left me with plenty of food for thought. This year the event was held in conjunction with the annual OCWC conference, and as a result one of the themes to emerge was the convergence, or not, of top-down and bottom-up approaches to open educational resources and practices.

In the opening keynote Richardus Eko Indrajit, of the ABFI Institute Perbanas, Jakarta, outlined Indonesia’s impressively coordinated top-down approach to opening access to education and the adoption of open education practice. One of Indonesia’s more radical policies in this space is the use of Google ranking to measure the academic impact of scholarly works and research outputs.

The second keynote of the event, by Sir John Daniel, President of the Commonwealth of Learning, also focused on strategic top-down initiatives, and in particular UNESCO’s current Fostering Governmental Support for OER Internationally project. Sir John reported that 17 European nations have already responded to UNESCO’s survey about national policies and intentions regarding OER. Given three years of HEFCE funding for open educational resources, and the undoubted success of the JISC / HEA OER Programmes, it was disappointing to see that the UK did not appear on the list of respondents. This prompted some discussion on the backchannel about who would be the appropriate agency to formally respond to this questionnaire on behalf of the UK. Joe Wilson of SQA suggested that ALT might be an appropriate body to lobby for a response and wasted no time in contacting them directly. I’ll be interested to see if they, or anyone else, manage to provoke a response.

In the interests of practising what we preach, kudos goes to David Kernohan of JISC and Simon Thompson of Leeds Metropolitan University for politely challenging Sir John for not having any license or attribution information on the images he used in his presentation.

One hoary old issue that came up several times was whether there is any real evidence that open educational resources are actually being used, reused and re-shared. Despite some delegates providing pretty compelling evidence that open educational resources are indeed being used, others cited concerns over quality, sustainability and even potential loss of revenue as barriers to the release and adoption of open educational resources. When these objections were raised in the “Embed, don’t Bolt-on: promoting OER use in UK universities” panel on Tuesday afternoon I tweeted, slightly cynically:

@LornaMCampbell Quality, control, walled gardens, potential loss of income. #redherrings? #cam12

It appears that the issue was raised again the following day and was challenged by Patrick McAndrew (@openpad) of the Open University.

@dkernohan: quality, sustainabilty, reuse? Red herrings, says @openpad – challenges for all content not just OER #cam12

One very pertinent comment that was widely re-tweeted was that “many people are doing OER but they don’t call it that”. This prompted some discussion about the benefits of supporting and mainstreaming open practice rather than highlighting open practice as being different and distinct. During a very engaging chat over coffee, Emily Puckett Rodgers of the Open Michigan initiative mentioned that she tends not to use the term “OER” with faculty, as this suggests something new and different that they need to engage with; instead she prefers the more familiar term “learning materials”. Personally I’m inclined to agree with Emily: like all buzzwords, the term “OER” has accumulated an awful lot of baggage over the years, which may be less than helpful going forwards.

Another back channel conversation worth noting was the disappointing failure of the Scottish Government and Funding Council to engage with the open education agenda and open educational resource initiatives at a strategic level. This lack of engagement was highlighted again this week when Martin Hawksey noted that the Scottish Government’s new Professional Standards for Lecturers in Scotland’s Colleges document fails to make any mention of open educational resources or practices.

@mhawksey Shame open education/oer don’t get a mention in new Professional Standards for Lecturers in Scotland’s Colleges #ukoer

While there are certainly a few grass roots open education initiatives across the country, such as Edinburgh University’s eLearning@Ed Conference, Scotland clearly has some way to go when it comes to embedding the concept of openness in education.

From the back channel to the undercurrent….David Kernohan tweeted:

@dkernohan: MOOCs and DIYU – very much an undercurrent of nervousness at #cam12

Sadly I missed the final plenary session where I believe some of these undercurrents started to surface. However, Laura Czerniewicz of the University of Cape Town concluded her post-conference blog post, Open Education: part of the broader open scholarship terrain, with:

“I do think that there is beginning to be some fluidity and cross over, (such as the focus on open practices and the interest in the open education landscape at JISC), and this is great. Let’s consciously do more of this.”

An admirable goal for us all to aim for.

And lastly, two final highlights worthy of mention….

Nick Pearce’s Famous Monkeys

@drnickpearce Slides from my talk about developing students and staff as scavengers (hyenas!) http://slidesha.re/HNEiEb #cam12#famousmonkeys

And Guy Barrett and Jenny Gray’s fabulous poster :)

[Poster by Guy Barrett and Jenny Gray]

JLeRN Hackday – Issues Identified
http://blogs.cetis.org.uk/lmc/2012/02/01/jlern-hackday-issues-identified/
Wed, 01 Feb 2012

Last week I went to the hackday organised by the JLeRN team and CETIS to kick off Mimas’ JLeRN Experiment. If you haven’t come across JLeRN before, it’s a JISC-funded exploratory project to build an experimental Learning Registry node. The event, which was organised by JLeRN’s Sarah Currier and CETIS’ dear departed John Robertson, brought a small but enthusiastic bunch of developers together to discuss how they might use and interact with the JLeRN test node and the Learning Registry more generally.

One of the aims of the day was to attempt to scope some use cases for the JLeRN Experiment while the technical developers were discussing the implementation of the node and exploring potential development projects. We didn’t exactly come up with use cases per se, but we did discuss a wide range of issues. JLeRN are limited in what they can do by the relatively short timescale of the project, so the list below represents issues we would like to see addressed in the longer term.

Accessibility

The Learning Registry (LR) could provide a valuable opportunity to gather accessibility stories. For example, it could enable a partially-sighted user to find resources that had been used by other partially-sighted users. But accessibility information is complex: how could it be captured and fed into the LR? Is this really a user profiling issue? If so, what are the implications for data privacy? If you are recording usage data you need to notify users of what you are doing.

Capturing, Inputting and Accessing Paradata

We need to consider how systems generate paradata, and how that information can be captured and fed back to the LR. The Dynamic Learning Maps curricular mapping system generates huge amounts of data from each course; this could be a valuable source of paradata. Course blogs can also generate more subjective paradata.

A desktop widget or browser plugin with a simple interface that captures information about users, resources, content, context of use, etc. would be very useful. Users need simplified services to get data in and out of the LR.

Once systems can input paradata, what will they get back from the LR? We need to produce concrete use cases that demonstrate what users can do with the paradata they generate and input. And we need to start defining the structure of the paradata for various use cases.

There are good reasons why the concept of “actor” has been kept simple in the LR spec but we may need to have a closer look at the relationship between actors and paradata.

De-duplication is going to become a serious issue and it’s unclear how this will be addressed; data will need to be normalised. Will the Learning Registry team in the US deal with the big global problems of de-duplication and identifiers? This would leave developers to deal with smaller issues. If the de-duplication issue were sorted it would be easy to write server-side JavaScript.
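
As a rough sketch of the normalisation involved, the snippet below de-duplicates resource descriptions by canonicalising their locator URLs. Real de-duplication across the Learning Registry would be considerably harder (multiple identifiers, near-duplicate metadata), so this is only illustrative.

```python
from urllib.parse import urlsplit

def canonical_locator(url: str) -> str:
    """Normalise a resource locator: lowercase the scheme and host,
    drop queries/fragments and trailing slashes. A crude stand-in
    for real identifier reconciliation."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{path}"

def deduplicate(docs):
    """Keep the first document seen for each canonical locator."""
    seen = {}
    for doc in docs:
        seen.setdefault(canonical_locator(doc["resource_locator"]), doc)
    return list(seen.values())

docs = [
    {"resource_locator": "http://Example.org/resources/cell-division/"},
    {"resource_locator": "http://example.org/resources/cell-division"},
    {"resource_locator": "http://example.org/resources/photosynthesis"},
]
print(len(deduplicate(docs)))  # prints 2
```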

Setting Up and Running a Node

It’s difficult for developers to find the information they need in order to set up a node, as it tends to be buried in the LR mailing lists and isn’t easily accessible at present. The “20 minute” guides are simple to read but complex to implement. It’s also difficult to find the tools that already exist. Developers and users need simple tools and services, and simplified APIs for brokerage services.

Is it likely that HE users will want to build their own nodes? What is the business model for running a node? Running a node is a cost. Institutions are unlikely to be able to capitalise on running a node, however they could capitalise by building services on top of the node. Nodes run as services are likely to be a more attractive option.

Suggestions for JISC

It would be very useful if JISC funded a series of simple tools to get data into and out of JLeRN. Something similar to the SWORD demonstrators would be helpful.

Fund a tool aimed at learning technologists and launch it at ALT-C for delegates to take back to their institutions and use.

A simple “accessibility like” button would be a good idea. This could possibly be a challenge for the forthcoming DevEd event.

Nodes essentially have to be sustainable services but the current funding model doesn’t allow for that. Funding tends to focus on innovation rather than sustainable services. Six months is not really long enough for JLeRN to show what can really be done. Three years would be better.

With thanks to…

Sarah Currier (MIMAS), Suzanne Hardy (University of Newcastle), Terry McAndrew (University of Leeds), Julian Tenney (University of Nottingham), Scott Wilson (University of Bolton).

UKOER 3 Technical Reflections
http://blogs.cetis.org.uk/lmc/2011/11/24/ukoer-3-technical-reflections/
Thu, 24 Nov 2011

The Technical Requirements for the JISC / HEA OER 3 Programme remain unchanged from those established for UKOER 2, and can be referred to here: OER 2 Technical Requirements. However, many projects now have considerable experience, and we would anticipate that they will engage with some of the ongoing technical challenges in the resource sharing and description domains.

We still don’t mandate content standards; however, given the number of projects in this phase that are releasing ebooks, we anticipate seeing a number of them using ePub. We would be interested in:

  1. Your experiences of putting dynamic content into ePub format (e.g. animations, videos)
  2. Your investigations of workflows to create / publish multiple ebook formats at once, and of content management systems that support this (see the sketch below).
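
As a rough sketch of such a workflow, the snippet below converts a single ePub source into several other formats using Calibre’s ebook-convert command-line tool, which is assumed to be installed and on the PATH; the source file name is invented.

```python
import subprocess
from pathlib import Path

# One source, many formats: shell out to Calibre's ebook-convert tool.
# The output file's extension determines the target format.
SOURCE = Path("open-textbook.epub")  # hypothetical source file
TARGET_FORMATS = ["mobi", "pdf", "azw3"]

for fmt in TARGET_FORMATS:
    output = SOURCE.with_suffix("." + fmt)
    subprocess.run(["ebook-convert", str(SOURCE), str(output)], check=True)
    print("wrote", output)
```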

Points to reflect on:

  • Resources should be self-described, i.e. should have something like a title page (at the front) or credits page (at the back) that clearly states information such as author, origin, licence and title. For some purposes (e.g. search engine optimisation) this is preferable to encoded metadata hidden away in the file (e.g. EXIF metadata embedded in an image) or as a detached record on a repository page or separate XML file. Note that such a human-readable title or credits page could be marked up as machine-readable metadata using RDFa / microformats / microdata (see schema.org); a sketch follows this list.
  • Feeds. We also encourage projects to disseminate metadata through, e.g., RSS, Atom or OAI-PMH. “The RSS / ATOM feed should list and describe the resources produced by the project, and should itself be easy to find.” It would be useful to provide descriptions of all the OERs released by a project in this way, not just the most recent. Projects should consider how this can be achieved even if they release large numbers of OERs.
  • For auditing and showcasing reasons it is really useful to be able to identify the resources that have been released through the projects in this programme. The required project tag is an element in this, but platforms that allow the creation of collections can also be used.
  • Activity data and paradata. Projects should consider what ‘secondary’ information they have access to about the use of / or interest in their resources, and how they can capture and share such information.
  • Tracking OER use and reuse. We don’t understand why projects aren’t worrying more about this.
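
As a rough sketch of the self-description point above, here is what a human-readable credits page that doubles as machine-readable metadata might look like; name, author and license are standard schema.org CreativeWork properties, and the resource details are invented.

```python
# Prints a human-readable credits block that doubles as machine-readable
# metadata via schema.org microdata. The resource details are invented.
credits_block = """\
<div itemscope itemtype="http://schema.org/CreativeWork">
  <p>Title: <span itemprop="name">Introduction to Photosynthesis</span></p>
  <p>Author: <span itemprop="author">A. N. Educator</span></p>
  <p>Licence: <a itemprop="license"
     href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a></p>
</div>
"""
print(credits_block)
```
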
JISC CETIS OER Technical Mini-Projects Proposals and Discussion
http://blogs.cetis.org.uk/lmc/2011/04/14/jisc-cetis-oer-technical-mini-projects-proposals-and-discussion/
Thu, 14 Apr 2011

The bids are in for the JISC CETIS OER Technical Mini Projects and there’s a lively discussion going on over at oer-discuss@jiscmail.ac.uk.

We’ve taken a new approach to the Technical Mini Projects call that builds on rapid innovation funding models already employed by the JISC. Interested parties were asked to submit short 1,500-word proposals to the mailing list so that all bids can be openly discussed by members of the CETIS OER Technical Interest Group and anyone else that happens to be interested.

Four diverse proposals were submitted for the open strand of the call, though unfortunately we didn’t get any bids for the other two strands (more about that later…). The open strand bids are as follows:

1. Development of Visual Vocabulary Management Tools from Dr Ian Piper, Tellura Information Services Ltd
2. OER Bookmarking Initiative from Paul Horner, James Outterside, Suzanne Hardy and Simon Cotterill, University of Newcastle.
3. Representing Aggregations of Open Educational Resources Utilising OAI-ORE from Alex Lydiate, Vic Jenkins and Kyriaki Anagnostopoulou, University of Bath
4. CaPRéT – Cut and Paste Reuse Tracking from Brandon Muramatsu, MIT OEIT, and Justin Ball and Joel Duffin, Tatemae.

We’d welcome any constructive comments on these proposals, either here or on the oer-discuss mailing list, which is open to all. You can catch up on the discussions at the oer-discuss archive.

The outcome of the call will be decided by a panel of JISC and CETIS staff on Tuesday the 19th of April. All comments posted by the end of the day on Monday the 18th will be considered.

Thanks to all those who were brave enough to submit public proposals to this experimental open call, and also to those who have already contributed to the discussion!

A TAACCCTful mandate? OER, SCORM and the $2bn grant
http://blogs.cetis.org.uk/lmc/2011/01/25/a-taaccctful-mandate-oer-scorm-and-the-2bn-grant/
Tue, 25 Jan 2011

Last week’s announcement that the US Department of Labor is planning to allocate $2 billion in grant funds to the Trade Adjustment Assistance Community College and Career Training (TAACCCT) grants programme over the next four years has already generated a huge response online. $2 billion is a lot of money #inthiscurrentclimate, or indeed in any climate; however, the reason this announcement has generated so much heat is that it has been billed as $2 billion for open educational resources, and furthermore it mandates the use of SCORM. Although there has been almost universal approval that the TAACCCT call mandates the use of the CC BY licence, the inclusion of the SCORM mandate has stirred up a bit of a hornets’ nest. John Robertson of CETIS has helpfully curated the tweet storm as it escalated over the course of the day. You can follow it here: To SCORM or not to SCORM.

Before attempting to summarise the arguments for and against this mandate, it is worth highlighting the following points from the Department of Labor’s Solicitation for Grant Applications:

The programme, which is releasing $500 million in the first instance, states its aim as follows:

“The TAACCCT provides community colleges and other eligible institutions of higher education with funds to expand and improve their ability to deliver education and career training programs that can be completed in two years or less, are suited for workers who are eligible for training under the Trade Adjustment Assistance for Workers program, and prepare program participants for employment in high-wage, high-skill occupations.”

It goes on to state that:

“The Department is interested in accessible online learning strategies that can effectively serve the targeted population. Online learning strategies can allow adults who are struggling to balance the competing demands of work and family to acquire new skills at a time, place and pace that are convenient for them.”

The SCORM mandate appears under the heading Funding Priorities:

“All successful applicants that propose online and technology-enabled learning projects will develop materials in compliance with SCORM, as referenced in Section I.B.4 of this SGA. These courses and materials will be made available to the Department for free public use and distribution, including the ability to re-use course modules, via an online repository for learning materials to be established by the Federal Government.”

And the Creative Commons mandate is covered in Funding Restrictions: Intellectual Property rights.

“In order to further the goal of career training and education and encourage innovation in the development of new learning materials, as a condition of the receipt of a Trade Adjustment Assistance Community College and Career Training Grant (“Grant”), the Grantee will be required to license to the public (not including the Federal Government) all work created with the support of the grant (“Work”) under a Creative Commons Attribution 3.0 License (“License”).”

It is interesting to note that although the call mandates license and content interoperability formats, it does not mandate the use of a specific metadata standard:

“All grant products will be provided to the Department with meta-data (as described in Section III.G.4) in an open format mutually agreed-upon by the grantee and the Department.”

The section in question refers to an appendix of keywords and tags which grantees are advised to use, although I am unclear from the call whether “grant products” refers to bids and documentation or to actual educational resources.

To coincide with the publication of the call, Creative Commons issued a press release with the following endorsement from incoming CEO Cathy Casserly:

“This exciting program signifies a massive leap forward in the sharing of education and training materials. Resources licensed under CC BY can be freely used, remixed, translated, and built upon, and will enable collaboration between states, organizations, and businesses to create high quality OER. This announcement also communicates a commitment to international sharing and cooperation, as the materials will be available to audiences worldwide via the CC license.”

Some bloggers, including Dave Cormier, University of Prince Edward Island, and Stephen Downes, National Research Council of Canada, initially responded with cautious optimism, seeing this initiative as a possible step towards ending “the text book industry as we know it.”

Cormier commented:

“This kind of commitment from the government, money at that scale, that much commitment to the idea of creative commons… this tells me that we might be ready to rid ourselves of the $150 introductory textbook and move to open content.”

Downes concurred because:

“First, government support removes the risk from using a Creative Commons license. Second, it’s enough money. $2 billion will actually produce a measurable amount of educational content. And third, it’s not the only game in town.”

However Cormier was sufficiently incensed about the inclusion of the SCORM mandate to launch a petition on Twitter titled “Educational professionals against the enforcement of SCORM by the US Department of Education.”

(In actual fact the TAACCCT call comes from the Department of Labor rather than the Department of Education.)

Rob Abel, CEO of IMS, also responded in no uncertain terms to the inclusion of the SCORM mandate. In a blog post and open letter to IMS members, Abel quoted President Obama’s pledge to “remove outdated regulations that stifle job creation and make our economy less competitive,” adding that the inclusion of the SCORM mandate is a “clear violation” of this pledge. Abel claimed that the SCORM mandate is a “ticking time bomb” that will “add enormous cost to the creation of the courses and to the platforms that must deliver them” and “stifle the intended outcomes of the historic TAACCCT investment”. Abel provides a long and detailed critique of SCORM and points out that “IMS has spent the last five years bringing to market standards that will actually deliver on what SCORM promised”, namely Common Cartridge, Learning Tools Interoperability (LTI), and Learning Information Services (LIS).

Chuck Severance, University of Michigan School of Information and IMS consultant, agreed with Abel’s comments while expanding his self-described rant into a critique of OER initiatives more generally. Severance argued that “this obsession with ‘making and publishing’ OER artefacts that are unsuitable for editing is why nearly all of this kind of work ends up dead and obsolete.” He adds that most OER initiatives “make some slick web site and then try to drive people to their site – virtually none of these efforts can demonstrate any real learning impact.” However Severance does believe that if educational resources are published in a remixable format with a Creative Commons licence they can be of real value, and cites his own book Python for Informatics by way of example. He also concedes that the problem is “difficult to solve” before concluding “IMS Common Cartridge is the best we have but it needs a lot more investment in both the specification and tools to support the specification fully.”

A rather more balanced argument was put forward by Michael Feldstein, Academic Enterprise Solutions, Oracle Corporation. While he agreed that mandating SCORM was a mistake, he noted that SCORM and IMS CC have “substantially different affordances that are appropriate for substantially different use cases”. While recognizing that it is understandable that “the Federal government wants to mandate a particular standard for content reuse”, he added that mandating any specific standard, whether SCORM, IMS CC, RSS or Atom, is likely to be problematic because “educational content re-use is highly context-dependent”. Instead Feldstein suggests:

“The better thing to do would be to require that grantees include in their proposal a plan for promoting re-use, which would include the selection of appropriate format standards.”

Which is exactly the approach taken by the JISC / Higher Education Academy OER Programmes.

Reflecting on these developments from across the pond, I have to agree that mandating the use of SCORM for the creation of open educational resources does strike me as being somewhat curious, to say the least. This is very much at odds with the approach taken by the JISC / HEA OER Programmes: UK OER does not mandate the use of any specific standards; however, there are detailed technical guidelines for the programme and CETIS provides technical support to all projects. However, the TAACCCT programme is not an OER programme in the same sense as the JISC / HEA UK OER Programmes. It’s interesting to note that while the White House announcement by Hal Plotkin focuses squarely on open educational resources, the call itself uses slightly different terminology, referring instead to “open-source courses”.

As CETIS’ Scott Wilson pointed out on Twitter, given TAACCCT’s focus on adaptive self-paced interactive content, the initiative appears to be more akin to the National Learning Network’s NLN Materials programme, which ran for five years from 1999 and which also mandated the use of SCORM, with a greater or lesser degree of success depending on your perspective. This reflection led Amber Thomas of JISC to comment:

“It’s not that mandating standards for learning materials is always wrong, it was the right thing for the NLN Materials – its more nuanced than that. It’s about the point people are at and which direction things need taking in.”

At this stage and at this remove it’s difficult to comment further without knowing more about the rationale behind the Department of Labor’s decision to mandate the use of SCORM for this particular programme. Needless to say, CETIS will be following with interest and will continue to disseminate any further developments.
