Lorna Campbell » aggregated content (Cetis Blog) http://blogs.cetis.org.uk/lmc

Back to the Future – revisiting the CETIS codebashes
http://blogs.cetis.org.uk/lmc/2012/12/05/codebashes/ (Wed, 05 Dec 2012)

As a result of a request from the Cabinet Office to contribute to a paper on the use of hackdays during the procurement process, CETIS have been revisiting the “Codebash” events that we ran between 2002 and 2007. The codebashes were a series of developer events that focused on testing the practical interoperability of implementations of a wide range of content specifications current at the time, including IMS Content Packaging, Question and Test Interoperability, Simple Sequencing (I’d forgotten that even existed!), Learning Design and Learning Resource Meta-data, IEEE LOM, Dublin Core Metadata and ADL SCORM. The term “codebash” was coined to distinguish the CETIS events from the ADL Plugfests, which tested the interoperability and conformance of SCORM implementations. Over a five-year period CETIS ran four content codebashes that attracted participants from 45 companies and 8 countries. In addition to the content codebashes, CETIS also ran additional events focused on individual specifications, such as IMS QTI, or on the outputs of specific JISC programmes, such as the Designbashes and Widgetbash facilitated by Sheila MacNeill.

As there was considerable interest in the codebashes and we were frequently asked for guidance on running events of this kind, I wrote and circulated a Codebash Facilitation document. It’s years since I’ve revisited this document, but I looked it out for Scott Wilson a couple of weeks ago as potential input for the Cabinet Office paper he was in the process of drafting together with a group of independent consultants. The resulting paper, Hackdays – Levelling the Playing Field, can be read and downloaded here.

The CETIS codebashes have been rather eclipsed by hackdays and connectathons in recent years; however, it appears that these very practical, focused events still have something to offer the community, so I thought it might be worth summarising the Codebash Facilitation document here.

Codebash Aims and Objectives

The primary aim of CETIS codebashes was to test the functional interoperability of systems and applications that implemented open learning technology interoperability standards, specifications and application profiles. In reality that meant bringing together the developers of systems and applications to test whether it was possible to exchange content and data between their products.

A secondary objective of the codebashes was to identify problems, inconsistencies and ambiguities in published standards and specifications. These were then fed back to the appropriate maintenance body in order that they could be rectified in subsequent releases of the standard or specification. In this way codebashes offered developers a channel through which they could contribute to the specification development process.

A tertiary aim of these events was to identify and share common practice in the implementation of standards and specifications and to foster communities of practice where developers could discuss how and why they had taken specific implementation decisions. A subsidiary benefit of the codebashes was that they acted as useful networking events for technical developers from a wide range of backgrounds.

The CETIS codebashes were promoted as closed technical interoperability testing events, though every effort was made to accommodate all developers who wished to participate. The events were aimed specifically at technical developers and we tried to discourage companies from sending marketing or sales representatives, though I should add that we were not always successful! Managers who played a strategic role in overseeing the development and implementation of systems and specifications, however, were encouraged to participate.

Capturing the Evidence

Capturing evidence of interoperability during early codebashes proved to be extremely difficult, so Wilbert Kraan developed a dedicated website built on a Zope application server to facilitate the recording process. Participants were able to register the tools and applications that they were testing and to upload content or data generated by these applications. Other participants could then take this content and test it in their own applications, allowing “daisy chains” of interoperability to be recorded. In addition, developers had the option of making their contributions openly available to the general public or visible only to other codebash participants. All participants were encouraged to register their applications prior to the event and to identify specific bugs and issues that they hoped to address. Developers who could not attend in person were able to participate remotely via the codebash website.
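
The original Zope application is long gone, so the following is only a toy sketch of the underlying idea, in Python with invented names: each recorded test asserts that content exported by one application was successfully imported by another, and the daisy chains are simply walks through the resulting graph.

```python
# Toy model of codebash interoperability records (all names hypothetical).
from collections import defaultdict

imported_by = defaultdict(set)  # exporter -> applications that read its output

def record_test(exporter, importer):
    """Record one successful content exchange between two applications."""
    imported_by[exporter].add(importer)

def daisy_chains(app, chain=None):
    """Yield every chain of successful exchanges starting at `app`."""
    chain = (chain or []) + [app]
    yield chain
    for nxt in imported_by[app]:
        if nxt not in chain:  # avoid looping round a cycle forever
            yield from daisy_chains(nxt, chain)

record_test("PackagerA", "PlayerB")
record_test("PlayerB", "RepositoryC")
for c in daisy_chains("PackagerA"):
    print(" -> ".join(c))
```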

IPR, Copyright and Dissemination

The IPR and copyright of all resources produced during the CETIS codebashes remained with the original authors, and developers were neither required nor expected to expose the source code of their tools and applications to other participants.

Although CETIS disseminated the outputs of all the codebashes, and identified all those that had taken part, the specific performance of individual participants was never revealed. Bug reports and technical issues were fed back to relevant standards and specifications bodies, and a general overview of the levels of interoperability achieved was disseminated to the developer community. All participants were free to publish their own reports on the codebashes; however, they were strongly discouraged from publicising the performance of other vendors and potential competitors. At the time, we did not require participants to sign non-disclosure agreements, and relied entirely on developers’ sense of fair play not to reveal their competitors’ performance. Thankfully no problems arose in this regard, although one or two of the bigger commercial VLE developers were very protective of their code.

Conformance and Interoperability

It’s important to note that the aim of the CETIS codebashes was to facilitate increased interoperability across the developer community, rather than to evaluate implementations or test conformance. Conformance testing can be difficult and costly to facilitate and govern and does not necessarily guarantee interoperability, particularly if applications implement different profiles of a specification or standard. Events that enable developers to establish and demonstrate practical interoperability are arguably of considerably greater value to the community.

Although the CETIS codebashes had a very technical focus, they were facilitated as social events, and this social interaction proved to be a crucial component in encouraging participants to work closely together to achieve interoperability.

Legacy

These days the value of technical developer events in the domain of education is well established, and a wide range of specialist events have emerged as a result. Some are general in focus, such as the hugely successful DevCSI hackdays; others are more specific, such as the CETIS Widgetbash, the CETIS / DevCSI OER Hackday and the EDINA Wills World Hack running this week, which aims to build a Shakespeare Registry of metadata describing digital resources relating to Shakespeare, covering anything from his work and life to modern performance, interpretation, or geographical and historical contextual information. At the time, however, aside from the ADL Plugfests, the CETIS codebashes were unique in offering technical developers an informal forum to test the interoperability of their tools and applications, and I think it’s fair to say that they had a positive impact not just on developers and vendors but also on the specification development process and the education technology community more widely.

Links

Facilitating CETIS CodeBashes paper
Codebash 1-3 Reports, 2002 – 2005
Codebash 4, 2007
Codebash 4 blog post, 2007
Designbash, 2009
Designbash, 2010
Designbash, 2011
Widgetbash, 2011
OER Hackday, 2011
QTI Bash, 2012
Dev8eD Hackday, 2012

CETIS OER Visualisation Project
http://blogs.cetis.org.uk/lmc/2011/12/06/cetis-oer-visualisation-project/ (Tue, 06 Dec 2011)

As part of our work in the areas of open educational resources and data analysis, CETIS are undertaking a new project to visualise the outputs of the JISC / HEA Open Educational Resource Programmes, and we are very lucky to have recruited data wrangler extraordinaire Martin Hawksey to undertake this work. Martin’s job will be first to develop examples and workflows for visualising OER project data stored in the JISC CETIS PROD database, and second to produce visualisations around OER content and collections produced by the JISC / HEA programmes. Oh, and he’s only got 40 days to do it! You can read Martin’s thoughts on the task ahead over at his own blog MASHe:

40 days to let you see the impact of the OER Programme #ukoer

PROD Data Analysis

A core aspect of CETIS support for the OER Phase 1 and 2 Programmes has been the technical analysis of tools and systems used by the projects. The primary data collection tool used for this purpose is the PROD database. An initial synthesis of this data has already been completed by R. John Robertson; however, there is potential for further analysis to uncover richer information about the technologies used to create and share OERs.
This part of the project will aim to deliver:

  • Examples of enhanced data visualisations from OER Phase 1 and 2.
  • Recommendations on use and applicability of visualisation libraries with PROD data to enhance the existing OER dataset.
  • Recommendations and example workflows, including sample database queries used to create the enhanced visualisations.

And we also hope this work will uncover some general issues including:

  • Issues around potential workflows for mirroring data from our PROD database and linking it to other datasets in our Kasabi triple store.
  • Identification of other datasets that would enhance PROD queries, and some exploration of how to transform and upload them.
  • General recommendations on wider issues of data, and observed data maintenance issues within PROD.

Visualising OER Content Outputs

The first two phases of the OER Programme produced a significant volume of content; however, the programme requirements were deliberately agnostic about where that content should be stored, aside from a requirement to deposit or reference it in Jorum. This has enabled a range of authentic practices to surface regarding the management and hosting of open educational content, but it also means that there is no central directory of UKOER content and no quick way to visualise the programme outputs. For example, the content in Jorum varies from a single record for a whole collection to a record per item. Jorum is working on improved ways to surface content, and JISC has funded the creation of a prototype UKOER showcase; in the meantime, though, it would be useful to be able to visualise the outputs of the Programmes in a compelling way. For example:

  • Collections mapped by geographical location of the host institution.
  • Collections mapped by subject focus.
  • Visualisations of the volume of collections.
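
As a toy illustration of the simplest of these, the sketch below draws a chart of collections by subject with matplotlib; the subjects and counts are invented placeholders, not real programme data.

```python
# Placeholder data only; real figures would come from PROD and Jorum.
import matplotlib.pyplot as plt

subjects = ["Engineering", "Medicine", "Humanities", "Sciences"]  # hypothetical
collections = [12, 9, 15, 7]                                      # hypothetical counts

plt.barh(subjects, collections)
plt.xlabel("Number of OER collections")
plt.title("UKOER collections by subject focus (placeholder data)")
plt.tight_layout()
plt.savefig("oer_collections_by_subject.png")  # a static image anyone can view
```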

We realise that the data that can be surfaced in such a limited period will be incomplete, and that as a result these visualisations will not be comprehensive; however, we hope that the project will be able to produce compelling, attractive images that can be used to represent the work of the programme.

The deliverables of this part of the project will be:

  • Blog posts on the experience of capturing and using the data.
  • A set of static or dynamic images that can be viewed without specialist software, with the raw data also available.
  • Documentation/recipes on the visualisations produced.
  • Recommendations to JISC and JISC CETIS on visualising content outputs.
The #cetis10 Locate, Collate and Aggregate extravaganza
http://blogs.cetis.org.uk/lmc/2010/11/08/the-cetis10-locate-collate-and-aggregate-extravaganza/ (Mon, 08 Nov 2010)

Next week Phil, John and I will be running a session at the JISC CETIS conference with the snappy title Locate, Collate and Aggregate. The aim of this session is to explore innovative technical approaches related to, but not confined to, the JISC / HEA OER 2 Programme which are applicable to finding, using and managing content for teaching and learning, including:

  • Building collections of OERs.
  • Drawing together information about learning resources.
  • Building rich descriptions from disparate sources of information.

We’ve got an eclectic bunch of contributors lined up including David Kay, Sero; Vic Lyte, MIMAS; James Burke, deBurca; Chris Taylor, oErbital; Rob Pearce, Engineering a Lo-Carbon Future; Pierre Far, OCW Search; Pat Lockley, Xpert and some bloke called Phil Barker. Our contributors will be presenting and leading short discussions on a diverse range of topics including cross-silo semantic search opportunities, using mainstream and niche search engines to discover OERs and automatic selection of resources for a UKOER collection.

We’ve also been promised the world premiere of the long awaited dogme masterpiece The Plight of Metadata by acclaimed repository manager and film maker Pat Lockley. Mr Lockley assures us that the film will be “awesome, despite the limited CGI budget.”

So who should attend this Locate, Collate and Aggregate extravaganza? Anyone interested in open content, or innovative use and management of teaching and learning resources: techies, geeks, RSS wranglers, data miners and even the odd repository manager.

And what do we want? We want ideas! Lots of them! We want ideas, comments and input to other people’s ideas. We’re also looking for ideas for JISC CETIS technical mini-projects we can potentially take forward to run in parallel with the OER 2 Programme.

We’re not quite sure what the outputs of this session will be but we’re aiming to go beyond the boundaries of JISC programmes and domain focussed initiatives and we’re hoping for cross pollination and propagation of innovation throughout the nation.

cetiswmd Activities
http://blogs.cetis.org.uk/lmc/2010/10/29/cetiswmd-activites/ (Fri, 29 Oct 2010)

Phil has already blogged a summary of last week’s memorably tagged What Metadata, or cetiswmd, meeting. During the latter part of the meeting we split up to discuss practical tasks and projects that the community could undertake, with support from CETIS and JISC, to explore the kind of issues that were raised at the meeting. We agreed to draft a rough outline of some of these potential activities and then feed them back to the community for comment and discussion. So if you have any thoughts or suggestions please let us know. CETIS are proposing to set up a task group or working group of some kind to develop this work and to provide a forum to explore technical issues relating to resource description, management and discovery in the context of open educational resources.

I helped to facilitate the breakout group that focused on what we might be able to achieve by looking at existing metadata collections. Here’s an outline of the activity we discussed.

Textual Analysis of Metadata Records

A large number of existing collections of metadata records were identified by participants, including NDLR, JorumOpen, OU OpenLearn and US data.gov collections, all of which could be analysed to ascertain which fields are used most widely and how they are described. Clearly this metadata exists in a wide range of heterogeneous formats, so the task is not as simple as comparing like with like. The “traditional” way to compare different metadata schemas and records is through the use of crosswalks. However, developing crosswalks is a non-trivial task that in itself requires considerable time and resource.

An alternative approach was put forward by ADL’s Dan Rehak, who suggested treating the metadata collections as text, stripping out fields and formatting, and running the raw data through a semantic analysis tool such as Open Calais. Open Calais uses natural language processing, machine learning and other methods to analyse documents and find the entities within them. Calais claim to go “well beyond classic entity identification and return the facts and events hidden within your text as well.”
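
A minimal sketch of that approach in Python, assuming the records are available as XML files (the folder name is hypothetical, and the call to the analysis service itself is omitted):

```python
# Flatten XML metadata records (e.g. LOM or Dublin Core) to plain text,
# discarding all field structure, ready to feed to an entity-extraction tool.
import glob
import xml.etree.ElementTree as ET

def record_to_text(path):
    """Strip elements and attributes, keeping only the human-readable text."""
    root = ET.parse(path).getroot()
    return " ".join(t.strip() for t in root.itertext() if t.strip())

corpus = [record_to_text(f) for f in glob.glob("records/*.xml")]  # hypothetical folder
print(corpus[0][:200])  # the raw text that would be sent for semantic analysis
```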

Applying data mining and semantic analysis techniques to a large corpus of educational metadata records would be an interesting exercise in itself but until we attempt such an analysis it’s hard to speculate what it might be possible to achieve with the output data. It would certainly be valuable to compare frequently occurring terms and relationships with an analysis of search web logs to see if the metadata records are actually describing the characteristics that users are searching for.

There was general agreement amongst participants that this would be an interesting and innovative project. Participants felt it would be advisable to start small, with a comparison of two or three metadata collections, possibly those of JorumOpen, Xpert and OU OpenLearn, before taking this any further.

One thing I am slightly unsure about regarding this method is that Open Calais identifies the relationships between words, but once we strip out the metadata encoding of our sample records this information will be lost. I don’t know enough about how these semantic analysis tools work to know whether this is a problem or if they are clever enough for this not to be an issue. I suppose the only way we’ll find out if the results are sensible or useful is to give it a try!

I’d also be very interested to hear how this approach compares with work being undertaken on a much larger scale by the Digging into Data Challenge projects and Mimas’ Bringing Meaning into Search initiative.

Other Activities

Phil has already summarised the other possible tasks and activities put forward by the other breakout groups which include:

  • Establishing a common format for sharing search logs (a hypothetical sketch of one possible shape follows this list).
  • Identifying which fields are used on advanced search forms and how many people use advanced search facilities.
  • Analysing the relative proportion of users who search and browse for resources, and how many people click onwards from the initial resources.
  • Further developing the search questionnaire used by David Davies. If sufficient responses to the same questions could be gathered, this would facilitate meta-analysis of the results.
  • Working with communities around specific repositories to find out what works and doesn’t work across individual platforms and installations.
  • Creating a research question inventory on the CETIS wiki and inviting people to put forward ideas.
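
No such shared format existed, which was the point of the first suggestion; purely as a hypothetical sketch, one obvious shape is a line-per-event JSON log that any repository could emit and a later meta-analysis could consume:

```python
# Hypothetical common search-log format; field names are illustrative only.
import json
from datetime import datetime, timezone

def log_search(source, query, results_returned, clicked=None):
    """Append one anonymised search event in a repository-neutral shape."""
    event = {
        "source": source,                       # which site produced the log
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,                         # raw search terms, no user data
        "results_returned": results_returned,
        "clicked": clicked or [],               # identifiers of results followed
    }
    with open("search-log.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

log_search("JorumOpen", "introductory thermodynamics", 14, clicked=["oer/42"])
```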

If anyone has any comments or suggestions on any of the above ideas we’d love to hear from you!

When is Linked Data not Linked Data? – A summary of the debate
http://blogs.cetis.org.uk/lmc/2010/03/16/when-is-linked-data-not-linked-data-a-summary-of-the-debate/ (Tue, 16 Mar 2010)

One of the activities identified during last December’s Semantic Technology Working Group meeting to be taken forward by CETIS was the production of a briefing paper disambiguating some of the terminology for those who are less familiar with this domain. The following terms in particular were highlighted:

  • Semantic Web
  • semantic technologies
  • Linked Data
  • linked data
  • linkable data
  • Open Data

I’ve finally started drafting this briefing paper and, unsurprisingly, defining the above terms is proving to be a non-trivial task! Pinning down agreed definitions for Linked Data, linked data and linkable data is particularly problematic. And I’m not the only one having trouble. If you look up Semantic Web and Linked Data / linked data on Wikipedia you will find entries flagged as having multiple issues. It does rather feel like we’re edging close to holy war territory here. But having said that, I do enjoy a good holy war, as long as I’m watching safely from the sidelines.

So what’s it all about? As far as I can make out, much of the debate boils down to whether Linked Data must adhere to the four principles outlined in Tim Berners-Lee’s Linked Data Design Issues, and in particular whether use of RDF and SPARQL is mandatory. Some argue that RDF is integral to Linked Data; others suggest that while it may be desirable, use of RDF is optional rather than mandatory. Some reserve the capitalised term Linked Data for data that is based on RDF and SPARQL, preferring lower case “linked data”, or “linkable data”, for data that uses other technologies.

The fact that the Linked Data Design Issues paper is a personal note by Tim Berners-Lee, and is not formally endorsed by the W3C, also contributes to the ambiguity. The note states:

  1. Use URIs as names for things
  2. Use HTTP URIs so that people can look up those names.
  3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)
  4. Include links to other URIs. so that they can discover more things.

I’ll refer to the steps above as rules, but they are expectations of behaviour. Breaking them does not destroy anything, but misses an opportunity to make data interconnected. This in turn limits the ways it can later be reused in unexpected ways. It is the unexpected re-use of information which is the value added by the web. (Berners-Lee, http://www.w3.org/DesignIssues/LinkedData.html)
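
To make the four rules concrete, here is a minimal sketch using the Python rdflib library; every URI in it is a hypothetical example, and whether RDF is required at all is precisely the point at issue in the posts below.

```python
# The four rules in miniature (all URIs are hypothetical examples).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/id/")  # HTTP URIs as names (rules 1 and 2)

g = Graph()
course = EX["course/101"]
g.add((course, RDFS.label, Literal("Introductory statistics")))  # useful information (rule 3)
# Link out to a URI minted elsewhere, so agents can discover more things (rule 4).
g.add((course, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Statistics")))

print(g.serialize(format="turtle"))
```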

In the course of trying to untangle some of the arguments both for and against the necessity of using RDF and SPARQL, I’ve read a lot of very thoughtful blog posts which it may be useful to link to here for future reference. Clearly these are not the only, or indeed the most recent, posts that discuss this most topical of topics; these happen to be the ones I have read and which I believe present a balanced overview of the debate in such a way as to be of relevance to the JISC CETIS community.

Linked data vs. Web of data vs. …
– Andy Powell, Eduserv, July 2009

The first useful post I read on this particular aspect of the debate is Andy Powell’s from July 2009. This post resulted from the following question Andy raised on Twitter:

is there an agreed name for an approach that adopts the 4 principles of #linkeddata minus the phrase, “using the standards (RDF, SPARQL)” ??

Andy was of the opinion that Linked Data “implies use of the RDF model – full stop” adding:

“it’s too late to re-appropriate the “Linked Data” label to mean anything other than “use http URIs and the RDF model”.”

However, he is unable to provide a satisfactory answer to his own question (i.e. what do you call linked data that does not use the RDF model?), and despite exploring alternative models he concludes by professing himself to be worried about this.

Andy returned to this theme in a more recent post in January 2010, Readability and linkability, which ponders the relative emphasis given to readability and linkability by initiatives such as the JISC Information Environment. Andy’s general principles have not changed, but he presents the term machine-readable data (MRD) as a potential answer to the question he originally asked in his earlier post.

Does Linked Data need RDF?
– Paul Miller, The Cloud of Data, July 2009

Paul Miller’s post is partially a response to Andy’s query. Paul begins by noting that while RDF is key to the Semantic Web and

“an obvious means of publishing — and consuming — Linked Data powerfully, flexibly, and interoperably.”

he is uneasy about conflating RDF with Linked Data and with assertions that

“‘Linked Data’ can only be Linked Data if expressed in RDF.”

Paul discusses the wording and status of Tim Berners-Lee’s Linked Data Design Issues and suggests that it can be read either way. He then goes on to argue that by elevating RDF from the best mechanism for achieving Linked Data to the only permissible approach we risk barring a large group

“with data to share, a willingness to learn, and an enthusiasm to engage.”

Paul concludes by asking the question:

“What are we after? More Linked Data, or more RDF? I sincerely hope it’s the former.”

No data here – just Linked Concepts and Linked, open, semantic?
– Paul Walk, UKOLN, July & November 2009

Paul Walk has published two useful posts on this topic; the first summarising and commenting on the debate sparked by the two posts above, and the second following the Giant Global Graph session at the CETIS 2009 Conference. This latter post presents a very useful attempt at disambiguating the terms Open Data, Linked Data and Semantic Web. Paul also tries to untangle the relationship between these three memes and helpfully notes:

  • data can be open, while not being linked
  • data can be linked, while not being open
  • data which is both open and linked is increasingly viable
  • the Semantic Web can only function with data which is both open and linked

So What Is It About Linked Data that Makes it Linked Data™?
– Tony Hirst, Open University, March 2010

Much more recently, Tony Hirst published this post, which begins with a version of the four Linked Data principles cut from Wikipedia. This particular version makes no mention of either RDF or SPARQL. Tony goes on to present a very neat example of data linked using HTTP URIs and Yahoo Pipes and asks

“So, the starter for ten: do we have an example of Linked Data™ here?”

Tony broadly believes the answer is yes, and is of a similar opinion to Paul Miller that too rigid an adherence to RDF and SPARQL

“will put a lot of folk who are really excited about the idea of trying to build services across distributed (linkable) datasets off…”

Perhaps more controversially, Tony questions the necessity of universally unique URIs that resolve to content, suggesting that:

“local identifiers can fulfil the same role if you can guarantee the context as in a Yahoo Pipe or a spreadsheet”

Tony signs off with:

“My name’s Tony Hirst, I like linking things together, but RDF and SPARQL just don’t cut it for me…”

Meshing up a JISC e-learning project timeline, or: It’s Linked Data on the Web, stupid
– Wilbert Kraan, JISC CETIS, March 2010

Back here at CETIS Wilbert Kraan has been experimenting with linked data meshups of JISC project data held in our PROD system. In contrast to the approach taken by Tony, Wilbert goes down the RDF and SPARQL route. Wilbert confesses that he originally believed that:

“SPARQL endpoints were these magic oracles that we could ask anything about anything.”

However, his attempts to mesh up real data sets on the web highlighted the fact that SPARQL has no federated search facility.

“And that the most obvious way of querying across more than one dataset – pulling in datasets from outside via SPARQL’s FROM – is not allowed by many SPARQL endpoints. And that if they do allow FROM, they frequently cr*p out.”

Wilbert concludes that:

“The consequence is that exposing a data set as Linked Data is not so much a matter of installing a SPARQL endpoint, but of serving sensibly factored datasets in RDF with cool URLs, as outlined in Designing URI Sets for the UK Public Sector (pdf).”
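
For readers unfamiliar with the FROM pattern Wilbert mentions, the sketch below shows the shape of such a query using the Python SPARQLWrapper library; the endpoint and graph URIs are hypothetical, and, as he warns, many real endpoints will reject the FROM clause or fail on it.

```python
# Pulling an external dataset into a query via FROM (all URIs hypothetical).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?project ?title
    FROM <http://example.org/data/prod.rdf>
    WHERE { ?project <http://purl.org/dc/terms/title> ?title . }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()  # expect errors from endpoints that bar FROM
for row in results["results"]["bindings"]:
    print(row["project"]["value"], row["title"]["value"])
```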

And in response to a direct query regarding the necessity of RDF and SPARQL to Linked Data, Wilbert answered:

“SPARQL and RDF are a sine qua non of Linked Data, IMHO. You can keep the label, widen the definition out, and include other things, but then I’d have to find another label for what I’m interested in here.”

Which kind of brings us right back to the question that Andy Powell asked in July 2009!

So there you have it. A fascinating but currently inconclusive debate, I believe. Apologies for the length of this post. Hopefully one day it will go on to accompany our “Semantic Web and Linked Data” briefing paper.

JISC Persistent Identifiers Meeting: Teaching and Learning Materials
http://blogs.cetis.org.uk/lmc/2010/02/09/jisc-persistent-identifiers-meeting-teaching-and-learning-materials/ (Tue, 09 Feb 2010)

During the second half of the JISC Persistent Identifiers Meeting, participants split into five groups to discuss identifier requirements for the following resource types: research papers, research data, learning materials, cultural heritage and administrative information.

Phil Barker, Matt Jukes, Chris Awre and I composed the small group that discussed teaching and learning materials and these were our conclusions.

Constraints

Much of the discourse of the day did not sit comfortably with the teaching and learning domain. There was an implicit assumption that resources reside in repositories of some kind and are accompanied by quality-controlled metadata.

In reality, teaching and learning materials are stored in many different places that cannot be regarded as repositories “no matter how big the quotation marks”. These resources tend to be unmanaged and are not persistent.

Learning materials have relationships to many other entities, e.g. the concept being learned, educational activities, course instances, individual people and social networks. These entities are poorly understood and modelled, and are difficult to identify.

There is still a “craft” view of the process and practice of teaching and consequently there is some resistance to formalising the management of resources and activities.

There is no clearly identifiable lifecycle for teaching and learning materials and frequently no formal mechanism for their management.

Learning materials are “made public” but they are not “published” in the formal sense and metadata is often poor or non existent.

Use Cases

Composite objects – learning materials are frequently composite objects that may be ordered in one or more ways. Identifiers need to be able to identify the component parts, specify the order and potentially also to recompose and reorder them.

Open educational resources – once resources are released under an open licence there are likely to be multiple copies, formats and versions all over the place. How do you express the relationships between these multiple entities?

Resource / course relationship – what is the relationship between learning materials and concepts such as a course or an educational activity? It is notoriously difficult to assign an educational level to a learning resource, but it is often much easier to assign an educational level to a course. Is it possible to extrapolate from the course to the resource?

Drivers

Institutions are beginning to recognise that learning materials are valuable for the core business of higher education, i.e. teaching and learning; and that it may be beneficial to manage them for quality and efficiency gains.

The OER movement may be a significant driver for further work in this area.

What approaches are being used at present?

There is no clearly identifiable workflow behind the use of learning materials. The URL of a learning resource tends to become its identifier and is dependent on where the resource is stored, e.g. a VLE, a repository or Slideshare. Clearly, however, the URL refers to a specific instantiation of a resource in a specific location.

There is very little in the way of established practice in terms of management and identification of teaching and learning materials. Everything is in flux. In the terminology of the Repository Ecology report, things are still a “mess”. A mess being:

“a complex issue that is not well formulated or defined”

Issues regarding sustainability and scalability

Do teaching and learning materials actually need to persist? There are use cases for persistence, e.g. non-repudiation. Also, teachers have to be confident that a resource will be there the next time they need to use it.

Does it actually matter if resources are scattered all over the place with metadata that is poor to nonexistent?

And finally…
…if you know the answer to that last question please comment below!

JISC Persistent Identifiers Meeting: General Discussion
http://blogs.cetis.org.uk/lmc/2010/02/09/jisc-persistent-identifier-meeting-general-discussion/ (Tue, 09 Feb 2010)

Last week I attended a very productive and unusually amicable meeting on identifiers run by JISC and ably facilitated by Chris Awre. Besides their obvious critical relevance, my interest in identifiers goes back to an international symposium on the topic that CETIS hosted way back in 2003. That particular event generated a voluminous report and a series of use cases that I believe are still relevant today. The Digital Curation Centre ran a subsequent identifiers event in 2005 which presented various identifier technologies and a series of case studies, and sparked considerable debate. I was interested to attend last week’s meeting to see how the debate regarding identifier requirements and technologies had moved forward given the significant developments of the intervening years, including Web 2.0, social networking and OER.

And you know what? I think the debate has matured significantly. There was much greater acceptance that one size will never fit all, that there will always be multiple technologies to choose from, that the choice of identifier scheme frequently depends on the choice of technology platform (e.g. if you run DSpace you will use Handles), and that the technology is the easy part to solve. Previous identifier events tended to degenerate into holy wars, but there was admirably little crusading evident last week, although there was some flak flying around on the back channel.

I was slightly frustrated that, as usual, much of the debate focused implicitly on scholarly works and a particular form of “publication”. However, there was much that was of relevance to the teaching and learning domain too. Here are some of the statements from the event that I would endorse:

Chris Awre, University of Hull

The emphasis on identifiers themselves can be distracting; it’s better to focus on the role and purpose of identifiers.

Identifying digital content at different phases of its lifecycle is key to the management of that content.

Identifiers need to have an associated meaning. An identifier is only an identifier if it is associated with a thing, otherwise it is just a string.

Identifiers need to disambiguate what they are identifying.

Henry Thompson, University of Edinburgh

Any naming schemes for sharing on the web are only as good as the services behind them.

Persistence of activity is critical, not persistence of technology. There are no purely technical solutions to vulnerabilities.

The only naming scheme of any technical sophistication is the Linnaean taxonomic scheme. (!)

Make it easy for ordinary users to mint good URIs.

Les Carr, University of Southampton

Persistence of URIs can be made difficult by institutions’ view of the web purely as a marketing tool.

Bas Cordewener, SURF Foundation

DOI is the only system that has a business model, but it can be expensive for repositories to implement.

Commercial influences should be kept at bay but we need to recognise that there are many different systems meeting different requirements.

Hugh Glaser, University of Southampton

Authority is established not bestowed.

Conclusion and JISC Interventions

The general conclusion of this event was that technology is not the problem, that sufficient infrastructure already exists, and that one size will never fit all.

There was some debate regarding appropriate JISC interventions in this space but there was some consensus that JISC could usefully work with bodies such as UCISA, SCONUL and the Research Councils to provide advice on policy and business cases illustrating the appropriate use of identifiers. Case studies and demonstrators that situate solutions in context, articulate specific workflows and promote good practice in managing identifiers would also be of considerable value.

I’ll post a second piece shortly summarising the breakout group that focused specifically on identifier requirements within the teaching and learning domain.

Orders from the Roundtable
http://blogs.cetis.org.uk/lmc/2009/11/13/orders-from-the-roundtable/ (Fri, 13 Nov 2009)

The CETIS conference always strives to address current and cutting edge issues in the domain of education technology, however the OER Technical Roundtable session was arguably more timely than most given that it coincided with a Guardian article on open courseware and open educational resources: Any student, any subject, anywhere.

The session was attended by over thirty participants representing a wide range of projects and initiatives, all of whom brought a plethora of technical issues to the table. These issues were ably captured by my colleague R. John Robertson using some recalcitrant mind-mapping software, which he is still fighting with. John has already posted the raw list of issues on his blog and on Slideshare.

As expected the range of issues was considerable but the following broad themes did emerge:

  • Tracking – metrics, Google Analytics, statistics to support advocacy.
  • Usability of repositories – deposit and the role of SWORD, discovery and use.
  • Streaming large media files.
  • Licensing and rights encoding.
  • Resource description – metadata and JorumOpen, portability and interoperability, tagging, automatic metadata generation, identification of derivative works, SEO, Google discovery, how do users search for resources?
  • Aggregators to manage distributed resources – metadata aggregation, resource aggregation, iTunes & iTunesU, OER broadcasting, batch upload, Flickr & Slideshare APIs.
  • Granularity – disaggregation and reuse, content packaging, dependencies between resources.

Participants voted with their feet and broke into groups to discuss tracking, resource description, aggregators and granularity. We’ll try to synthesise the outputs of these breakout groups in later blog posts, but in the meantime here’s a summary of the potential activities the groups identified that JISC and CETIS could take forward to benefit both the OER Programme and the community more generally:

  • Develop an agreed RSS / Atom profile for open educational resources (a toy sketch of what such an entry might carry follows this list).
  • Undertake research to analyse how teachers and learners actually search for educational resources. What terms do they search for, and what metadata is actually necessary to facilitate their searches? Synthesise data from projects, including Jorum and Steeple, that are already gathering information about search terms, techniques and characteristics.
  • Investigate how successful commercial systems such as Amazon and iTunes create and manage resource descriptions. What can we learn from them?
  • Open up access to analytics and anonymised user data. Encourage the sharing of Google Analytics data between projects.
  • Set up shared Piwik or Google Analytics accounts for each JISC programme.
  • Share and synthesise good practice in resource tracking. Record and disseminate case studies.
  • Identify requirements and minimum recommendations for resource tracking.
  • Fund mini-projects on esoteric approaches to tracking.
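
As flagged in the first recommendation above, here is a toy sketch of the kind of entry an agreed RSS / Atom profile for OERs might carry; the element choices are illustrative assumptions only, since agreeing the profile was exactly the work being proposed.

```python
# Build one hypothetical Atom entry for an OER (all values invented).
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}id").text = "http://example.org/oer/42"
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Introductory thermodynamics slides"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2009-11-13T00:00:00Z"
link = ET.SubElement(entry, f"{{{ATOM}}}link")
link.set("rel", "license")  # rights carried as a licence link, one open question
link.set("href", "http://creativecommons.org/licenses/by/2.0/uk/")

print(ET.tostring(entry, encoding="unicode"))
```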

We intend to discuss these recommendations with JISC in the not too distant future with a view to taking some of them forward. Hopefully we’ll be in a position to discuss progress in some if not all of these areas at #cetis10!
