Sheila Macneill » cetis-standards — Cetis blog
http://blogs.cetis.org.uk/sheilamacneill
Wed, 25 Sep 2013

Design bash 11 pre-event ponderings and questions
Thu, 08 Sep 2011
http://blogs.cetis.org.uk/sheilamacneill/2011/09/08/design-bash-11-pre-event-ponderings-and-questions/

In preparation for this year’s Design Bash, I’ve been thinking about some of the “big” questions around learning design and what we actually want to achieve on the day.

When we first ran a design bash, four years ago as part of the JISC Design for Learning Programme, we outlined three areas of activity/interoperability that we wanted to explore:
* System interoperability – looking at how the import and export of designs between systems can be facilitated;
* Sharing of designs – ascertaining the most effective way to export and share designs between systems;
* Describing designs – discovering the most useful representations of designs or patterns and whether they can be translated into runnable versions.

And to be fair, I think these are still valid and summarise the main areas where we still need more exploration and sharing – particularly the translation into runnable versions.

Over the past three years, there has been lots of progress in terms of the wider context of learning design in course and curriculum design contexts (i.e. through the JISC Curriculum Design and Delivery programmes) and also in terms of how best to support practitioners in engaging with, developing and reflecting on their practice. The evolution of the pedagogic planning tools from the Design for Learning programme into the current LDSE project is a key exemplar. We’ve also seen progress each year as a direct result of discussions at previous Design Bashes, e.g. the embedding of LAMS sequences into Cloudworks (see my summary post from last year’s event for more details).

The work of the Curriculum Design projects in looking at the bigger picture – the processes involved in formal curriculum design and approval – is making progress in bridging the gaps between formal course descriptions and their representations/manifestations in areas such as course handbooks and marketing information, and what actually happens at the point of delivery to students. There is a growing set of tools emerging to help provide a number of representations of the curriculum. We also have a more thorough understanding of the wider business processes involved in curriculum approval, as exemplified by this diagram from the PiP team, University of Strathclyde.

PiP Business Process workflow model

Given the multiple contexts we’re dealing with, how can we make the most of the day? Well, I’d like to try and move away from the complexity of the PiP diagram and concentrate a bit more on the “runtime” issue, i.e. transforming and importing representations/designs into systems which can then be used by students. It still takes a lot to beat the integration of design and runtime in LAMS, imho. So, I’d like to see some exploration of potential workflows around the systems represented, and how far inputs and outputs from each can actually go.

Based on some of the systems I know will be represented at the event, the diagram below makes a start at trying to illustrate some workflows we could potentially explore. N.B. This is a very simplified diagram and is meant as a starting point for discussion – it is not a complete picture.

Design Bash Workflows

So, for example, starting from some initial face-to-face activities such as the workshops being so successfully developed by the Viewpoints project, or the Accreditation! game from the SRC project at MMU, or the various OULDI activities, what would be the next step? Could you then transform the mostly paper-based information into a set of learning outcomes using the Co-genT tool? Could the file produced there then be imported into a learning design tool such as LAMS or LDSE or Compendium LD? And/or could the file be imported into the MUSKET tool and transformed into XCRI CAP – which could then be used for marketing purposes? Can the finished design then be imported into a course database and/or a runtime environment such as a VLE or LAMS?

Or alternatively, working from the starting point of a course database, e.g. SRC, where they have developed a set template for all courses: would using the learning-outcome-generating properties of the Co-genT tool enable staff to populate that database with “better” learning outcomes which are meaningful to the institution, teacher and student? (See this post for more information on the Co-genT toolkit.)

Or another option: what is the scope for integrating some of these tools/workflows with other “hybrid” runtime environments such as PebblePad?

These are just a few suggestions, and hopefully we will be able to start exploring some of them in more detail on the day. In the meantime if you have any thoughts/suggestions, I’d love to hear them.

IMS LTI and LIS in action webinar, 7 July
Thu, 23 Jun 2011
http://blogs.cetis.org.uk/sheilamacneill/2011/06/23/ims-lti-and-lis-in-action-webinar-7-july/

As part of our on-going support for the current JISC DVLE programme, we’re running a webinar on Thursday 7 July at 2pm.

http://emea92334157.adobeconnect.com/r9lacqlg5ub/

The session will feature demonstrations of a number of “real world” system integrations using the IMS LTI, Basic LTI and LIS specifications. These will be provided by Stephen Vickers from the University of Edinburgh and the CeLTIc project; Steve Coppin from the University of Essex and the EILE project; and Phil Nichols from Psydev.

The webinar will run for approximately 1.5 hours, and is free to attend. More information, including a link to registration is available from the CETIS website.

Understanding, creating and using learning outcomes
Thu, 23 Jun 2011
http://blogs.cetis.org.uk/sheilamacneill/2011/06/23/understanding-creating-and-using-learning-outcomes/

How do you write learning outcomes? Do you really ensure that they are meaningful to you, to your students, to your academic board? Do you sometimes cut and paste from other courses? Are they just something that has to be done – a bit opaque, but they do the job?

I suspect for most people involved in the development and teaching of courses, it’s a combination of all of the above. So, how can you ensure your learning outcomes are really engaging with all your key stakeholders?

Creating meaningful discussions around developing learning outcomes with employers was the starting point for the Co-genT project (funded through the JISC Lifelong Learning and Workforce Development Programme). Last week I attended a workshop where the project demonstrated the online toolkit they have developed. Initially designed to help foster meaningful and creative dialogue during co-curricular course developments with employers, as the tool has developed and others have started to use it, a range of uses and possibilities have emerged.

As well as fostering creative dialogue and common understanding, the team wanted to develop a way to evidence discussions for QA purposes which showed explicit mappings between the expert employer language and academic/pedagogic language and the eventual learning outcomes used in formal course documentation.

Early versions of the toolkit started with the inclusion of a number of relevant (and available) frameworks and vocabularies for level descriptors, from which the team extracted and contextualised key verbs into a list view.

List view of Co-genT toolkit

(Ongoing development hopes to include the import of competency frameworks and the use of XCRI CAP.)

Early feedback found that the list view was a bit off-putting so the developers created a cloud view.

Cloud view of Co-genT toolkit

and a Bloom’s view (based on Bloom’s Taxonomy).

Bloom’s view of Co-genT toolkit

By choosing verbs, the user is directed to a set of recognised learning outcomes and can start to build and customise these for their own specific purpose.

Co-genT learning outcomes

As the tool uses standard frameworks, early user feedback started to highlight its potential for other uses, such as: APEL; use as part of HEAR reporting; use with adult returners to education to help identify experience and skills; writing new learning outcomes; and an almost natural progression to creating learning designs. Another really interesting use of the toolkit has been with learners. A case study at the University of Bedfordshire has shown that students have found the toolkit very useful in helping them understand the differences in, and expectations of, learning outcomes at different levels. To paraphrase student feedback after using the tool: “I didn’t realise that evaluation at level 4 was different from evaluation at level 3”.

Unsurprisingly it was the learning design aspect that piqued my interest, and as the workshop progressed and we saw more examples of the toolkit in use, I could see it becoming another part of the curriculum design tools and workflow jigsaw.

A number of the Design projects now have revised curriculum documents, e.g. PALET and SRC, which clearly define the type of information that needs to be inputted. The design workshops the Viewpoints project is running are proving very successful in getting people started on the course (re)design process (and, like Co-genT, use key verbs as discussion prompts).

So, for example, I can see potential for course design teams, after taking part in a Viewpoints workshop, then using the Co-genT tool to progress those outputs into specific learning outcomes (validated by the frameworks in the toolkit and/or ones they want to add) and then completing institutional documentation. I could also see the toolkit being used in conjunction with a pedagogic planning tool such as Phoebe or the LDSE.

The Design projects could also play a useful role in helping to populate the toolkit with any competency or other recognised frameworks they are using. There could also be potential for using the toolkit as part of the development of XCRI to include more teaching and learning related information, by helping to identify common education fields through surfacing commonly used and recognised level descriptors and competencies, and potentially developing identifiers for them.

Although JISC funding is now at an end, the team are continuing to refine and develop the tool and are looking for feedback. You can find out more from the project website. Paul Bailey has also written an excellent summary of the workshop.

Technologies update from the Curriculum Design Programme
Thu, 21 Apr 2011
http://blogs.cetis.org.uk/sheilamacneill/2011/04/21/technologies-update-from-the-curriculum-design-programme/

We recently completed another round of PROD calls with the current JISC Curriculum Design projects. So, what developments are we seeing this time around?

Wordle of techs & standards used in Curriculum Design Prog, April 11

Well, in terms of baseline technologies, integrations and approaches, the majority of projects haven’t made any major deviations from what they originally planned. The range of technologies in use has grown slightly, mainly due to the addition of software being used for video capture (see my previous post on the use of video for capturing evidence and reflection).

The bubblegram below gives a view of the number of projects using a particular standard and/or technology.

XCRI is our front runner, with all 12 projects looking at it to a greater or lesser extent. But we are still some way off all 12 projects actually implementing the specification. From our discussions with the projects, there isn’t really a specific reason for them not implementing XCRI; it’s more that it isn’t a priority for them at the moment, whilst for others (SRC, Predict, Co-educate) it is firmly embedded in their processes. Some projects would like the spec to be more extensive than it currently is, which we have known for a while, and the XCRI team are making inroads into further development, particularly with its inclusion in the European MLO (Metadata for Learning Opportunities) developments. As with many education-specific standards/specifications, unless there is a very big carrot (or stick), widespread adoption and uptake is sporadic, however logical the argument for using the spec/standard. On the plus side, most are confident that they could implement the spec, and we know from the XCRI mini-projects that there are no major technical difficulties in implementation.
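To make the implementation question a little more concrete, here is a rough sketch, in Python using only the standard library, of the kind of provider/course advertisement fragment XCRI CAP describes. The element names and namespace here are a loose approximation for illustration only, not the normative schema; a real implementation should follow the spec itself.

```python
# Illustrative sketch only: a minimal XCRI-CAP-style course fragment built
# with the standard library. Element/namespace names approximate the spec.
import xml.etree.ElementTree as ET

XCRI = "http://xcri.org/profiles/catalog"  # assumed namespace, for flavour
ET.register_namespace("", XCRI)

def course_fragment(provider_name, course_title, description):
    """Build a tiny provider/course tree of the kind XCRI CAP describes."""
    catalog = ET.Element(f"{{{XCRI}}}catalog")
    provider = ET.SubElement(catalog, f"{{{XCRI}}}provider")
    ET.SubElement(provider, f"{{{XCRI}}}title").text = provider_name
    course = ET.SubElement(provider, f"{{{XCRI}}}course")
    ET.SubElement(course, f"{{{XCRI}}}title").text = course_title
    ET.SubElement(course, f"{{{XCRI}}}description").text = description
    return ET.tostring(catalog, encoding="unicode")

xml_text = course_fragment("Example University", "Introduction to Design",
                           "A first course in learning design.")
print(xml_text)
```

The point the mini-projects made still holds: the data shapes involved are small and mechanical, which is why the barriers to adoption are organisational rather than technical.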

Modelling course approval processes has been central to the programme and unsurprisingly there has been much interest in, and use of, formal modelling languages such as BPMN and ArchiMate. Indeed, nearly all the projects commented on how useful having models, however complex, has been in engaging stakeholders at all levels within institutions. The “myth busting” power of models – i.e. showing what actually happens, which is not necessarily how you believe things happen – was one anecdote that made me smile and I’m sure resonates in many institutions/projects. There is also growing use of the Archi tool for modelling, and growing sharing of experience between a number of projects and the EA (Enterprise Architecture) group. As Gill has written, there are a number of parallels between EA and Curriculum Design.

Unsurprisingly for projects of this length (4 years), and perhaps heightened by “the current climate”, a number of the projects have been (or still are) in the process of fairly major institutional senior staff changes. This has had some impact on purchasing decisions regarding potential institution-wide systems, which are generally out of the control of the projects. There is also the issue of loss of academic champions for projects. This is generally manifesting itself in projects working on other areas, and lots of juggling by project managers. In this respect the programme clusters have also been effective, with representatives from projects presenting to senior management teams in other institutions. Some of the more agile development processes teams have been using have also helped to allow teams to be more flexible in their approaches to development work.

One very practical development which is starting to emerge from work on rationalising course databases is the automatic creation of course instances in VLEs. A common issue in many institutions is that there is no version control for courses within VLEs, and it’s very common for staff to just create a new instance of a course every year and not delete older instances, which, apart from anything else, can add up to quite a bit of server space. Projects such as SRC are now at the stage where their new (and approved) course templates populate the course database, which then triggers the automatic creation of a course in the VLE. Predict and UG-Flex have similar systems. The UG-Flex team have also done some additional integration with their admissions systems so that students can only register for courses which are actually running during their enrolment dates.
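The trigger pattern described above can be sketched roughly as follows. Everything here – class names, course codes, the callback wiring – is invented for illustration and does not represent any project's actual system; it simply shows an approval event in the course database creating at most one VLE course instance per academic year.

```python
# Hypothetical sketch of the integration pattern: an approved course template
# landing in the course database fires a callback that creates a VLE course
# instance for the current year, only if one does not already exist.
class CourseDatabase:
    def __init__(self):
        self.approved = []      # approved course templates
        self.listeners = []     # callbacks fired on approval

    def approve(self, course_code, title):
        record = {"code": course_code, "title": title}
        self.approved.append(record)
        for notify in self.listeners:
            notify(record)

class VLE:
    def __init__(self):
        self.instances = {}     # (code, year) -> course instance

    def ensure_instance(self, record, year):
        key = (record["code"], year)
        if key not in self.instances:   # version control: no yearly duplicates
            self.instances[key] = {"title": record["title"], "year": year}
        return self.instances[key]

db, vle = CourseDatabase(), VLE()
db.listeners.append(lambda rec: vle.ensure_instance(rec, year="2011/12"))
db.approve("ED101", "Curriculum Design Basics")
db.approve("ED101", "Curriculum Design Basics")  # re-approval: no duplicate
print(len(vle.instances))  # prints 1: one instance for ED101 in 2011/12
```

The "ensure" check is what addresses the server-space problem: repeated approvals or yearly rollovers reuse the existing instance rather than silently stacking up copies.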

Sharepoint is continuing to show a presence. Again there are a number of different approaches to using it. For example, in the T-Spark project, the major workflow developments will be facilitated through Sharepoint. They now have a part-time Sharepoint developer in place who is working with the team and central IT support. You can find out more at their development blog. Sharepoint also plays a significant role in the PiP project; however, the team are also looking at integrations with “bigger” systems such as Oracle, and are developing a number of UI interfaces and forms which integrate with Sharepoint (and potentially Oracle). As most institutions in the UK have some flavour of Sharepoint deployed, there is significant interest in approaches to utilising it most effectively. There are some justifiable concerns relating to its use for document and data management, the latter being seen as not one of its strengths.

As ever, it is difficult to give a concise and comprehensive view of such a complex set of projects, which are all taking a slightly different approach to their use of technology and the methods they use for system integration. However, many projects have said that the umbrella of course design has allowed them to discuss and develop the use of institutional administration and teaching and learning systems far more effectively than they have been able to previously. A growing number of resources from the projects are available from The Design Studio, and you can view all the information we have gathered from the projects in our PROD database.

What’s in a name?
Thu, 21 Oct 2010
http://blogs.cetis.org.uk/sheilamacneill/2010/10/21/whats-in-a-name/

Names, they’re funny things, aren’t they? Particularly project ones. I’ve never really been great at coming up with project names, or clever acronyms. However, remembering what acronyms stand for is almost a prerequisite for anyone who works for CETIS and has anything to do with JISC :-). The issue of meaningful names/acronyms came up yesterday at the session the CCLiP project ran at the Festival of The Assemblies meeting in Oxford.

Working with 11 partners from the education and cultural sectors, the CCLiP project has been developing a CPD portal using XCRI as a common data standard. The experience of working with such a cross-section of organisations has led members of the team to be involved in a benefits realisation project, the BR XCRI Knowledge Base. This project is investigating ways to, for want of a better word, sell the benefits of using XCRI. However, one of the major challenges is actually explaining what XCRI is to key (more often than not, non-technical) staff. Of course, the obvious answer to some is that it stands for eXchanging Course Related Information, and that pretty much sums it up. But it’s not exactly something that naturally rolls off the tongue and encapsulates its potential uses, is it? So, in terms of wider benefits realisation, how do you explain the potential of XCRI and encourage wide adoption?

Of course, this is far from a unique problem, particularly in the standards world. Standards tend not to have the most exciting of names, and of course a lot of actual end users never need to know what they’re called either. However, at this stage in the XCRI life-cycle, there is a need to explain it to both the technical and the non-technically minded. And of course that is happening, with case studies etc. being developed.

During a lively and good-natured discussion, participants in the session discussed the possibility of changing the name from XCRI to “opportunity knocks” as a way to encapsulate the potential benefits that had been demonstrated to us by the CCLiP team, and create a bit of curiosity and interest. I’m not sure if that would get a very positive clappometer response from certain circles, but I’d be interested in any thoughts you may have.

2nd Linked Data Meetup London
Fri, 26 Feb 2010
http://blogs.cetis.org.uk/sheilamacneill/2010/02/26/2nd-linked-data-meetup-london/

Co-located with dev8D, the JISC Developer Days event, this week I, along with about 150 others, gathered at UCL for the 2nd Linked Data Meetup London.

Over the past year or so the concept and use of linked data seems to be gaining more and more traction. At CETIS we’ve been skirting around the edges of semantic technologies for some time – trying to explore realisation of the vision, particularly for the teaching and learning community – most recently with our semantic technologies working group. Lorna’s blog post from the last meeting of the group summarised some potential activity areas we could be involved in.

The day started with a short presentation from Tom Heath, Talis, who set the scene by giving an overview of the linked data view of the web. He described it as a move away from the document-centric view to a more exploratory one – the web of things. These “things” are commonly described, identified and shared. He outlined 10 tasks with potential for linked data and put forward a case for how linked data could enhance each one. E.g. locating: just now we can find a place, say Aberdeen, but using linked data allows us to begin to disambiguate the concept of Aberdeen for our own context(s). Also sharing content: with a linked data approach, we just need to be able to share and link to (persistent) identifiers and not worry about how we can move content around. According to Tom, the document-centric metaphor of the web hides information in documents and limits our imagination in terms of what we could do with, and how we could use, that information.
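The Aberdeen example can be illustrated with a toy triple store in Python. The DBpedia-style URIs below are plausible identifiers used for flavour, and the "store" is just a set of tuples; the point is only that two distinct URIs can share one human-readable label, which is what makes disambiguation possible.

```python
# Toy sketch of the "web of things" idea: the same label "Aberdeen" can
# denote different things, and URIs keep them apart.
triples = {
    ("http://dbpedia.org/resource/Aberdeen", "rdfs:label", "Aberdeen"),
    ("http://dbpedia.org/resource/Aberdeen", "is_in", "Scotland"),
    ("http://dbpedia.org/resource/Aberdeen,_Washington", "rdfs:label", "Aberdeen"),
    ("http://dbpedia.org/resource/Aberdeen,_Washington", "is_in", "United States"),
}

def things_labelled(label):
    """All distinct resources sharing a human-readable label."""
    return {s for s, p, o in triples if p == "rdfs:label" and o == label}

print(sorted(things_labelled("Aberdeen")))  # two distinct URIs, one label
```

A document-centric search for "Aberdeen" returns pages; a query over identifiers like this returns the things themselves, which you can then filter by context (say, `is_in Scotland`).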

The next presentation was from Tom Scott, BBC, who illustrated some key linked data concepts being exploited by the BBC’s Wildlife Finder website. The site allows people to make their own “wildlife journeys”, by allowing them to explore the natural world in their own context. It also allows the BBC to, in the nicest possible way, “pimp” their own programme archives. Almost all the data on the site comes from other sources, either on the BBC or the wider web (e.g. WWF, Wikipedia). As well as using Wikipedia, their editorial team are feeding back into the Wikipedia knowledge base – a virtuous circle of information sharing. This worked well in this instance and subject area, but I have a feeling it might not always be the case. I know I’ve had my run-ins with Wikipedia editors over content.

They have used DBpedia as a controlled vocabulary. However, as it only provides identifiers and no structure, they have built their own graph to link content and concepts together. There should be RDF available from their site now – it was going live yesterday. Their ontology is available online.

Next we had John Sheridan and Jeni Tennison from data.gov.uk. They very aptly conceptualised their presentation around a wild-west pioneer theme. They took us through how they are staking their claim, laying tracks for others to follow, and outlined the civil wars they don’t want to fight. As they pointed out, we’re all pioneers in this area and at early stages of development/deployment.

The data.gov.uk project wants to:
* develop social capital and improve delivery of public services;
* make progress and leave a legacy for the future;
* use open standards;
* look at approaches to publishing data in a distributed way.

Like most people (and from my perspective, the teaching and learning community in particular) they are looking for, to continue with the western theme, the “Winchester ’73” of linked data. Just now they are investigating creating (simple) design patterns for linked data publishing, to see what can be easily reproduced. I really liked their “brutally pragmatic and practical” approach, particularly in terms of developing simple patterns which can be re-tooled in order to allow the “rich seams” of government data to be used, e.g. tools to create linked data from Excel. Provenance and trust are recognised as being critical, and they are working with the W3C provenance group. Jeni also pointed out that data needs to be easy to query and process – we all neglect usability of data at our peril. There was quite a bit of discussion about trust, and John emphasised that the data.gov.uk initiative was about public, not personal, data.
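As a flavour of the "simple, re-toolable pattern" idea, e.g. turning spreadsheet-style data into linked data, here is a minimal sketch that mints a URI per row of a table and emits a triple per cell. The column names and base URI are invented for illustration; real publishing patterns would of course also need proper vocabularies and provenance.

```python
# Sketch: a re-usable pattern for turning a plain table into triples by
# minting one URI per row. All identifiers here are hypothetical.
import csv, io

rows = io.StringIO(
    "dept,name,budget\n"
    "edu,Department for Education,1000\n"
    "hlt,Department of Health,2000\n"
)

BASE = "http://example.gov/id/dept/"   # hypothetical base URI for row subjects

def table_to_triples(fileobj, key_column):
    triples = []
    for row in csv.DictReader(fileobj):
        subject = BASE + row[key_column]          # mint a URI for the row
        for column, value in row.items():
            if column != key_column:              # each other cell -> a triple
                triples.append((subject, column, value))
    return triples

triples_out = table_to_triples(rows, "dept")
for t in triples_out:
    print(t)
```

The pattern is deliberately dumb: once the row-to-URI rule is agreed, the same few lines can be re-tooled against any of the "rich seams" of tabular government data.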

Lin Clark then gave an overview of the RDF capabilities of the Drupal content management system. For example, it has default RDF settings and FOAF capability built in. The latest version now has an RDF mapping user interface and can be set up to offer SPARQL endpoints. A nice example of the “out of the box” functionality which is needed for general uptake of linked data principles.

The morning finished with a panel session where some of the key issues raised through the morning presentations were discussed in a bit more depth. In terms of technical barriers, Ian Davies (CEO, Talis) said that there needs to be a mind shift in application development, from one centralised database to multiple apps accessing multiple data stores. But as Tom Scott pointed out, if you start with things people care about and create URIs for them, then a linked approach is much more intuitive – it is “insanely easy to convert HTML into RDF”. It was generally agreed that the identifying of real-world “things”, and the modelling and linking of data, was the really hard bit. After that, publishing is relatively straightforward.
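As a (much simplified) flavour of the point about converting HTML into RDF: if the "things" are already marked up in the page with RDFa-style `about`/`property` attributes, pulling out triples is a small parsing job. This toy parser handles only the simplest case and is not a real RDFa processor; the markup and URI are invented for illustration.

```python
# Toy RDFa-flavoured extractor: reads about/property attributes from HTML
# and emits (subject, predicate, object) triples. Simplest case only.
from html.parser import HTMLParser

class TinyRDFa(HTMLParser):
    def __init__(self):
        super().__init__()
        self.subject = None     # current subject URI, from an `about` attribute
        self.pending = None     # property name awaiting its text value
        self.triples = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "about" in attrs:
            self.subject = attrs["about"]
        if "property" in attrs:
            self.pending = attrs["property"]

    def handle_data(self, data):
        if self.pending and self.subject and data.strip():
            self.triples.append((self.subject, self.pending, data.strip()))
            self.pending = None

html = '<div about="http://example.org/lion"><span property="name">Lion</span></div>'
parser = TinyRDFa()
parser.feed(html)
print(parser.triples)
```

Which is the panel's division of labour in miniature: deciding that the lion deserves a URI is the hard, human part; the extraction and publishing is the easy part.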

The afternoon consisted of a number of themed workshops which were mainly discussions around the issues people are grappling with just now. For me the human/cultural issues are crucial, particularly provenance and trust. If linked data is to gain more traction in any kind of organisation, we need to foster a “good data in, good data out” philosophy and move away from the fear of exposing data. We also need to ensure that people understand that taking a linked data approach doesn’t automatically presume that you are going to make that data available outwith your organisation; it can help with internal information sharing/knowledge building too. Of course, what we need are more killer examples, or Winchester ’73s. Hopefully over the past couple of days at dev8D progress will have been made towards those killer apps, or at least some lethal bullets.

The meet up was a great opportunity to share experiences with people from a range of sectors about their ideas and approaches to linked data. My colleague Wilbert Kraan has also blogged about his experiments with some of our data about JISC funded projects.

For an overview of the current situation in UK HE, it was timely that Paul Miller’s Linked Data Horizon Scan for JISC was published on Wednesday too.

Some thoughts on the IMS Quarterly meeting
Sun, 21 Sep 2008
http://blogs.cetis.org.uk/sheilamacneill/2008/09/21/some-thoughts-on-the-ims-quarterly-meeting/

I’ve spent most of this week at the IMS Quarterly meeting in Birmingham and thought I’d share a few initial reflections. In contrast to most quarterly meetings this was an open event, which had its benefits but some drawbacks (imho) too.

On the up side, it was great to see so many people at an IMS meeting. I hadn’t attended a quarterly meeting for over a year, so it was great to see old faces, but heartening to see so many new ones too. There did seem to be a real sense of momentum – particularly with regard to the Common Cartridge specification. The real drive for this seems to be coming from the K-12 CC working group, who are making demands to extend the profile of the spec from its very limited initial version. They are pushing for major extensions to the QTI profile (limited to six question types at the moment) to be included, and are also looking to Learning Design as a way to provide curriculum mapping and lesson planning in cartridges.

The schools sector on the whole does seem to be more pragmatic and more focused than our rather more (dare I say self-indulgent), mainly research-focused HE community. There also seems to be concurrent rapid development (in the context of spec development timescales) in the Tools Interoperability spec, with Dr Chuck and his team’s developments in “simple TI” (you can watch the video here).

On the down side, the advertised plugfest was in reality more of a “presentationfest”, which although interesting in parts wasn’t really what I had expected. I was hoping to see more live demos and interoperability testing.

Thursday was billed as a “Summit on Interoperability: Now and Next”. Maybe it was just because I was presentation weary by that point, but I think we missed a bit of an opportunity to have more discussion – particularly in the first half of the day.

It’s nigh on impossible to explain the complexity of the Learning Design specification in half-hour slots – as Dai Griffiths pointed out in his elevator pitch, “Learning Design is a complex artefact”. Although Dai and Paul Sharples from the ReCourse team did a valiant job, as did Fabrizio Giongine from Giunti Labs with his Prolix LD demo, I can’t help thinking that what the community, and in turn perhaps IMS, should be concentrating on is developing a new, robust set of use cases for the specification. Having some really tangible designs rooted in real practice would (imho) make the demoing of tools much more accessible, as would starting demos from the actual “runnable” view of the design instead of the (complex) editor view. Hopefully some of the resources from the JISC D4L programme can provide some starting points for that.

The strapline for Common Cartridge is “freeing the content”, and in the afternoon the demos from David Davies (University of Warwick) on the use of repositories and RSS in teaching, followed by Scott Wilson and Sarah Currier demoing some applications of the SWORD specification for publishing resources to Intralibrary through the Feedforward tool, illustrated exactly that. David gave a similar presentation at a SIG meeting last year, and I continue to be impressed by the work he and his colleagues are doing using RSS. SWORD also continues to impress with every implementation I see.

I hope that IMS are able to build on the new contacts and offers of contributions and collaborations that arose over the week, and that they organise some more open meetings in the future. Of course the real highlight of the week was learning to uʍop ǝpısdn ǝʇıɹʍ:-)

Opening up the IMS
Thu, 12 Jun 2008
http://blogs.cetis.org.uk/sheilamacneill/2008/06/12/opening-up-the-ims/

Via Stephen Downes’ OLDaily I came across this post by Michael Feldstein about his recent experiences in IMS, and around the contradiction of IMS being a subscription organisation producing so-called open standards. This issue has been highlighted over the last two years or so with the changes in access to public versions of specs.

Michael puts forward three proposals to help IMS in becoming more open:

    “Eliminate altogether the distinction between the members-only CM/DN draft and the one available to the general public. IMS members who want an early-adopter advantage should join the working groups.”

    “Create a clear policy that individual working groups are free to release public general updates and solicit public input on specific issues prior to release of the public draft as they see fit.”

    “Begin a conversation with the IMS membership about the possibility of opening up the working group discussion areas and document libraries to the general public on a read-only basis.”

Getting sustained involvement in any kind of specification process is very difficult. I know I wouldn’t have much to do with IMS unless I was paid to do it :-) Thankfully here in the UK, JISC has recognised that having an organisation like CETIS can have an impact on standards development and uptake. But the world is changing, particularly around the means of, and access to, educational content. Who needs standards-compliant content when you can just rip and mix off the web, as the edupunks have been showing us over the last few weeks? I don’t think they are at all “bovvered” about needing, for example, to convert their videos to Common Cartridges when they can just stick them onto YouTube.

Here at CETIS we have been working closely with IMS to allow JISC projects access to specifications, but the suggestions Michael makes would certainly help broaden the reach of the organisation and hopefully support the development of useful, relevant (international) standards.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2008/06/12/opening-up-the-ims/feed/ 0
IMS Announces Pilot Project Exploring Creative Commons Licensing of Interoperability Specifications http://blogs.cetis.org.uk/sheilamacneill/2008/03/03/ims-announces-pilot-project-exploring-creative-commons-licensing-of-interoperability-specifications/ http://blogs.cetis.org.uk/sheilamacneill/2008/03/03/ims-announces-pilot-project-exploring-creative-commons-licensing-of-interoperability-specifications/#comments Mon, 03 Mar 2008 15:15:23 +0000 http://blogs.cetis.org.uk/sheilamacneill/2008/03/03/ims-announces-pilot-project-exploring-creative-commons-licensing-of-interoperability-specifications/ IMS (GLC) have just announced plans to initiate a pilot project in the distribution of interoperability specifications under a form of Creative Commons license. According to the press release, “IMS GLC has conceptualized a novel approach that may be applicable to many standards organizations.”

I’m not exactly sure just what this novel approach is, and even less so how it would actually work, but no doubt we will be hearing more in the coming months. Any move towards more openness in the standards agenda can only be a move in the right direction.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2008/03/03/ims-announces-pilot-project-exploring-creative-commons-licensing-of-interoperability-specifications/feed/ 0
Assessment, Packaging – where, why and what is going on? http://blogs.cetis.org.uk/sheilamacneill/2008/02/26/joint-assessment-and-ec-sig-meeting-19-february-cambridge/ http://blogs.cetis.org.uk/sheilamacneill/2008/02/26/joint-assessment-and-ec-sig-meeting-19-february-cambridge/#comments Tue, 26 Feb 2008 15:01:42 +0000 http://blogs.cetis.org.uk/sheilamacneill/2008/02/26/joint-assessment-and-ec-sig-meeting-19-february-cambridge/ Steve Lay (CARET, University of Cambridge) hosted the joint Assessment and EC SIG meeting at the University of Cambridge last week. The day provided an opportunity to get an update on what is happening in the specification world, particularly in the content packaging and assessment areas, and to compare that with some real-world implementations, including a key interest – IMS Common Cartridge.

Packaging and QTI are intrinsically linked – to share and move questions/items they need to be packaged, preferably in an interoperable format :-) However, despite recent developments in both the IMS QTI and CP specifications, due to changes in the structure of IMS working groups there have been no public releases of either specification for well over a year. This is mainly due to the new requirement for at least two working implementations of a specification before public release. In terms of interoperability, general uptake and usability this does seem like a perfectly sensible change. But, as ever, life is never quite that simple.

IMS Common Cartridge has come along and turned into something of a flag-bearer for IMS. This has meant that an awful lot of effort from some of the ‘big’ (or perhaps ‘active’ would be more accurate) members of IMS has been concentrated on the development of CC rather than on pushing implementation of CP 1.2 or the latest version of QTI. A decision was taken early in the development of CC to use older, more widely implemented versions of the specifications rather than the latest versions. (It should be noted that this looks likely to change as more demands are made on CC which only the newer versions of the specs can meet.)

So, the day was also an opportunity to reflect on what the current state of play is with IMS and other specification bodies, and to discuss with the community what areas they feel are most important for CETIS to be engaging in. Profiling did surface as something that the JISC elearning development community – particularly in the assessment domain – should be developing further.

In terms of specification updates, our host Steve Lay presented a brief history of QTI and future development plans; Adam Cooper (CETIS) gave a round-up from the IMS Quarterly meeting held the week before; and Wilbert Kraan (CETIS) gave a round-up of packaging developments, including non-IMS initiatives such as OAI-ORE and IEEE RAMLET. On the implementation side of things, Ross MacKenzie and Sarah Wood (OU) took us through their experiences of developing common cartridges for the OpenLearn project, and Niall Barr (NB Software) gave an overview of integrating QTI and Common Cartridge. There was also a very stimulating presentation from Linn van der Zanden (SQA) on a pilot project using wikis and blogs as assessment tools.

Presentations/slidecasts (including as much discussion as was audible) and MP3s are available from the wiki, so if you want to get up to speed on what is happening in the wonderful world of specifications, have a listen. There is also an excellent review of the day over on Rowin’s blog.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2008/02/26/joint-assessment-and-ec-sig-meeting-19-february-cambridge/feed/ 1
LETSI update http://blogs.cetis.org.uk/sheilamacneill/2008/02/14/letsi-update/ http://blogs.cetis.org.uk/sheilamacneill/2008/02/14/letsi-update/#comments Thu, 14 Feb 2008 14:01:53 +0000 http://blogs.cetis.org.uk/sheilamacneill/2008/02/14/letsi-update/ Alongside the AICC meetings held last week in California, there was an ADL/AICC/LETSI Content Aggregation Workshop. Minutes from the meeting are available from the LETSI wiki. There seems to have been a fairly general discussion covering a range of packaging formats, from IMS CP to MPEG-21 and DITA.

As we have reported previously, ADL would like to see a transition to a community-driven version of SCORM, called core SCORM, by 2009/10. This meeting brought together some of the key players, although it looks like there was no official IMS representation. It does seem that things are still very much at the discussion stage and there is still a way to go before consensus on which de jure standards core SCORM will include. There is another LETSI meeting in Korea in March, before the SC36 Plenary Meeting. One positive suggestion that appears at the end of the minutes is the development of a white paper with a clear conclusion or “call to action”. Until then it’s still difficult to see what impact this initiative will have.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2008/02/14/letsi-update/feed/ 0
IMS announces development of community testing tool for Common Cartridge http://blogs.cetis.org.uk/sheilamacneill/2008/01/23/ims-announces-development-of-community-testing-tool-for-common-cartridge/ http://blogs.cetis.org.uk/sheilamacneill/2008/01/23/ims-announces-development-of-community-testing-tool-for-common-cartridge/#comments Wed, 23 Jan 2008 10:29:34 +0000 http://blogs.cetis.org.uk/sheilamacneill/2008/01/23/ims-announces-development-of-community-testing-tool-for-common-cartridge/ The IMS Global Learning Consortium have announced the launch of a new project that will produce a community source testing tool for the Common Cartridge (CC) format. JISC along with ANGEL Learning, eCollege, McGraw-Hill, Microsoft, The Open University United Kingdom, Pearson Education and Ucompass.com have agreed to provide initial funding for the project.

“A number of organizations have recognized the community benefit in having a common format for both publisher-sourced materials and in-house production by learning institutions,” said Rob Abel of IMS. “I’m delighted to announce that such is the level of commitment to this goal, nine organizations have already stepped forward to fund and participate in a project to develop a cartridge testing tool that will be distributed free-of-charge by the CC Alliance.”

More information about the Cartridge Alliance is available @ http://www.imsglobal.org/cc/alliance.html

We will keep you informed of developments with this tool, and the joint Assessment and EC SIG meeting on 19th February will include presentations from a number of CC implementers, including the OU, as well as a community update from CETIS from the IMS quarterly meeting which takes place the week before.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2008/01/23/ims-announces-development-of-community-testing-tool-for-common-cartridge/feed/ 0
Design Bash: moving towards learning design interoperability http://blogs.cetis.org.uk/sheilamacneill/2007/10/26/design-bash-moving-towards-learning-design-interoperability/ http://blogs.cetis.org.uk/sheilamacneill/2007/10/26/design-bash-moving-towards-learning-design-interoperability/#comments Fri, 26 Oct 2007 08:40:17 +0000 http://blogs.cetis.org.uk/sheilamacneill/2007/10/26/design-bash-moving-towards-learning-design-interoperability/ Question: How do you get a group of projects with a common overarching goal, but disparate outputs, to share those outputs? Answer: Hold a design bash. . .

Codebashes and CETIS are quite synonymous now, and they have proved to be an effective way for our community to feed back into specification bodies and increase our own knowledge of how specs actually need to be implemented to allow interoperability. So we decided that, with a few modifications, the general codebash approach would be a great way for the current JISC Design for Learning Programme projects to share their outputs and start to get to grips with the many levels of interoperability the varied outputs of the programme present.

To prepare for the day the projects were asked to submit resources which fitted into four broad categories (tools, guidelines/resources, inspirational designs and runnable designs). These resources were tagged in the programme’s del.icio.us site and, using the DFL SUM (see Wilbert’s blog for more information on that), we were able to aggregate resources and use RSS feeds to pull them into the programme wiki. Over 60 resources were submitted, offering a great snapshot of the huge level of activity within the programme.

One of the main differences between the design bash and the more established codebashes was the fact that there wasn’t really much code to bash. So we outlined three broad areas of interoperability to help begin conversations between projects. These were:
* conceptual interoperability: the two designs or design systems won’t work together because they make very different assumptions about the learning process, or are aimed at different parts of the process;
* semantic interoperability: the two designs or design systems won’t work together because one provides or expects functionality that the other doesn’t have, e.g. a learning design that calls for a shared whiteboard presented to a design system that doesn’t have such a service;
* syntactic interoperability: the two designs or design systems won’t work together because required or expected functionality is expressed in a format that is not understood by the other.

So did it work? Well, in a word, yes. As the programme was exploring general issues around designing for learning, and not just looking at, for example, the IMS LD specification, there wasn’t as much ‘hard’ interoperability evidence as one would expect from a codebash. However, there were many levels of discussion between projects. It would be nigh on impossible to convey the depth and range of discussions in this article, but using the three broad categories above I’ll try and summarise some of the emerging issues.

In terms of conceptual interoperability, one of the main discussion points was the role of context in designing for learning. Was the influence coming from the bottom up or the top down? This has a clear effect on the way projects have been working, the tools they are using and the outcomes produced. In some cases the tools didn’t really fit with the pedagogical concepts of some projects, which led to a discussion around the need to start facilitating student design tools – what would these tools look like, and how would they work?

In terms of semantic interoperability there were wide ranging discussions around the levels of granularity of designs from the self contained learning object level to the issues of extending and embellishing designs created in LAMS by using IMS LD and tools such as Reload and SLeD.

At the syntactic level there were a number of discussions, not just around the more obvious interoperability issues between systems such as LAMS and Reload, but also around the use of wikis and how best to access and share resources. It was good to hear that some of the projects are now thinking of looking at the programme SUM as a possible way to access and share resources. There was also a lot of discussion around the incorporation of course description specifications such as XCRI into the pedagogic planner tools.

Overall a number of key issues were teased out over the day, with lots of firm commitment shown by all the projects to continue to work together and increase all levels of interoperability. There was also the acknowledgement that these discussions cannot take place in a vacuum and we need to connect with the rest of the learning design community. This is something which the CETIS support project will continue during the coming months.

More information about the Design Bash and the programme in general can be found on the programme support wiki.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2007/10/26/design-bash-moving-towards-learning-design-interoperability/feed/ 1
Joint MDR and EC SIG meeting, 29 June http://blogs.cetis.org.uk/sheilamacneill/2007/07/03/joint-mdr-and-ec-sig-meeting-29-june/ http://blogs.cetis.org.uk/sheilamacneill/2007/07/03/joint-mdr-and-ec-sig-meeting-29-june/#comments Tue, 03 Jul 2007 14:28:18 +0000 http://blogs.cetis.org.uk/sheilamacneill/2007/07/03/joint-mdr-and-ec-sig-meeting-29-june/ The MDR and EC SIGs held a joint meeting on 29 June at the University of Strathclyde. The focus of the meeting was on innovative ways of creating, storing, sharing and using content.

As this was a joint meeting, the presenters were a mix of people and projects working at the more formal ‘coal face’ repository end of things and those working more with staff and students creating content using more informal technologies.

The day got off to a great start with David Davies (IVIMEDS, University of Warwick), who gave us an overview of the way he is starting to mash up content from various sources (including their formal repository) to create new and dynamic resources for students – a process which he described as being potentially both transformative and disruptive, for everyone involved. David gave a really practical insight into the way he has been combining RSS feeds with Yahoo Pipes to create resources which are directly embedded into the institution’s learning environment. Using this type of technology, staff are able to share content in multiple ways with students, without the students having to access the learning object repository. David also strongly advocated the use of offline aggregators, describing these as personal repositories. As well as using RSS feeds from their repository and various relevant journals, Warwick are increasingly creating and using podcasts. David described how a podcast is basically an RSS feed with binary enclosures, which means it can carry much more than just audio – at Warwick they are creating podcasts which include Flash animations, so again providing another way for students to access content.
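To make the “podcast is just an RSS feed with binary enclosures” point concrete, here is a minimal sketch in Python. The feed, item titles and URLs are invented for illustration; the point is simply that any binary payload can ride along in an `<enclosure>` element, not only audio.

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical RSS 2.0 feed: one audio enclosure, one Flash enclosure.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example course feed</title>
    <item>
      <title>Week 1 lecture</title>
      <enclosure url="http://example.ac.uk/media/week1.mp3"
                 length="1048576" type="audio/mpeg"/>
    </item>
    <item>
      <title>Heart animation</title>
      <enclosure url="http://example.ac.uk/media/heart.swf"
                 length="524288" type="application/x-shockwave-flash"/>
    </item>
  </channel>
</rss>"""

def enclosures(feed_xml):
    """Return (title, url, MIME type) for every enclosure in the feed."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        enc = item.find("enclosure")
        if enc is not None:
            results.append((item.findtext("title"), enc.get("url"), enc.get("type")))
    return results

for title, url, mime in enclosures(FEED):
    print(f"{title}: {url} ({mime})")
```

An offline aggregator doing this over many feeds is, in effect, the “personal repository” David described.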

Of course, the system David was describing is quite mature, with stable workflow processes and agreed metadata. However, it did show the great potential for ‘remixing’ content within an academic environment, and how more informal interfaces can interact with formal repositories to create dynamic, personalised content. A real inspiration if, like me, you’ve been meaning to do something with Pipes but just haven’t quite got round to it yet :-)

Charles Duncan (Intrallect Ltd) then presented the SRU (Search and Retrieve via URL) tool they have developed as part of the CD-LOR project. SRU provides a way to embed a simple query directly into a web page. The tool was developed to meet a use case from CD-LOR which would allow someone (staff or student) to search a repository without actually having to ‘join’ it (or become a member of that community) – a sort of try before you buy. Charles gave an overview of the history of the development of SRU (and SRW) and then a demonstration of creating queries with the tool and searching a number of repositories. The tool retrieves XML metadata records which can then be transformed (generally using XSLT) and styled so that the results are viewable on a web page. Limitations of the tool include the fact that it is restricted to searching a single repository, and there are a number of security issues surrounding XSL transforms from repositories. However, this approach does provide another way to access content (or at least the metadata about content) stored in repositories. As this was developed as part of a JISC project, the tool is open source and is available on SourceForge.
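The whole appeal of SRU here is that the query lives entirely in the URL, so it can be dropped into any web page as a plain link. As a rough sketch (the repository endpoint is hypothetical; the parameter names are the standard SRU 1.1 searchRetrieve ones):

```python
from urllib.parse import urlencode

def sru_search_url(base, query, max_records=10, version="1.1"):
    """Build an SRU searchRetrieve URL. Because the whole query is
    expressed in the URL, it can be embedded directly in a web page."""
    params = {
        "operation": "searchRetrieve",
        "version": version,
        "query": query,          # a CQL query string
        "maximumRecords": max_records,
    }
    return base + "?" + urlencode(params)

# Hypothetical repository endpoint, for illustration only.
url = sru_search_url("http://repository.example.ac.uk/sru",
                     'dc.title = "learning design"')
print(url)
```

The XML records the endpoint returns would then be run through an XSLT transform and styled for display, as Charles demonstrated.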

Before lunch we had a short demonstration from Sue Manuel (University of Loughborough) of the PEDESTAL project. Part of the current JISC Digital Repositories programme, the Platform for Exchange of Documents and Expertise Showcasing Teaching project created a service to provide new opportunities for sharing materials and discussion related to teaching, and for showcasing teaching and research interests. Sue gave us a demo of the system, illustrating how it relates content and people. It is now starting to be used by staff at Loughborough; unfortunately, the future of the system is somewhat in doubt due to the implementation of a new VLE across the institution.

After lunch we moved on to issues surrounding student-generated content, with Caroline Breslin and Andrew Wodehouse from the DIDET project. Part of the JISC/NSF funded Digital Libraries in the Classroom programme, DIDET is a collaborative project between the University of Strathclyde, Stanford University and Olin College. Based in a design engineering course, DIDET actively encourages (global) online collaboration, using online tools to create, store, share and assess coursework. Caroline and Andrew gave an overview of the project and the tools they have created (including an online collaborative learning environment and a digital library). They then outlined some of the challenges they have had to face, particularly when putting resources into the formal repository, and in capturing some of the more tacit learning processes taking place in this type of learning situation.

Students are increasingly using sites like YouTube and Flickr when they are working – and this is actively encouraged by staff. However, a continuing challenge for staff and students alike is the issue of creativity versus legality. In a design course students are expected to research existing products, and with the international dimension to this project there is the added problem of the differences between copyright law in the UK and the US. As librarians are involved in course design and teaching, information literacy is an underlying theme of the curriculum, and there are QA procedures in place for any content that is going to be archived and made available in the formal repository. The project has a team of staff including lecturers, learning technologists and librarians; however, they are still grappling with workflow issues when it comes to adding content to the formal repository, mainly due to lack of time. On the plus side, the overall approach has been successful and gets positive feedback from students, staff and employers. The project also shows how newer collaborative content creation and sharing technologies can be integrated with more institutional ones, allowing students to use the technologies that suit their needs.

We then moved on to the Resource Browser project, presented by Michael Gardner (University of Essex). Part of the JISC eLearning programme’s current toolkits and demonstrators projects, Resource Browser is a tool which aims to improve searching by linking resources with information about the people who created them, and vice versa. Building on the work of their previous Delta project (which aimed to help practitioners find and share resources), Resource Browser combines a web service for storing FOAF (Friend of a Friend) profiles with the existing functionality of Delta. Michael then gave a demo of the system. If you are familiar with topic maps it has quite a similar interface, but it uses a technology called TouchGraph for viewing. Clicking on a person brings up an extended view of that person’s profile, the resources they have created and the people they are linked with. As this is only a six-month project it is very much at the prototype stage, but it does look like it could have potential. With the use of the educational ontologies created in Delta it could be very useful for sharing learning designs, as peer recommendation seems to be very important when searching for learning designs. Michael also outlined some ideas they have for automatic metadata creation, where an application scans the documents on a user’s PC and creates a concept map which can be uploaded to the Delta system . . . I have to say, the thought of what useful metadata might come back from such a scan of my documents does seem a little scary :-)

The final presentation of the day came from Julie Allinson (UKOLN, University of Bath), who presented SWORD (Simple Web-service Offering Repository Deposit). As Julie pointed out, her presentation nicely ended the day as it dealt with putting ‘stuff’ into a repository and not just getting it out. The project is looking to improve ways to populate repositories through a standards-based approach, and they are looking at Atom in particular. Perhaps the best summary of this talk comes from David Davies’ blog, where he describes how the project has restored his faith in educational technology – can’t get better than that really.
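For a flavour of the Atom connection: a SWORD deposit is essentially an AtomPub-style HTTP POST of a packaged resource to a repository collection URI. The sketch below builds such a request without sending it; the endpoint URL and zip bytes are invented, and the header names follow my reading of the SWORD profile, so treat this as illustrative rather than a reference implementation.

```python
import urllib.request

def sword_deposit_request(collection_uri, filename, package_bytes,
                          packaging="http://purl.org/net/sword-types/METSDSpaceSIP"):
    """Build (but do not send) a SWORD-style AtomPub deposit request:
    an HTTP POST of a packaged resource to a repository collection URI."""
    req = urllib.request.Request(collection_uri, data=package_bytes, method="POST")
    req.add_header("Content-Type", "application/zip")
    req.add_header("Content-Disposition", f"filename={filename}")
    req.add_header("X-Packaging", packaging)  # tells the server how to unpack
    return req

# Hypothetical repository endpoint and payload, for illustration only.
req = sword_deposit_request("http://repository.example.ac.uk/sword/deposit",
                            "my-resource.zip", b"...zip bytes...")
print(req.get_method(), req.full_url)
```

On success the server would respond with an Atom entry describing the deposited item, which is what makes the approach sit so naturally alongside the RSS/Atom work described earlier in the day.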

Overall a great day with lots of interesting presentations and hopefully some useful linking of people and projects – in fact, a bit of a f2f mash-up of ideas! Presentations and audio recordings are available from the JISC CETIS wiki.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2007/07/03/joint-mdr-and-ec-sig-meeting-29-june/feed/ 0
CodeBash 4 – testing interoperability http://blogs.cetis.org.uk/sheilamacneill/2007/06/19/codebash-4/ http://blogs.cetis.org.uk/sheilamacneill/2007/06/19/codebash-4/#comments Tue, 19 Jun 2007 08:57:11 +0000 http://blogs.cetis.org.uk/sheilamacneill/2007/06/19/codebash-4/ It’s not been so much springwatch time as codewatch time for JISC CETIS with our fourth codebash taking place on 7/8th June at the University of Bolton.

As in previous events the ‘bash’ focused mainly on content related activities concerning IMS Content Packaging and QTI. However there were a number of extended conversations surrounding various e-portfolio issues. The Portfolio SIG held a co-located meeting at the University on the second day of the codebash.

Thanks to our Dutch colleagues at SURF, we were able to provide remote access to the event using their Macromedia Breeze system. We had about 15 remote participants, including a large Scandinavian contingent organised through Tore Hoel from the Norwegian eStandards project. Tore also hosted a face-to-face meeting on day two of the bash.

Day one began with a series of presentations giving updates on IMS Content Packaging, QTI and SCORM. Although it may well seem that content packaging is ‘done and dusted’, there are still some issues that need to be resolved, particularly with the imminent release of v1.2 of the specification. Wilbert Kraan outlined the IMS project working group’s plans to develop two profiles for the new version of the spec (one quite limited, covering widely implemented features, and one more general), to a mixed response. Some people felt there was a danger that providing such profiles could limit creativity and use of the newer features of the specification, and create a de facto limited implementation. It was agreed that care would have to be taken over the language used to describe the use of any such profiles.

Steve Lay then gave an update on IMS QTI and a useful potted history of the spec’s development stages and the functionality of each release of the specification. The IMS working group is currently looking at profiling issues and hopes to have a final release of the latest version of the spec available by early 2008. Angelo Panar from ADL gave the final presentation, providing an overview of developments in SCORM and the proposed LETSI initiative to move the governance of SCORM out of ADL and into the wider user community. Angelo also outlined some of the areas in which he envisaged SCORM would develop, such as extending sequencing and consistent user interface issues.

Although smaller than previous ‘bashes’, the general feeling was that this had been a useful event. There’s nothing quite like putting a group of developers in a room together and letting them ‘talk technical’ :-) It’s probably fair to say that less bashing of packages took place than at previous events, but some useful testing, particularly in relation to QTI, did take place between remote and f2f participants. Maybe this was a sign of the success of previous events, in that many interoperability issues have been ironed out. It is also probably indicative of the current state of technology use in our community, where we are increasingly moving towards web services and SOA approaches. It is likely that the next event we run will focus more on those areas – so if you have any suggestions for such an event, please let us know.

Copies of the presentations and audio recordings are available from the codebash web page. You may also be interested in Pete Johnston’s (Eduserv) take on the event too.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2007/06/19/codebash-4/feed/ 0
Let’s see about LETSI http://blogs.cetis.org.uk/sheilamacneill/2007/02/09/lets-see-about-letsi/ http://blogs.cetis.org.uk/sheilamacneill/2007/02/09/lets-see-about-letsi/#comments Fri, 09 Feb 2007 11:00:16 +0000 http://blogs.cetis.org.uk/sheilamacneill/2007/02/09/lets-see-about-letsi/ ADL are proposing the formation of a new international body to “advance the interoperability of technical systems enabling learning, education and training (LET) through the use of reference models based on de jure standards”.

Provisionally called LETSI (Learning-Education-Training Systems Interoperability) it is proposed that one of the first tasks of this body would be to take over governance of the SCORM.

The proposed purpose of LETSI is:

• to enable organizations with a material interest in learning, education, training (LET)
• who agree to accept a set of organizing principles
• to participate in evolving broadly applicable LET Interoperability Reference Model(s)
• informed by shared priorities and requirements
• based initially on the Sharable Content Object Reference Model (SCORM)
• and to define and actualize related events, publications, technologies, and services
• through a process that is transparent, democratic and sustainable.

It is not proposed that this body replace any other international standards/specification development bodies; rather, it will use existing standards/specs to develop reference models.

ADL presented LETSI at the recent AICC meeting, and a copy of the prospectus, which contains full information on the proposed organisation, is available from the AICC blog. AICC are currently drafting a response to the proposal.

How, and indeed whether, this organisation will work is still to be fully realised. An inevitable question must be: is there really a need for such a body? Particularly as many don’t really know about the differences between IMS, IEEE LTSC, CEN/ISSS, ADL or ISO SC36, nor especially care to find out.

A start-up meeting for LETSI is being held in London in March as part of the ISO meetings. More details on the March meeting are available from the ISO SC36 website.

JISC CETIS will be attending the March meetings and we will keep you updated on developments.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2007/02/09/lets-see-about-letsi/feed/ 2
Is content packaging just metadata? http://blogs.cetis.org.uk/sheilamacneill/2007/02/05/is-content-packaging-just-metadata/ http://blogs.cetis.org.uk/sheilamacneill/2007/02/05/is-content-packaging-just-metadata/#comments Mon, 05 Feb 2007 16:45:56 +0000 http://blogs.cetis.org.uk/sheilamacneill/2007/02/05/is-content-packaging-just-metadata/ According to Andy Powell (Eduserv), yes, it is. And at a technical workshop on content packaging for complex objects organised by the Repositories Research Team last week, he put forward a case for the potential of the Dublin Core Abstract Model to be used for packaging complex objects. (A quick overview of his thinking is available from the Eduserv blog.) Alongside the DCAM, presentations were given on MPEG-21 DIDL, METS, IEEE RAMLET and IMS Content Packaging.

The objectives of the day were to reach a better understanding of the use of some content packaging standards and models to describe complex objects and to compare and evaluate the appropriateness of each in the context of digital repositories.

So what were the outcomes – were there any clear winners or losers? Is content packaging really just metadata? For me, I’d have to say. . . maybe. The elegance of the DC solution is perhaps, at this point in time, just a bit removed from some of the realities of certain packaging scenarios – particularly those relating to teaching and learning, when you start to think about the differences between storing a package and being able to run it. At a more fundamental level, and one that was brought up during the discussion, how should a repository deal with complex objects and their related standards/models – what should it ingest, expose and make available to users? Answers on a postcard please :-)

The IEEE RAMLET (Resource Aggregation Model for Learning, Education and Training) model is starting to address some of these issues by providing mappings in the form of an OWL ontology, which will allow a system to perform transforms between a number of specifications (METS, MPEG-21 and IETF Atom have been identified so far). But there’s no implementation yet, so how this will actually work remains to be seen.

Personally, I found it really interesting to get an overview of each of the areas. Both MPEG-21 and the DC approach seemed quite similar, in that each offers a great deal of flexibility in defining and describing relationships between items, while METS and IMS seemed a bit stronger at describing structure. I think at this stage it’s all still a bit horses for courses when deciding which standard to use/support, but I have no doubt that, whatever the solution, metadata will play quite a big part in it.

Copies of the presentations from the day and a summary report comparing the appropriateness of the various approaches for digital repositories will be available from the RRT wiki soon.

]]>
http://blogs.cetis.org.uk/sheilamacneill/2007/02/05/is-content-packaging-just-metadata/feed/ 0