“Towards Open Standards and Interoperability” panel session at Online Educa

At Online Educa 2006 I chaired a very interesting session entitled “Towards Open Standards and Interoperability” (details of the session are available here).

We were fortunate to have an excellent group of speakers: Rich Caccavale, Jeff Merriman (MIT), Roger Larsen (Fronter) and Fabrizio Cardinali (Giunti).

The session was built around three prepared questions, with replies from all speakers followed by comments and questions from the floor.

I always find it a shame when interesting panel discussions are not reported, so I took it upon myself to take notes. I kept my own contributions down to a minimum, because the conversation was flowing so well, and those points which I did make have disappeared, because I couldn’t type my notes and speak at the same time! I’ve resisted the temptation to add my thoughts retrospectively, as it would misrepresent the conversation.

Important disclaimer! Please be aware that this is my paraphrase of some of the main points in the discussion, and by no means a verbatim transcription. I would welcome comments or corrections from the speakers or participants, and more generally from anyone else who has an opinion on these issues. You can add a comment to this blog, or you can reach me at D -dot- E -dot- Griffiths -at- bolton.ac.uk

Question 1: As suppliers, what contribution do interoperability specifications make to your business? Have they in fact made (or will they make) the difference between large-scale take-up of systems and stagnation?

Rich Caccavale: It is not the applications we provide that lead to adoption; rather, the demand for those applications comes from adoption. The design of the software is based on demand from customers, and we want it to be like that. There isn’t always a standard. If a feature becomes widely adopted, and practice with it develops, then the demand for standards appears.

Jeff Merriman: I don’t represent a product, but I can speak from an Open Source perspective as I was on the board of Sakai. Rich’s comments resonate, and I agree that the focus has to be on functionality that the users require. Where we see standards falling short, those of us who want products don’t always do a good enough job of articulating what we need. Often people are good with functionality, but not so good at interoperability, which sneaks up on them. We have many systems at MIT, such as library, workflow and authentication. To create complex learning activities we need to integrate with them, and to do this we need to articulate the need for standard touch points. So far we haven’t seen as much interoperability as we would like.

Roger Larsen: Standards fuel the industry, and as a small vendor we have more to gain than larger players. IMS Enterprise made a big difference: it enabled us to go from 200-seat to 10,000-seat installations overnight. So definitely they are helping, and they are very good for everyone provided we keep it simple. On the other hand there can be standards madness. We should concentrate on simple standards, and try to get the specifications which are out there implemented. Only a handful of IMS’s 20 specifications have been implemented so far.

Fabrizio Cardinali: Our business is as a European publishing technology provider, and standards give us benefit as technology providers, less as publishers, and least of all as Europeans. For technology providers standardisation is a key point, and it is easy to see why if we imagine what the audio industry would be like if it were based on a turnkey approach. On the other hand you would think that publishers would be the natural beneficiaries, but publishers made a huge investment in first-generation e-learning, got burnt, and have backtracked. The content industry is very scared at the moment, like a rabbit on the highway. So publishing is staying still, but the big publishers are starting to move, and if they do get some momentum they can leverage it. Publishers want to produce and ship, and IMS Common Cartridge can help with this, extending content with services and making content a wider concept. People are also now doing grass-roots publishing, and publishers need to manage that.

As Europeans we have invested a lot in content creation through EU projects, but we are losing control of our heritage because of shortcomings in interoperability.

Keith Baker (Reading): On learning interoperability, we’ve heard a lot, and we’ve seen huge investment. We have interoperability of LMSs, but very little progress on interoperability of granular content between learning programs. Will we see results on that?

Jeff Merriman: Cal State is trying to tackle the question of providing content aggregation and disaggregation. Again the publishers have been tentative on this. Most of the publishers want to sell a big content object, and Cal State has put a stake in the ground on this. Our faculty want to pick and choose between open and closed content repositories, and bring them together to create custom courses. That’s a complex conversation from a business perspective.

Rich Caccavale: Maybe the business pain is not bad enough to push this. People use their own technology to share, using Flash or whatever. Even if it is not exactly what they want, teachers have not felt enough pain to go beyond that.

Fabrizio Cardinali: Publishers won’t do it unless they have to. Just-in-time streaming of information is very important, though, which we call self-packaging. Once you have broken up your assets, you can put a layer on top of that. We foresee a legacy of content in templates, and closed algorithms unique to the providers, which will support the self-packaging. We see a couple of large publishers going that way with standards.

Keith Baker: Several universities are looking at personalisation, even within single institutions, and personalisation requires exactly that.

Roger Larsen: We should not forget that the biggest content producers are students. Today’s standards and specs are too complex. In Fronter we have “meta-dipping”: depending on the context in which you store your content, it is dipped into a pool of content and tagged. This is combined with semantic search engines. You come to realise that you cannot tag everything (a JPG, for example). So you need a strong semantic search which is more focused than Google. You need to be able to map searches onto known and defined sources, and show a match based on the algorithm you are using. That needs to take over at some stage.

Question 2: “Enterprise systems interoperability has been more successful than Learning Object or Learning Activity interoperability.” Do you agree? Why do you think this should be, and what (if anything) should be done about it?

Fabrizio Cardinali: Enterprise systems interoperability is mission critical. If you don’t pass on the invoice it doesn’t get paid. So you get pressure, and the single platform has gone forever. If you want to do any aspect of eLearning it will be granularised in services, and if it is done in the right way, it can be transparent. You don’t question the power generation when you turn on the light.

Roger Larsen: Interoperability of the student record system, etc. is absolutely necessary, so it’s obvious we had to start there. That’s covered now and it works very well. Many are amazed when they see what is happening down there. The content providers haven’t jumped into this space, and learning activity interoperability is more complex. Has anyone dug into LD? It’s there and it’s beautiful, but is it too much? We shouldn’t over-specify everything. We should leave some things for human beings. I’m not surprised that these are areas that are underdeveloped.

Jeff Merriman: It’s not clear to me that we are even close to solving enterprise interoperability. When I look at the money spent on integrating new systems at MIT, I see that we spent 25-30 million dollars integrating a new payroll system. We think that’s mission critical, but we don’t spend that kind of money on educational software. Where we are ahead in our field, in this panel and this room, is that we want that integration to be very cheap. We want to spend our money on other things. We don’t have the budget in higher education to do both. There are small successes in data exchange, IMS Enterprise and LIP, focused on a small piece of enterprise interoperability. In the content space we’ve seen quite a bit of movement and progress. Content tends to be data oriented, but often we want to take things to run or play them. We need to have plug and play in our environment, and we’ve begun to show that we can do it. But it really gets back to the customers understanding and being able to articulate those interoperability requirements.

Rich Caccavale: Enterprise integration is a prerequisite. If you can’t log in to a system then you can’t share. Enterprise was the first spec we adopted, and it was good for us, but the users hated it. We deprecated the proprietary API, but many users wanted it back, because the new one was harder to use, had bigger files, and so on. We had a lot of issues with adopting the IMS spec. Standards are sometimes a double-edged sword: efficient and cost-effective for whom? The users aren’t prioritising them, even if the vendors are. We came out to the market and supported SCORM, and the users asked us how they could turn it off, because it confused people. SCORM is valuable for sharing content, but let’s not take the human being out of it. Some users don’t need standards.

Ari Lubay (Dutch tax office): Jeff, I heard you talk about a large sum of money. If I have to choose between a system with good technical integration and one with the best functionality, which would you prioritise?

Jeff Merriman: We have been forced to search for the system functionality, and then fix the integration. We lean towards getting the functionality there, and the cost is the cost of doing business in the integration world. What we would like is to have more standard touch points, to mitigate those costs. What you mention is something we struggled with in Sakai. It was going to be modular and elegant, but when push came to shove, because we were trying to build a user community, we made a decision to look at functionality first. The cost of that decision is now emerging as years of restructuring and rebuilding.

Fabrizio Cardinali: When it comes to content there is the OpenCourseWare (OCW) initiative. There is nothing else at that level: it’s open standards or not. If the content producer is not doing something for their heritage then it doesn’t matter. But if it is your heritage then you need to unbundle content from delivery. Content standards get adopted when content is part of your legacy and your assets. OCW is a very successful brand, but not fully correct in engineering terms, and this will be a problem moving from platform to platform. To use OCW fully you need to rebuild MIT, because you need the teacher behind it.

Jeff Merriman: OCW was developed as a publishing arm of MIT. There is a demand now to aggregate and disaggregate, and it has the same problems as other publishers. We need to start engaging with the industry.

Stephen Marshall (Victoria, New Zealand): There is a feeling of “Technology is great, so long as I don’t see it out there when I’m teaching”. There is a tendency among teachers to reject learning objects and standards, and a lot of resistance to the idea that the users want technology.

Rich Caccavale: The software is a tool, and that tool should be as invisible as possible. Faculty ask if they can turn off SCORM, and you are right for the majority of people who are teaching. Standards can sometimes support teachers, but they can also achieve the opposite. You wonder who the beneficiary of the standard is: the vendors or the teachers.

Stephen Marshall: One of the uses of a standard is to guide us in how to undertake an activity effectively.

Rich Caccavale: A standard was a flag that people rallied around. If people aren’t rallying, it’s not a standard, it’s a spec.

Roger Larsen: The fact that there are well adopted standards is one of the ways that you can hide technology. If you do a search in your search box, you get a number of hits from external repositories, and you don’t have to deal with it. That’s because the technical job is being done properly. We need widely adopted standards so that we can rely on them, because the second that a system interprets one differently the technology goes wrong, and it becomes visible.

I’d like to add that Fronter comes from a country with 4 million people, and we are going through the Scandinavian countries. We all work with the Management Information Systems (MIS) which are market leaders in each of those countries. None of them wanted to work with us, because they wanted to sell their own systems and specifications. We were the new kids on the block who wanted to get in, and standards broke down those monopolies in the countries that we entered. The student record systems are managed by an existing system, and we need to get them to open up. The MIS are losing this one.

Fabrizio Cardinali: If publishers and big media producers want to move onto any platform they will do it. They have the power. Technology should not be confused with standards. What we are trying to achieve is easy, direct systems dedicated to what you want to do. We should talk about these things here; for example, LD is getting into the area of pedagogical design.

Stephen Marshall: Isn’t one of the significant problems with standards the fact that the publishers want large chunks of DRM to travel around with the content? If we want to get the publishers on board they will want to check that.

Fabrizio Cardinali: We are part of a publishing group and we serve them. I’m agnostic about open and closed content. We need Britannica and Wikipedia, and they need to interoperate.

Question 3: The VLE market is becoming more concentrated, and in Higher Education is dominated by a single supplier, and a small number of Open Source alternatives. At the same time the VLE as a concept is being undermined by disaggregated systems. In these circumstances is there still a role for specifications which focus on exchanging materials between VLEs, such as SCORM and IMS LD?

Roger Larsen: The most open system in the world is blogs, but how can you get anything out of there? How can you guarantee that blind people can get access? How can you be sure you will have access for years to come? You need standards to underpin the Web 2.0 applications. We definitely need standards more and more in order to disaggregate systems. LAMS was a big thing last year, and it can be integrated. Even if there were only one VLE, there would still be a role for content interoperability. It’s like comparing a car manufacturer and a petrol supplier.

Fabrizio Cardinali: If you have a proprietary search schema and you start aggregating you need to sequence the results and display them. If you do it on a proprietary basis, then you are locked in.

Rich Caccavale: There are more eLearning systems than there were a few years ago. When people consider buying our software, they do multiple evaluations. We get 100-page tenders, and that speaks to the diversity in the market. Since 2005 things have really started changing, and we know that we are not the only ones out there. If there isn’t innovation, there isn’t a demand for software.

Jeff Merriman: Moving towards an environment with open systems is critical. Open is not the same as Open Source. More often than not Open Source products are closed systems. If we are looking at bringing any Open Source system into MIT, we can look at the source code, but the work to do the integration is often greater than building the thing from scratch. Secondly, we don’t necessarily want to take an Open Source community’s product, because we may not be able to give our adaptation back and get them to accept it. So we would have to do it again in two years’ time when the system changes. So it’s the same conclusion: for Open or Closed Source we need to have the points of contact. If I have an LD-driven pedagogy, maybe I don’t want to use the LAMS chat tool, and if I use iChat instead, that needs to integrate.

Fabrizio Cardinali: At Giunti 50% of the tender documents are about functionality or architectural standards. That’s what the market is asking for. Something is emerging in the corporate sector: they are disaggregating content provision from VLEs. SCORM was a very, very simple proposition from a very specific context. The academic sector said that was not enough, so it went to the extreme opposite. In between there is a gap, and maybe it can be filled by Common Cartridge.

Christian Essen: We are talking about interoperability in the exchange between systems, technological interoperability. We should address interoperability of all kinds of knowledge: between employees, organisations, units and so on. Is it necessary to focus on materials, or on the development of a framework for all kinds of interoperability, including semantic and didactic? Could that be a good way to achieve interoperability between systems?

Roger Larsen: It’s more or less there already. If you combine all the specs you could structure most of that. The IMS interoperability framework is a big and complex issue; that’s the next big one. When systems start to touch as well as…

Jeff Merriman: I hope the OSIDs don’t go the way of OpenDoc! We’ve had some success, and because we factor out the OSIDs we are starting to think at MIT about how we could align our organisational model with our systems. It is worth looking at TENCompetence in relation to this.

Thinking about PLEs and LD at the CETIS conference

At the recent JISC CETIS conference we had two very interesting sessions on PLEs, in which we gave a lot of attention to the role of IMS LD. This is also a major theme in the TENCompetence project, which is developing a “Personal Competence Development Environment”. The following text is based on my notes during the sessions, which I have edited (perhaps not enough) and pruned (even less).

Before I write about the PLE and LD aspects I’d like to comment on Ernest W. Adams’ entertaining and interesting plenary talk on the philosophical roots of games design, which (perhaps surprisingly) linked with the PLE discussion that followed.
He described three perspectives on culture:

English and French philosophy
Classical and Romantic
CP Snow’s two cultures

His basic point was that the culture of programmers is in the Classical and Scientific tradition, informed by the English positivist and rationalist philosophical tradition, and that this tradition is ill-equipped to meet the challenges of narrative game play. The narrative references of gaming are largely limited to the hero and the saga, mediated by Tolkien. So the challenge faced by the games designer is to write technical documents which enable the creation of narrative, and the games industry finds itself striving towards romantic ends using classical means.
I tried to map this onto educational technology, and it brought to mind something which Stafford Beer wrote: “we tend to live our lives by heuristics, and to try and control them by algorithms”. He defines a heuristic as specifying “a method of behaving, which will tend towards a goal which cannot be precisely specified because we know what it is, but not where it is”. It seems to me that we are in this situation in the domain of teaching and learning, which is in general neither exhaustively defined nor agreed upon. We know in general terms what we want to achieve, and we try out a variety of strategies in our interactions which seem to lead us in the right direction, often using very subtle and intuitive criteria in our decision making. This enables us to deal with situations which would otherwise be too complex to handle. For example, when deciding on the next activity for a class the teacher does not (and cannot) take into consideration the full complexity of the current cognitive and affective state of each learner, and the relative importance of learning outcomes to each of them. On the other hand the culture of the programmer (and, I think, of the educational technologist) leads to algorithmic solutions which require explicit statements of goals and procedures. Bill Olivier made a related point in his earlier plenary presentation, quoting Brown and Duguid’s statement that practice is what you do to make the process work.
It seems to me that we need to think a lot harder about how our algorithmic systems (like IMS LD) can best support the heuristic approaches which are ubiquitous in classroom practice, rather than precluding them, and how these fit with systems which enable interactions guided by heuristics (like social software and many aspects of PLEs).

Oleg Liber of CETIS introduced the PLE session by pointing out that we are now living in a new world where the technology available outside institutions is richer than what you find inside. Can we exploit that external technology and blend that with what is inside? That is a broader agenda than what we had before, and the PLE is a key aspect of this. Oleg talked about how the idea of a Personal Learning Environment (PLE) links to other themes which we have been discussing for a number of years: personalisation, accessibility and inclusion, and how IMS Learning Design (IMS LD), and the wider learning design approach has been enabling this by pushing for more pedagogic variety and appropriateness.

Amanda Oddie from the team at Liverpool Hope University described the JISC-funded courses that they have been running with learners using IMS LD. They have been using Reload to create the Units of Learning, and running them in SLeD, but they have found Reload rather hard to use, and that the performance of SLeD needed to be improved. Their current project, LD4P, is looking at IMS LD from the users’ perspective. They have fixed the performance problems of SLeD, and are working on improving the interfaces of Reload and SLeD. It was interesting to hear that the process of modelling with IMS LD was generally seen by teachers as providing an interesting perspective on their practice. They have found that making practice explicit can help identify where there are problems in the learning design, for example checking that the learning is really happening. Teaching is highly complex, learners are complex, and capturing this in technology is hard, so you need the complexity of the specification. It is the power of the specification which makes it attractive. Director is not easy to use either, and if practitioners see the power of IMS LD they may be willing to get to grips with it.
The problem is that although the names of the elements can be changed to be more friendly, when you want to edit Level B you find you get involved in editing XML, and this is too hard.
This contradicted the assumption many of us had held that the specification is intrinsically too complex for practitioners to handle (and offered an interesting angle on how algorithmic analysis can support heuristic practice).

We had an interesting conversation about the need which teachers sometimes have to change a UOL when it has already started. Amanda commented that this was a problem, but that a workaround was to divide a course up into a separate UOL for each week. Bill Olivier argued that this is not a problem which is inherent in the specification, but rather a function of the fact that the CopperCore player precompiles the UOL. If you had an interpreter engine there would be nothing to stop you changing it on the fly. You would get efficiency problems, but these could be mitigated by using a hybrid approach, with precompiled acts, for example.
It is also true that you can use IMS LD as an import and export format, leaving the system free to do what it wants with its own internal representation of the UOL in between these two events.
Another issue with IMS LD which was raised in discussion was the set-up of services. For example, we might have a special space in Second Life we want to use. But if you put the URL into a UOL which is going to be reused in different runs, then all the students will come together in the one space. If this is not what you want, and you want to separate the learners from different runs, then you have to generate the URL at runtime. This means putting a placeholder in the URL, and expanding it when the run is created. At the moment it is not clear what the best way of dealing with this is. This seems to be one of the key issues to be addressed in relation to IMS LD.
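To make the placeholder idea concrete, here is a minimal sketch of the kind of substitution a player could perform when it instantiates a run. The {run_id} placeholder syntax and the function name are my own illustration, not part of the IMS LD specification or of any particular player.

```python
# Minimal sketch of per-run service URL generation. The {run_id}
# placeholder convention and these names are hypothetical, invented
# for illustration; IMS LD does not define them.

def resolve_service_url(url_template: str, run_id: str) -> str:
    """Expand a placeholder in a UOL service URL when a run is created,
    so that learners in different runs end up in different spaces."""
    return url_template.replace("{run_id}", run_id)

# The same UOL, reused across two runs, yields two distinct spaces:
template = "http://example.org/secondlife/space/{run_id}"
print(resolve_service_url(template, "run-2006-autumn"))
print(resolve_service_url(template, "run-2007-spring"))
```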

Phil Beauvoir, lead developer of Reload, said that in building the application he was not thinking about making it beautiful or intuitive, but rather about exposing the elements. So, in terms of Ernest Adams’ presentation, what we have is a collection of classical trees and tabs. So maybe we need to get romantic! He outlined the development work which is planned for LD authoring tools in the TENCompetence project.

Raymond Elferink presented the OpenDocument.net repository which is being developed in the context of the OpenDock project. He explained that there are a lot of repositories out in the world, and that they are mostly big systems written in Java, which require a lot of expertise and machines to run. OpenDock wants to address low-tech requirements and provide high accessibility. This will stimulate sharing and reuse, and this is a key issue because the content providers of the future are the practitioners themselves. Integrated support for Creative Commons and interoperability specifications is a vital part of this. The repository can be installed on any LAMP server, so small institutions, groups and individuals can use the system on rented web space.
There was general agreement that the value of IMS LD is in sharing, rather than in using it as part of the quality process or accreditation (where there are specified learning design features which must be adhered to).
Mark Stiles added that one of the important things about LD is that it does enable richness, while other systems, like Blackboard, are moving in the other direction. But we should ask ourselves if it works, and if it is worth doing. We are putting a lot of effort into this, and we are doing it on faith.
Bill Olivier of JISC commented that people who are working on pedagogy from a face-to-face perspective are also wrestling with the comparative value of the pedagogy. So the argument is outside the technology.

Oleg Liber distinguished between the personal aspect of a PLE, which means that everyone should have tools to make a system which suits them, and personalisation, which is about making all content accessible to all people (and tends to be about individual learning). We went on to discuss what this means for accessibility in PLEs. Web standards can take us so far, but you will always hit limits for accessibility. When you think you have got it right, you inevitably get it wrong for some people, even if you do the best you possibly can. Someone will need an adaptation, and this will conflict with the needs of organisations. It is not always in the interests of the people who are writing the standards to pay attention to this. We need to bridge this, and we have to talk about disaggregating resources, but the conflict is there.
You can’t make an object which is accessible to everyone, because you don’t know what they will want to do with it and in what context. So we can’t simply provide accessible content; we need to enable people to manage this themselves, and the contribution of the PLE is in separating the information from the instrumental presentation that people are faced with. By stripping out the services we can enable people to develop their own solutions. On the other hand we should remember that the solution may not be technological: it may be someone sitting next to the user. So the content itself seems to be increasingly irrelevant; it is how you access it that is important, and this means that application design and functional requirements profiles become central.
Mark Johnson reminded us that accessibility is not only about UI design. Learners have different learning styles, and that’s accessibility too. There’s a need for guidance, so there’s a need for a teacher presence.
Bill Olivier made the link with IMS LD. Content needs to be stored in a well-known XML format that can be configured in the PLE. LD has persistent personal properties, so you could write conditions in your UOL which would make the PLE presentation layer adaptable, as the sketch below illustrates.
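As a thought experiment, here is a minimal paraphrase of that idea in Python. In a real UOL the conditions and properties would be expressed in IMS LD Level B XML; the property name and content classes here are my own invention for illustration.

```python
# Sketch: LD Level B keeps persistent personal properties, and IF/THEN
# conditions over them decide what the presentation layer shows.
# "prefers-audio" and the content class names are hypothetical.

def visible_classes(properties: dict) -> set:
    """Return the classes of content a PLE presentation layer should
    show, given a learner's persistent personal properties."""
    shown = {"core-content"}
    if properties.get("prefers-audio"):      # IF the property is set...
        shown.add("audio-version")           # ...THEN show this class
    else:
        shown.add("text-version")
    return shown

# Because the property persists across UOLs, every run can adapt to it:
learner = {"prefers-audio": True}
print(visible_classes(learner))  # {'core-content', 'audio-version'}
```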

Finally Mark Johnson of the University of Bolton discussed recent developments around Firefox. A browser is a low-variety tool, but recently this has been changing. Flock is a development of Firefox which includes some of the aspects of PLEs. It has RSS reading built in, and has links to other useful services in the Web 2.0 sphere. For blogging it will take the user to the editor and send text to their blog. To configure it you just set up the services: the news feeds, the blogging services, del.icio.us, Flickr. The tool serves as a focus to bring these services together, and it allows for a considerable amount of personalisation.
XUL (XML User Interface Language) is used in Firefox (Microsoft have their own equivalent, XAML).
One of the things that makes XUL interesting is its relationship with RDF. XUL is built around RDF data structures, which opens up new possibilities well beyond conventional data description languages. This makes it increasingly easy to create a personal learning tool set, going beyond the normal limits of the browser window (which is what institutions tell us we have to use).

If you’d like to discuss any of this or need more information, please contact me at
dai -dot- griffiths -dot- 1 -at- gmail -dot- com