Open : data : co-op

A very interesting event in Manchester on Monday (2014-10-20), called “Open : Data : Cooperation”, focused on the idea of “building a data cooperative”: the cooperative management of personal information.

Related ideas have been going round for a long time. I first came across a formulation of the idea of managing personal information in 1999, in the book “Net Worth”. Ten years ago I started talking about personal information brokerage with John Harrison, who has devoted years to this cause. And in 2008, Michel Bauwens was writing about “The business case for a User Data Commons”.

A simple background story emerges from following the money. People spend money, whether their own or other people’s, and influence others in their spending of money. Knowing what people are ready to spend money on is valuable, because businesses with something to sell can present their offerings at an opportune moment. Thus, information which might be relevant to anyone buying anything is valuable, and can be sold. Naturally, the more money is at stake, the higher the price of information relevant to that purchase. Some information about a person can be used in this way over and over again.

Given this, it should be possible for people themselves to profit from giving information about themselves. And in small ways, they already do: store cards give a little return for the information about your purchases. But once the information is gathered by someone else, it is open for sale to others. One worry is that, maybe in the future if not right away, that information might enable some “wrong” people to know what you are doing, when you don’t want them to know.

Can an individual manage all that information about themselves better, both to keep it out of the wrong hands, and to get a better price for it from those to whom it is entrusted? Maybe; but it looks like a daunting task. As individuals, we generally don’t bother. We give away information that looks trivial, perhaps, for very small benefits, and we lose control of it.

It’s a small step from these reflections to the idea of people grouping together, the better to control data about themselves. What they can’t practically do separately, there is a chance of doing collectively, with enough efficiencies of scale to make it worthwhile, financially as well as in terms of peace of mind. You could call such a grouping a “personal data cooperative” or a “personal information mutual”, or any of a range of similar names.

Compared with gathering and holding data in the public domain, managing personal information is much more challenging. There are the minefields of privacy law, such as the Data Protection Act in the UK.

In Manchester on Monday we had some interesting “lightning” talks (I gave one myself – here are the slides on Slideshare), people wrote sticky notes on relevant topics they were concerned about, and six areas were highlighted for discussion:

  • security
  • governance
  • participation & inclusivity
  • technical
  • business model
  • legislative

I joined the participation and the technical group discussions. Both fascinated me, in different ways.

The participation discussion led to thoughts about why people would join a cooperative to manage their personal data. They need specific motivation, which could come from the kind of close-knit networks that deal with particular interests. There are many examples of closely knit on-line groups around social or political campaigns, about specific medical issues, or other matters of shared personal concern. Groups of these kinds may well generate enough trust for people to share their personal information, but they are generally not large enough to have much commercial impact, so they might struggle to be sustainable as personal data co-ops. What if, somehow, a whole lot of these minority groups could get together in an umbrella organisation?

Curiously, this has much in common with my personal living situation in a cohousing project. Despite many people’s yearnings (if not cravings) for secure acceptance of their minority positions, to me it looks like our cohousing project is too large and diverse a group for any one “cause” to be a key part of the vision for everyone. What we realistically have is a kind of umbrella in which all these good and worthy causes may thrive. Low carbon footprints; local, organic food; veganism; renewable energy; they’re all here. All these interest groups live within a co-operative kind of structure, where the governance is as far as possible by consensus.

So, my current living situation has resonances with this “participation” – and my current work is highly relevant to the “technical” discussion. But the technical discussion proved to be hard!

If you take just one area of person-related information, and manage to create a business model using that information, the technicalities start to become conceivable.

For instance, Cetis (particularly my colleague Scott Wilson) has been involved in the HEAR (Higher Education Achievement Report) for quite some time. Various large companies are interested in using the HEAR for recruiting graduates. Sure, that’s not a cooperative scenario, but it does illustrate a genuine business case for using personal data gathered from education. Then one can think about how that information is structured; how it is represented in some transferable format; how the APIs for fetching such information should work. There is definite progress in this direction for HEAR information in the UK – I was closely involved in the less established but wider European initiative around representing the Diploma Supplement, and more can be found under the heading European Learner Mobility.

While the HEAR is progressing towards viability, the “ecosystem” around learner information more widely is not very mature, so there are still questions about how effective our current technical formats are. I’ve been centrally involved in two efforts towards standardization: Leap2A and InLOC. Both have included discussion about the conceptual models, and that discussion has never been fully resolved.

More mature areas are more likely to have stable technical solutions. Less mature areas may not have any generally agreed conceptual, structural models for the data; there may be no established business models for generating revenues or profits; and there may be no standards specifically designed for the convenient representation of that kind of data. Generic standards like RDF can cover any linked data, but they are not necessarily convenient or elegant, and may or may not lead to workable practical applications.

Data sources mentioned at this meeting included:

  • quantified self data: all about your physiological data, and possibly related information
  • energy (or other utility) usage data: coming from smart meters in the home
  • purchasing data: from store cards and online shops
  • communication data: perhaps from your mobile device
  • learner information: in conjunction with learning technology, as I introduced

I’m not clear how mature any of these particular areas are, but they all could play a part in a personal data co-op. And because of the diversity of this data, as well as its immaturity, there is little one can say in general about technical solutions.

What we could do is set out a strategy for working towards technical solutions. It might go something like this.

  1. Agree the scope of the data to be held.
  2. Work out a viable business model with that data.
  3. Devise models of the data that are, as far as possible, intuitively understandable to the various stakeholders.
  4. Consider feasible technical architectures within which this data would be used.
  5. Start considering APIs for services.
  6. Look at existing standards, including generic ones, to see whether any existing standard might suffice. If so, try using it, rather than inventing a new one.
  7. If there really isn’t anything else that works, get together a good, representative selection of stakeholders, with experience or skill in consensus standardization, and create your new standard.

It’s all a considerable challenge. We can’t ignore the technical issues, because ignoring them is likely to lead just to good ideas that don’t work in practice. On the other hand, solving the technical issues is far from the only challenge in personal data co-ops. Long experience with Cetis suggests that the technical issues are relatively easy, compared to the challenges of culture and habit.

Give up, then? No, to me the concept remains very attractive and worth working on. Collaboratively, of course!

Why, when and how should we use frameworks of skill and competence?

(25th in my logic of competence series.)

When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.

I’ll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.

What are these frameworks?

Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications — e.g. NVQs in England — which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide — to list the areas would take far too much space — so the reader is asked to take a look at the national database of current NOSs, which is hosted by the UKCES on their dedicated web site.

Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because its members’ competence can have life-and-death consequences, is the medical profession. The document Good Medical Practice, published by the General Medical Council, starts by addressing doctors:

Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.

and then goes on to detail those domains:

  • Knowledge, skills and performance
  • Safety and quality
  • Communication, partnership and teamwork
  • Maintaining trust

The GMC also publishes the related Tomorrow’s Doctors, in which it

sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term intended to cover knowledge, skills, behaviours, attitudes, etc. As that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I’ll use “ability” as a catch-all term in this context.

Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person’s abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees’ abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider’s view!

Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.

Some frameworks are intended for wider use than just employment. A good example is Mozilla’s Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.

Links to badges

You can imagine having badges for keeping track of people’s abilities, where the abilities are part of frameworks. To help people move between different roles (from education and training to work, and back again), having their abilities recognised and not having to retrain on abilities they have already mastered, those frameworks would have to be openly published, able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.

Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees’ abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.

The differing nature of different abilities

However, maybe we need to be more careful than simply taking every open framework and turning it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and maintaining and managing them would be horrendously complex. So it might make sense to select the most appropriate abilities for badging, as follows.

  • Some abilities are plentiful, and don’t need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
  • Some abilities are hard, or impossible, to develop: you have them or you don’t. In this case, using badges would risk being discriminatory. Badges for, say, how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic; and one can think of many other personal characteristics, potentially framed as abilities, which are less visible on the surface but could still lead to discrimination, as people can’t just change them.
  • Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
  • Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.

Practical considerations governing the creation and use of frameworks

It’s hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.

  • The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
  • The abilities have to be described in a way that makes sense to all collaborating parties.
  • It must be practical to include the framework into other tools.
  • The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
  • In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
  • Thus, the descriptions of the abilities, and the way in which they are put together, need all to be compatible.

Let’s look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.

For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.

Contrast with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of their relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.

We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.

The InLOC format

Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short. We developed CEN Workshop Agreement CWA 16655 in three parts, available from CEN in PDF format by ftp:

  1. Information Model for Learning Outcomes and Competences
  2. Guidelines including the integration of Learning Outcomes and Competences into existing specifications
  3. Application Profile of Europass Curriculum Vitae and Language Passport for Integrating Learning Outcomes and Competences

The same content and much extra background material is available on the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.

What can people do in the meanwhile?

I’ve proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass: enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worth the extra effort of formatting frameworks in the common way (hopefully InLOC) that will work with the tools.

There will be a point at which growth and development in this area will become self-sustaining. But we don’t have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.

1. Build your own frameworks
It’s a challenge if you haven’t been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn’t need a framework, perhaps you can build one for a larger organisation.
2. Use parts of frameworks that are there already, where suitable
It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
3. Publish your frameworks, and their constituent abilities, each with a URL
This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels; see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs. A minimal sketch of what this might look like follows this list.
4. Use the frameworks consistently throughout the organisation
To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I’m not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
5. Use InLOC if possible
It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
6. Consider integrating open badges
It makes sense to consider your badge strategy and your framework strategy together. You may also find this old post of mine helpful.
7. Watch for future development of tools, or develop some yourself!
If you see any, try to help them towards being really useful, by giving constructive feedback. I’d be happy to help any tool developers “get” InLOC.
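
To illustrate step 3 above, here is a minimal sketch of a published framework with one constituent ability, each with its own URL. It is written in the JSON-LD style I discuss elsewhere; the URLs are hypothetical, and the property names are borrowed from Dublin Core terms purely for illustration, not taken from any formal framework binding.

    {
      "@context": {
        "title": "http://purl.org/dc/terms/title",
        "hasPart": { "@id": "http://purl.org/dc/terms/hasPart", "@type": "@id" }
      },
      "@id": "http://example.org/frameworks/customer-service",
      "title": "Customer service framework (hypothetical)",
      "hasPart": [
        {
          "@id": "http://example.org/frameworks/customer-service/active-listening",
          "title": "Active listening"
        }
      ]
    }

The point is simply that both the framework and each constituent ability have their own dereferenceable URLs, so that badges, job descriptions and tools can all refer to exactly the same thing.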

I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See my other posts for related ideas. Comments or other feedback would be most welcome!

What is CEN TC 353 becoming?

CEN TC 353 was set up (about seven years ago) as the European Standardization Technical Committee (“TC”) responsible for “ICT for Learning, Education and Training” (LET). At the end of the meeting I will be describing below, we recognised that the title has led some people to think it is a committee for standardising e-learning technology, which is far from the truth. I would describe its business as being, effectively, the standardization of the representation of information about LET, so that it can be used in (any kind of) ICT systems. We want the ICT systems we use for LET to be interoperable, and we want to avoid the problems that come from vendors all defining their own ways of storing and handling information, thus making it hard to migrate to alternative systems. Perhaps the clearest evidence of where TC 353 works comes from the two recent European Standards to our name. EN 15981, “EuroLMAI”, is about information on learner results from any kind of learning, specifically including the Diploma Supplement and the UK HEAR, which document higher education achievements. EN 15982, “MLO” (Metadata for Learning Opportunities), is the European equivalent of the UK’s XCRI (“eXchanging Course-Related Information”), and is mainly about the information used to advertise courses, which can be of any kind. Neither of these is linked to the mode of learning, technology enhanced or not; and indeed we have no EN standards about e-learning as such. So that’s straight, then, I trust …

At this CEN TC 353 meeting on 2014-04-08 there were delegates from the National Bodies of: Finland; France (2); Germany; Greece; Norway; Sweden (2); UK (me); and the TC 353 secretary. That’s not very many for an active CEN TC. Many of the people there have been working with CETIS people, including me, for several years. You could see us as the dedicated, committed few.

The main substance of the day’s discussion was about two proposed new work items (“NWIs”), one from France, one from Sweden, and the issues coming out of that. I attended the meeting as the sole delegate (with the high-sounding designation, “head of delegation”) from BSI, with a steer from colleagues that neither proposal was ready for acceptance. That, at least, was agreed by the meeting. But something much more significant appeared to happen, which seemed to me like a subtle shift in the identity of TC 353. This is entirely appropriate, given that the CEN Workshop on Learning Technologies (WS-LT), which was the older, less formal body, is now accepted as defunct — this is because CEN are maintaining their hard line on process and IPR, which makes running an open CEN workshop effectively impossible.

No technical standardization committee that I know of is designed to manage pre-standardization activities. Floating new ideas, research, project work, comparing national initiatives, etc., need to be done before a proposal reaches a committee of this kind, because TC work, whether in CEN, or in our related ISO JTC1 SC36, tends to be revision of documents that are presented to the committee. It’s very difficult and time-consuming to construct a standard from a shaky foundation, simply by requesting formal input and votes from national member bodies. And when a small team is set up to work under the constraints of a bygone era of confidentiality, in some cases it has proved insurmountably difficult to reach a good consensus.

Tore Hoel, a long-time champion of the WS-LT, admitted that it is now effectively defunct. I sadly agree, while appreciating all the good work it has done. So TC 353 has to explore a new role in the absence of what was its own Workshop, which used to do the background work and to suggest the areas of work that needed attention. Tore has recently blogged what he thinks should be the essential characteristics of a future platform for European open standards work, and I very much agree with him. He uses the Open Stand principles as a key reference.

So what could this new role be? The TC members are well connected in our field, and while they do not themselves do much IT systems implementation, they know those people, and are generally in touch with their views. The TC members also have a good overview of how the matters of interest to TC 353 relate to neighbouring issues and stakeholders. We believe that the TC is, collectively, in quite a good position to judge when it is worth working towards a new European Standard, which is after all its raison d’être. We can’t see any other body that could perform this role as well, in this specific area.

As we were in France, the famous verse of Rouget de Lisle’s “Marseillaise” came to mind. “Aux armes, citoyens, Formez vos bataillons!” the TC could be saying. What I really like, on reflection, about this aspect of the French national anthem is that it isn’t urging citizens to join some pre-arranged (e.g. royal) battalions, but to create their own. Similarly, the TC could say, effectively, “now is the time to act — do it in your own ways, in your own organisations, whatever they are — but please bring the results together for us to formalise when they are ready.”

For me, this approach could change the whole scene. Instead of risking being an obstacle to progress, the CEN TC 353 could add legitimacy and coherence to the call for pre-standardization activity in chosen areas. It would be up to the individuals listening (us wearing different hats) to take up that challenge in whatever ways we believe are best. Let’s look at the two proposals from that perspective.

AFNOR, the French standards body, was suggesting working towards a European Standard (EN) with the title “Metadata for Learning Opportunities part 2 : Detailed Description of Training and Grading (face to face, distance or blended learning and MOOCs): Framework and Methodology”. The point is to extend MLO (EN 15982), including some of those characteristics of courses (learning opportunities), perhaps drawn from the Norwegian CDM or its French derivative, that didn’t make it into the initial version of MLO for advertising. There have from time to time in the UK been related conversations about the bits of the wider vision for XCRI that didn’t make it into XCRI-CAP (“Course Advertising Profile”). But they probably didn’t make it for some good reason — maybe there wasn’t agreement about what they should be, or there wasn’t any pressing need, or there weren’t enough implementations of them to form the basis for effective consensus.

Responding to this, I can imagine BSI and CETIS colleagues in the UK seriously insisting, first, that implementation should go hand in hand with specification. We need to be properly motivated by practical use cases, and we need to test ideas out in implementation before agreeing to standardize them. I could imagine other European colleagues insisting that the ideas should be accepted by all the relevant EC DGs before they have a chance of success in official circles. And so on — we can all do what we are best at, and bring those together. And perhaps also we need to collaborate between national bodies at this stage. It would make sense, and perhaps bring greater commitment from the national bodies and other agencies, if they were directly involved, rather than simply sending people to remote-feeling committees of standards organisations. In this case, it would be up to the French, whose Ministry of Education seems to be wanting something like this, to arrange to consult with others, to put together an implemented proposal that has a good chance of achieving European consensus.

We agreed that it was a good idea for the French proposal to use the “MOOC” label to gain interest and motivation, while the work would in no way be limited to MOOCs. And it’s important to get on board both some MOOC providers and, related though different, some of the agencies who aggregate information about MOOCs (etc.) and offer it through portals so that people can find appropriate ones. The additional new metadata would of course be designed to make that search more effective, in that more of the things that people ask about will be modelled explicitly.

So, let’s move on to the Swedish proposal. This was presented under the title “Linked and Open Data for Learning and Education”, based on their national project “Linked and Open Data in Schools” (LODIS). We agreed that it isn’t really on for a National Body simply to propose a national output for European agreement, without giving evidence on why it would be helpful. In the past, the Workshop would have been a fair place to bring this kind of raw idea, and we could have all pitched in with anything relevant. But under our new arrangements, we need the Swedes themselves to lead some cross-European collaboration to fill in the motivation, and do the necessary research and comparison.

There are additional questions also relevant to both proposals. How will they relate to the big international and American players? For example, are we going to get schema.org to take these ideas on, in the fullness of time? How so? Does it matter? (I’m inclined to think it does matter.)

I hope the essentials of the new approach are apparent in both cases. The principle is that TC 353 acts as a mediator and referee, saying “OK” to the idea that some area might be ripe for further work, and encouraging people to get on with it. I would, however, suggest that three vital conditions should apply, for this approach to be effective as well as generally acceptable.

  1. The principal stakeholders have to arrange the work themselves, with enough trans-national collaboration to be reasonably sure that the product will gain the European consensus needed in the context of CEN.
  2. The majority of the drafting and testing work is done clearly before a formal process is started in CEN. In our sector, it is vital that the essential ideas are free and open, so we want an openly licensed document to be presented to the TC as a starting point, as close as can be to the envisioned finishing point. CEN will still add value through the formal process and formal recognition, but the essential input will still be openly and freely licensed for others to work with in whatever way they see fit.
  3. The TC must assert the right to stop and revoke the CEN work item, if it turns out that it is not filling a genuine European need. There is room for improvement here over the past practice of the TC and the WS-LT. It is vital to our reputation and credibility, and to the ongoing quality of our output, that we are willing to reject work that is not of the right quality for CEN. Only in this way can CEN stakeholders have confidence in a process that allows self-organising groups to do all the spadework, prior to and separate from formal CEN process and oversight.

At the meeting we also heard that the ballot on the TC 353 marketing website was positive. (Disclosure: I am a member of the TC 353 “Communications Board” who advised on the content.) Hopefully, a consequence of this will be that we are able to use the TC 353 website both to flag areas for which TC 353 believes there is potential for new work, and to link to the pre-standardization work that is done in those areas that have been encouraged by the TC, wherever that work is done. We hope that this will all help significantly towards our aim of effectively open standardization work, even where the final resulting EN standards remain as documents with a price tag.

I see the main resolutions made at the meeting as enacting this new role. TC 353 is encouraging proposers of new work to go ahead and develop mature open documentation, and clear standardization proposals, in whatever European collaborations they see fit, and bring them to a future TC meeting. I’d say that promises a new chapter in the work of the TC, which we should welcome, and we should play our part in helping it to work effectively for the common good.

JSON-LD: a useful interoperability binding

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the JSON-LD official documentation.

Let’s start with XML, as it is less unfamiliar to most non-programmers, due to similarities with HTML. XML offers two kinds of structures: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name-value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text, without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name becoming the name of a contained element, and the attribute value the text contained within that new element.

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals representing things – the nouns, objects, entities, etc.; labelled arrows representing the property names (or “predicates”); and rectangles representing literal values.

Given the confusion above, it’s not surprising that when you want to represent linked data using XML, it can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of these things.

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

(See the JSON object diagram on that page.)

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.
  2. property names are represented by the strings.

In essence, there you have it!
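
For instance, here is a minimal sketch, reusing the dog example above (the names are invented purely for illustration):

    {
      "name": "Rex",
      "colour": "brown"
    }

The JSON object as a whole stands for the thing (the oval in a linked data graph); “colour” is the name of a property (a labelled arrow); “brown” is a literal value (a rectangle).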

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
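
Putting these pieces together, here is a small sketch of what a JSON-LD document can look like. All the URIs are hypothetical, purely for illustration; only “@context”, “@id” and the XML Schema date type are real fixtures.

    {
      "@context": {
        "title": "http://example.org/terms/title",
        "skill": { "@id": "http://example.org/terms/skill", "@type": "@id" },
        "issued": { "@id": "http://example.org/terms/issued",
                    "@type": "http://www.w3.org/2001/XMLSchema#date" }
      },
      "@id": "http://example.org/frameworks/web-making",
      "title": "Web making skills (hypothetical)",
      "issued": "2014-01-16",
      "skill": [
        "http://example.org/skills/html-coding",
        "http://example.org/skills/css-styling"
      ]
    }

The “@id” pair gives the described object its URI; the context maps each short property name to a URI; “issued” is typed as a date; and the two values of “skill” share one property name by sitting in a JSON array.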

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one, higher-level competence can be decomposed in terms of several, lower-level competences.

InLOC was planned from the outset to work as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties. Thus, it is clear that there is a mapping to a linked data style model.

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of just giving one text value to a property, the value of any human-language property is an object, within which the keys are the two-letter language codes, and the values are the property value in that language.
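
Sketched in the same illustrative style (the property URI is again hypothetical, not the actual InLOC binding), the context declares a language container, and the value object is keyed by language code:

    {
      "@context": {
        "title": { "@id": "http://example.org/terms/title",
                   "@container": "@language" }
      },
      "title": {
        "en": "Communication skills",
        "fr": "Compétences en communication"
      }
    }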

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. And JSON-LD is now (January 2014) a W3C Recommendation.

A new (for me) understanding of standardization

When engaging deeply in any standardization project, as I have with the InLOC project, one is likely to get new insights into what standardization is, or should be. I tried to encapsulate this in a tweet yesterday, saying “Standardization, properly, should be the process of formulation and formalisation of the terms of collective commitment”.

Then @crispinweston replied “Commitment to whom and why? In the market, fellow standardisers are competitors.” I continued, with slight frustration at the brevity of the tweet format, “standards are ideally agreed between mutually recognising group who negotiate their common interest in commitment”. But when Crispin went on “What role do you give to the people expected to make the collective commitment in drafting the terms of that commitment?” I knew it was time to revert from micro-blogging to macro-blogging, so to speak.

Crispin casts me in the position of definer of roles — I disclaim that. I am trying, rather, firstly to observe and generalise from my observations about what standardization is, when it is done successfully, whether or not people use or think of the term “standardization”, and secondly, to intuit a good and plausible way forward, perhaps to help grow a consensus about what standardization ought to be, within the standardization community itself.

One of the challenges of the InLOC project was that the project team started with more or less carte blanche. Where there is a lot of existing practice, standardization can (in theory at least) look at existing practice, and attempt to promote standardization on the best aspects of it, knowing that people do it already, and that they might welcome (for various reasons) a way to do it in just one way, rather than many. But in the case of InLOC, and any other “anticipatory” standard, people aren’t doing closely related things already. What they are doing is publishing many documents about the knowledge, skills, competence, or abilities (or “competencies”) that people need for particular roles, typically in jobs, but sometimes as learners outside of employment. However, existing practice says very little about how these should be structured, and interrelated, in general.

So, following this “anticipatory” path, you get to the place where you have the specification, but not the adoption. How do you then get the adoption? It can only be if you have been either lucky, in that you’ve formulated a need that people naturally come to see, or persuasive, in that you successfully persuade people that it is what they really (really) want.

The way of following, rather than anticipating, practice certainly does look the easier, less troubled, surer path. Following in that way, there will be a “community” of some sort. Crispin identifies “fellow standardisers” as “competitors” in the market. “Coopetition” is a now rather old neologism that comes to mind. So let me try to answer the spirit at least of Crispin’s question — not the letter, as I am seeing myself here as more of an ethnographer than a social engineer.

I envisage many possible kinds of community coming together to formulate the terms of their collective commitments, and there may be many roles within those communities. I can’t personally imagine standard roles. I can imagine a community led by authority, imposing a standard requirement, perhaps legally, for regulation. I can imagine a community where an innovator comes up with a new idea for agreeing some way of doing things, and that serves to focus a group of people keen to promote the emerging standard.

I can imagine situations where an informal “norm” is not explicitly formulated at all, and is “enforced” purely by social peer pressure. And I can imagine situations where the standard is formulated by a representative body of appointees or delegates.

The point is that I can see the common thread linking all kinds of these practices, across the spectrum of formality–informality. And my view is that perhaps we can learn from reflecting on the common points across the spectrum. Take an everyday example: the rules of the road. These are both formal and informal; and enforced both by traffic authorities (e.g. police) and by peer pressure (often mediated by lights and/or horn!)

When there is a large majority of a community in support of norms, social pressure will usually be adequate, in the majority of situations. Formal regulation may be unnecessary. Regulation is often needed where there is less of a complete natural consensus about the desirability of a norm.

Formalisation of a norm or standard is, to me, a mixed blessing. It happens — indeed it must happen at some stage if there is to be clear and fair legal regulation. But the formalisation of a standard takes away the natural flexibility of a community’s response both to changing circumstances in general, and to unexpected situations or exceptions.

Time for more comment? You would be welcome.

What is my work?

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years (as of this month) involvement with CETIS, might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and the evidence of these through their performance; and in their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long term interests in psychology, psychotherapy, counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, thus likely to be more effective. Thus, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was that I found myself involved in the area that is currently the main focus of my work, both for CETIS, and also on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it will be much easier for educators and employers to refer to the same definitions; and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work; or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities surely must support all processes relating to people acquiring and evidencing abilities, and making use of related evidence towards their jobs, their well-being, and maybe the well-being of others.

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really wanted.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Hermann Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would practically be able to support only very few at most. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; what the consequences of those decision may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this developmental dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.

I have the strong sense that all this should be related. But the field as a whole doesn’t seem to have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell them all kinds of services, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years’ worth of thanks…

Developing a new approach to competence representation

InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

The reason I’ve not written any blog posts for a few weeks is that so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have come close, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now that we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role is to do with specifications); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with the desired genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!).

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information to be carried, including a number, vital for defining levels (see the sketch after this list).
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including identifier, title, description, dates, etc.), from what could be called “compound properties”. Compound properties, which have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (as in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
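
To make the first feature more concrete, here is a minimal sketch of the idea. The element and attribute names are invented for this post (they are not the InLOC vocabulary itself); the point is only to show a relationship represented as an object in its own right, carrying the number that defines a level:

    <!-- Purely illustrative: invented names, not actual InLOC syntax -->
    <structure id="http://example.org/framework/languages">
      <title>Example language skills framework</title>
      <definition id="http://example.org/def/speaking">
        <title>Spoken interaction</title>
      </definition>
      <definition id="http://example.org/def/speaking-level-2">
        <title>Spoken interaction, level 2</title>
      </definition>
      <!-- The relationship is a thing in its own right, so it can
           carry extra information such as the level number -->
      <relationship number="2">
        <type>hasDefinedLevel</type>
        <subject ref="http://example.org/def/speaking"/>
        <object ref="http://example.org/def/speaking-level-2"/>
      </relationship>
    </structure>

Notice that the level number lives on the relationship itself, not on either of the definitions it connects; plain RDF triples cannot express this without resorting to constructs such as blank nodes.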

The outcome is something slightly reminiscent both of Atom itself, and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, but in doing so creates the need for harder-to-grasp constructs like blank nodes. That difficulty leads to people trying different ways of doing things, and possibly losing interoperability on the way. Both Atom and Topic Maps, in contrast, add a little more general-purpose structure, which makes quite a lot of intuitive sense in both cases, and they have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies simply try to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach, which we hope combines ease of understanding with the power to express just what you want to express in your “framework”. Perhaps, when we have made the improvements that you need, you will be persuaded to use it “for real”.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

Reviewing the future for Leap2

JISC commissioned a Leap2A review report (PDF), carried out early in 2012, that has now been published. It is available along with other relevant materials from the e-Portfolio interoperability JISC page. For anyone following the fortunes of Leap2A, it is highly worthwhile reading. Naturally, not all possible questions were answered (or asked), and I’d like to take up some of these, with implications for the future direction of Leap2 more generally.

The summary recommendations were as follows — these are very welcome!

  1. JISC should continue to engage with vendors in HE who have not yet implemented Leap2A.
  2. Engagement should focus on communities of practice that are using or are likely to use e-portfolios, and situations where e-portfolio data transfer is likely to have a strong business case.
  3. JISC should continue to support small-scale tightly focused developments that are likely to show immediate impact.
  4. JISC should consider the production of case studies from PebblePad and Mahara that demonstrate the business case in favour of Leap2A.
  5. JISC should consider the best way of encouraging system vendors to provide seamless import services.
  6. JISC should consider constructing a standardisation roadmap via an appropriate BSI or CEN route.

That tallies reasonably with the outcome of the meeting back in November last year, where we reckoned that Leap2A needs: more adoption; more evidence of utility; to be taken more into the professional world; good governance; more examples; and for the practitioner community to build around it models of lifelong development that will justify its existence.

Working backwards up the list of recommendations from the Leap2A review report, recommendation 6 is one for the long term. It could perhaps be read in the context of the newly formed CETIS position on the recent Government Open Standards Consultation. There we note:

Established public standards bodies (such as ISO, BSI and CEN), while doing valuable work, have some aspects that would benefit from modernisation to bring them more into line with organisations such as W3C and OASIS.

The point then elaborated is that the community really needs open standards that are freely available as well as royalty-free and unencumbered. The de jure standards bodies normally still charge for copies of their standards, as part of their business model, which we see as outdated. If we can circumvent that issue, then BSI and CEN would become more attractive options.

It is the previous recommendation, number 5 in the list above, that I will focus on more, though. Here is the fuller version of that recommendation (appearing as paragraph 81).

One of the challenges identified in this review is to increase the usability of data exchange with the Leap2A specification, by removing the current necessity for separate export and import. This report RECOMMENDS that JISC considers the best way of encouraging system vendors to provide seamless data exchange services between their products, perhaps based on converging practice in the use of interoperability and discovery technologies (for example future use of RDF). It is recognised that this type of data exchange may require co-ordinated agreement on interoperability approaches across HEIs, FECs and vendors, so that e-portfolio data can be made available through web services, stressing ease of access to the learner community. In an era of increasing quantities of open and linked data, this recommendation seems timely. The current initiatives around courses information — XCRI-CAP, Key Information Sets (KIS) and HEAR — may suggest some suitable technical approaches, even though a large scale and expensive initiative is not recommended in the current financially constrained circumstances.

As an ideal, that makes perfect sense from the point of view of an institution transferring a learner’s portfolio information to another institution. However, seamless transfer is inherently limited by the compatibility (or lack of it) between the information stored in each system. There is also a different scenario, one that has always been in people’s minds when working on Leap2A. It is that learners themselves may want to be able to download their own information, to keep for use, at an uncertain time in the future, in ways that are not necessarily predictable by the institutions that have been hosting their information. In any case, the predominant culture in the e-portfolio community is that all the information should be learner-ownable, if not actually learner-owned. This is reflected in the report’s paragraph 22, dealing with current usage of PebblePad.

The implication of the Leap2A functionality is that data transfer is a process of several steps under the learner’s control, so the learner has to be well-motivated to carry it out. In addition Leap2A is one of several different import/export possibilities, and it may be less well understood than other options. It should perhaps be stressed here that PebblePad supports extensive data transfer methods other than Leap2A, including zip archives, native PebblePad transfers of whole or partial data between accounts, and similarly full or partial export to HTML.

This is followed up in the report’s paragraph 36, part of the “Challenges and Issues” section.

There also appears to be a gap in promoting the usefulness of data transfer specifically to students. For example in the Mahara and PebblePad e-portfolios there is an option to export to a Leap2A zip file or to a website/HTML, without any explanation of what Leap2A is or why it might be valuable to export to that format. With a recognisable HTML format as the other option, it is reasonable to assume that students will pick the format that they understand. Similarly it was suggested that students are most likely to export into the default format, which in more than one case is not the Leap2A specification.

The obvious way to create a simpler interface for learners is to have just one format for export. What could that format be? It should be noted first that separate files that are attached to or included with a portfolio will always remain separate. The issue is the format of the core data, which in normal Leap2A exports is represented by a file named “leap2a.xml”.

  1. It could be plain HTML, but then the case for Leap2A would be lost, as there is no easy way for plain HTML to be imported into another portfolio system without a complex and time-consuming process of choosing where each single piece of information should be put in the new system.
  2. It could be Leap2A as it is, but the question then would be, would this satisfy users’ needs? Users’ own requirements for the use of exports are not spelled out in the report, and do not appear to have been systematically investigated anywhere, but it would be reasonable to expect that one use case would be users wanting to display the information so that it can be cut and pasted elsewhere. Leap2A supports the display of media files within text, and formatting of text, only through the inclusion of XHTML within the content of entries, in just the same way as Atom does. It is not unreasonable to conclude that limiting exports to plain Leap2A would not fully serve user export needs, and therefore it is, and will continue to be, unreasonable to expect portfolio systems to limit users to Leap2A export only.
  3. If there were a format that fully met the requirements both for ease of viewing and cut-and-paste, and for relatively easy and straightforward importing to another portfolio system (comparable to Leap2A currently), it might then be reasonable to expect portfolio systems to have this as their only export format. Then, users would not have to choose, would not be confused, and the files which they could view easily and fully through a browser on their own computer system would also be able to be imported to another portfolio system to save the same time and effort that is currently saved through the use of Leap2A.

So, on to the question, what could that format be? What follows explains just what the options are for this, and how it would work.

The idea for microformats apparently originated in 2000. The first sentence of the Wikipedia article summarises nicely:

A microformat (sometimes abbreviated µF) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata and other attributes in web pages and other contexts that support (X)HTML, such as RSS. This approach allows software to process information intended for end-users (such as contact information, geographic coordinates, calendar events, and the like) automatically.
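
For example, the hCard microformat marks up contact details purely through ordinary HTML class attributes. A minimal illustration (the person and organisation are of course made up):

    <!-- hCard: the class names "vcard", "fn", "role" and "org" carry the semantics -->
    <div class="vcard">
      <span class="fn">Jane Learner</span>,
      <span class="role">student</span> at
      <span class="org">Example University</span>
    </div>

A human reader just sees ordinary text; a parser that knows hCard can extract a structured contact record.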

In 2004, a more sophisticated approach to similar ends was proposed in RDFa. Wikipedia has “RDFa (or Resource Description Framework in Attributes) is a W3C Recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within Web documents.”
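
In the later, simplified RDFa Lite syntax, similar information can be expressed with dedicated attributes (vocab, typeof, property) rather than class names, here using the schema.org vocabulary discussed below:

    <!-- RDFa Lite: the attributes carry the semantics, invisible to readers -->
    <div vocab="http://schema.org/" typeof="Person">
      <span property="name">Jane Learner</span> studies at
      <span property="affiliation" typeof="CollegeOrUniversity">
        <span property="name">Example University</span>
      </span>
    </div>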

In 2009 the WHATWG were developing Microdata towards its current form. The Microformats community sees Microdata as having grown out of Microformats ideas. Wikipedia writes “Microdata is a WHATWG HTML specification used to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page and use it to provide a richer browsing experience for users.”

Wikipedia quotes the Schema.org originators (launched on 2 June 2011 by Bing, Google and Yahoo!) as stating that it was launched to “create and support a common set of schemas for structured data markup on web pages”. It provides a hierarchical vocabulary, in some cases drawing on Microformats work, that can be used within the RDFa as well as Microdata formats.
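
Microdata achieves much the same end with the attributes itemscope, itemtype and itemprop, again shown here with schema.org types:

    <!-- Microdata with schema.org: same example as the RDFa version above -->
    <div itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Jane Learner</span> studies at
      <span itemprop="affiliation" itemscope
            itemtype="http://schema.org/CollegeOrUniversity">
        <span itemprop="name">Example University</span>
      </span>
    </div>

In all of these approaches the visible page is unchanged; only the hidden attributes differ.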

Is it possible to represent Leap2A information in this kind of way? Initial exploratory work on Leap2R has suggested that it is indeed possible to identify a set of classes and properties that could be used more or less as they are with RDFa, or could be correlated with the schema.org hierarchy for use with Microdata. However, the details still need to be added and worked through.

In principle, using RDFa or Microdata, any portfolio information could be output as HTML, with the extra information currently represented by Leap2A added into HTML attributes, which are not directly displayed, and so do not interfere with human reading of the HTML. Thus, this kind of representation could fully serve all the purposes currently served by HTML export of Leap2A. It seems highly likely that practical ways of doing this can be devised that convey the complete structure currently given by Leap2A. The requirements currently satisfied by Leap2A would be satisfied by this new format, which might perhaps be called “Leap2H5”, for Leap2 information in HTML5, or alternatively “Leap2XR”, for Leap2 information in XHTML+RDFa (in place of Leap2A, meaning Leap2 information in Atom).
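
To give a feel for the idea, here is a sketch of how a single portfolio entry might look in such a format. No Leap2 vocabulary for this has yet been agreed, so the item type URL and property names below are invented purely for illustration:

    <!-- Hypothetical sketch only: the item type and property names are
         invented, standing in for whatever Leap2 vocabulary is agreed.
         A browser simply displays this as an ordinary web page. -->
    <article itemscope itemtype="http://example.org/leap2/Entry">
      <h2 itemprop="title">Volunteering at the city farm</h2>
      <time itemprop="date" datetime="2012-03-10">10 March 2012</time>
      <div itemprop="content">
        <p>Spent the weekend building raised beds with the volunteer team.</p>
      </div>
      <p>Evidence of:
        <a itemprop="relatesTo" href="http://example.org/def/teamworking">teamworking</a>
      </p>
    </article>

Opened in a browser, this reads as a normal web page, ready for viewing or cut-and-paste; imported into another portfolio system, the attributes would allow the entry and its links to be reconstructed with their structure intact.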

Thus, in principle it appears perfectly possible to have a single format that simultaneously does the job both of HTML and Leap2A, and so could serve as a plausible principal export and import format, removing that key obstacle identified in paragraph 36 of the Leap2A review report. The practical details may be worked out in due course.

There is another clear motivation in using schema.org metadata to mark up portfolio information. If a web page uses schema.org semantics, whether publicly displayed on a portfolio system or on a user’s own site, Google and others state that the major search engines will create rich snippets to appear under the search result, explaining the content of the page. This means, potentially, that portfolio presentations would be more easily recognised by, for instance, employers looking for potential employees. In time, it might also mean that the search process itself was made more accurate. If portfolio systems were to adopt export and import using schema.org in HTML, it could also be used for all display of portfolio information through their systems. This would open the way to effective export of small amounts of portfolio information simply by saving a web page displayed through normal e-portfolio system operation; and could also serve as an even more effective and straightforward method for transferring small amounts of portfolio information between systems.

Having recently floated this idea of agreeing Leap2 semantics in schema.org with European collaborators, it looks likely to gain substantial support. This opens up yet another very promising possibility: existing European portfolio-related formats could be harmonised through this new format, which is not biased towards any of the existing ones — as well as Leap2A, there is the Dutch NTA 2035 (derived from IMS ePortfolio), and also the Europass CV format. (There is more about this strand of unfunded work through MELOI.) All of these are currently expressed using XML, but none have yet grasped the potential of schema.org in HTML through Microdata or RDFa. To restate the main point: this means having the semantics of portfolio information embedded in machine-processable ways, without interfering with the human-readable HTML.

I don’t want to be over-optimistic, as currently money tends only to go towards initiatives with a clear business case, but I am hopeful that in the medium term, people will recognise that this is an exciting and powerful potential development. When any development of Leap2 gets funded, I’m suggesting that this is what to go for, and if anyone has spare resource to work on Leap2 in the meanwhile, this is what I recommend.