Simon Grant » interoperability (Cetis blog)

How do I go about doing InLOC? (2 December 2014)
(26th in my logic of competence series.)

It’s been three years now since the European expert team started work on InLOC, working out a good model for representing structures and frameworks of learning outcomes, skill and competence. As can be expected of forward-looking, provisional work, there has not yet been much take-up, but it’s all in place, and more timely now than ever.

Then yesterday I received a most welcome call from a training company working in one particular sector, interested in using the principles of InLOC to help their LMS map course and module information to qualification frameworks. “Yes!” I replied enthusiastically.

What might help people in that situation is a simple, basic approach that sets you on the right path for doing things the InLOC way. I realised that this isn’t so easy to find in the main documentation, so here I set out this basic approach, which will reliably get anyone started on mapping anything to the InLOC model, and cross-references the InLOC documentation.

One description of what to do is documented in the section How to follow InLOC, but, for all the reasons above, here I will try going back to basics and starting again, in the hope that describing the approach in a different way may be helpful.

LOC definitions

The most basic feature, occurring many times over in any published framework, is what InLOC calls a “LOC definition”. This is, simply, any concept, described by any form of words, that indicates an ability – whether it be knowledge, skill, competence or any other learning outcome – that can be attributed to an individual person, and in some way – any way – assessed. It is hard to define more clearly or succinctly than that, so to get a better understanding you may want to look at examples.

In the documentation, the best place to start is probably the section on InLOC explained through example. In that section, a framework (the European e-Competence Framework, e-CF) is thoroughly analysed. You can see in Figure 2 how, for just one page of the documentation, each LOC definition has been picked out separately.

LOC definitions include at least these overlapping classes of concept:

  • anything that is listed as a learning outcome, a skill, a competency, an ability;
  • any separate parts of any learning outcomes;
  • anything that expresses an assessment criterion;
  • any level of any outcome, skill, competence, etc. (at any granularity);
  • a generic definition of what is required by a level.

Pieces of text that relate to the same concept – e.g. title and description of the same thing – are treated together. Everything that can be assessed separately is treated as a separate LOC definition. The grammatical structure of the text is of little importance. Often, though, in amongst the documentation, you read text that is not to do with abilities. Just pass over this for the moment.

I have sometimes noticed that concepts which could have their own LOC definitions are implied, but not explicit, in the documentation. In yesterday’s discussion, one example was the levels of the unit as a whole: assessment criteria are often specified for different levels of particular abilities, but each level as a whole is only implied.

The first step, then, is to look for all the LOC definitions in your documentation, and any implied ones that are not explicitly documented. ANY piece of text that represents something that could potentially be assessed as an outcome of learning is most likely a LOC definition.
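
To illustrate (this is my own toy example, not taken from any real framework, and the property spellings here are just the informal ones used in this post, not anything normative), the outcome of this first step might be no more than a flat list like this:

```json
[
  {
    "id": "http://example.org/loc/touch-typing",
    "title": "Touch typing",
    "description": "Can touch type, using all fingers, without looking at the keyboard."
  },
  {
    "id": "http://example.org/loc/touch-typing-60wpm",
    "title": "Touch typing at 60 wpm",
    "description": "Can touch type in English at 60 wpm with fewer than 1 mistake per hundred words."
  },
  {
    "id": "http://example.org/loc/juggling-3-balls",
    "title": "Three-ball juggling",
    "description": "Can juggle with three juggling balls for a minute or longer."
  }
]
```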

Binary and rankable

If you’ve looked through the documentation, you’ve probably come across this distinction, and it is very helpful if you are going to structure something in the InLOC way. But when I was writing the documentation, I don’t think I had grasped quite how central it is. It is so central that I have more recently come to present it as a vital first concept to grasp. I recently put together a slide deck about this, now on Slideshare, under the title Distinguishing binary and rankable definitions is key to structuring competence frameworks.

I first publicly clarified this distinction in a blog post before InLOC even started, Representing level relationships, and mentioned it more recently in InLOC and OpenBadges: a reprise.

In essence: a binary learning outcome or competence (LOC) concept is one where it makes sense to ask, have you reached this level or standard? Are you as good as this? The answer gives a binary distinction between “yes”, for those people who have reached the level, and “not yet” for those who have not. The example I give in the recent slide deck is “can touch type in English at 60 wpm with fewer than 1 mistake per hundred words”. The answer is clearly yes or no. Or, “can juggle with three juggling balls for a minute or longer” (which I can’t yet).

On the other hand, a rankable concept is one where there is no clear binary criterion, but instead you can rank people in order of their ability in that concept. A rankable concept related to the previous binary one would simply be “touch typing” or “can touch type”. A good question for juggling would be “how well can you juggle?” You may want to analyse this more finely, and distinguish different independent dimensions of juggling ability, but more probably I guess you would be content to roughly rank people in order of a general juggling ability.

The second step is to look at all the LOC definitions you have isolated, and judge whether they are binary or (at least roughly) rankable.

Relating LOC definitions together

The third step is to relate all the LOC definitions you found to each other. Frameworks commonly have a hierarchical structure: an ability at a “high” level (of granularity) involves many abilities at “lower” levels. The simplest way of representing that is that the wider definition “has parts”, which are the narrower definitions, perhaps the products of “functional analysis” of the wider definition. InLOC allows you to relate definitions in this way, using the relationship “hasLOCpart”.

But InLOC also allows several other relationships between LOC definitions. These can be seen in the three tables on the relationships page in the documentation. To see how the relationships themselves are related, look at the third table, “ontology”. The tables together give you a clear and powerful vocabulary for describing relationships between LOC definitions. Naturally, it has been carefully thought through, and is a vital part of InLOC as a whole.

Very simple structures can be described using only the “hasLOCpart” relationship. However, when you have levels, you will need at least the “hasDefinedLevel” relationship as well. Broadly speaking, it will be a rankable LOC definition that “hasDefinedLevel” of a binary definition. Find these connections in particular!

For the other relationships, decide whether “hasLOCpart” is a good enough representation, or whether you need “hasNecessaryPart”, “hasOptionalPart” or “hasExample”. Each of these has a different meaning in the real world. Mostly, you will probably find that rankable definitions have rankable parts, and binary definitions have binary parts.
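
Here is a minimal sketch of how those connections might be noted down, using the toy definitions from above. The URIs are made up, and for readability I am writing the relationships as plain JSON keys on the definitions themselves; in InLOC proper the relationships live in the containing LOC structure (see the next section), where they can also carry a number:

```json
[
  {
    "id": "http://example.org/loc/office-skills",
    "title": "General office skills",
    "hasLOCpart": ["http://example.org/loc/touch-typing"]
  },
  {
    "id": "http://example.org/loc/touch-typing",
    "title": "Touch typing",
    "hasDefinedLevel": ["http://example.org/loc/touch-typing-60wpm"]
  }
]
```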

There is more related discussion in another of the blog posts from my “logic of competence” series, More and less specificity in competence definitions.

Putting together the LOC structure

In InLOC, a “LOC structure” is the collection of LOC definitions along with the relationships between them. Relationships between LOC definitions are only defined in LOC structures. This is to allow LOC definitions to appear in different structures, potentially with different relationships. You may think you know what comprises, for example, communication skills, but other people may have different opinions, and classify things differently.

A LOC structure often corresponds to a complete documented scheme of learning outcomes, and often has a name which is clearly not itself a LOC definition, as described previously. You can’t assess how good someone is at “the European e-Competence Framework” (the e-CF), unless you mean knowledge of that framework, but you can assess people against its component parts, the LOC definitions: how good they are (for rankable ones), or whether they reach the defined levels (for binary ones).

And the e-CF, analysed in detail in the InLOC documentation, is a good example where you can trace the structure down in two ways: either by topic first, and then by levels; or by level first, and then by the levelled (binary) topic definitions that are part of those levels.

Your aim is to document all the relationships between LOC definitions that are relevant to your application, and wrap those up with other related information in a LOC structure.
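
A rough sketch of what that wrapping might look like, again with illustrative rather than normative JSON keys: one structure object with its own identifier and title, the list of definitions it contains, and the relationships between them (each of which, in InLOC proper, can carry a number, which is how levels are ordered):

```json
{
  "id": "http://example.org/locstructure/office-skills-framework",
  "title": "Example office skills framework",
  "contains": [
    "http://example.org/loc/office-skills",
    "http://example.org/loc/touch-typing",
    "http://example.org/loc/touch-typing-60wpm"
  ],
  "relationships": [
    {
      "subject": "http://example.org/loc/office-skills",
      "type": "hasLOCpart",
      "object": "http://example.org/loc/touch-typing"
    },
    {
      "subject": "http://example.org/loc/touch-typing",
      "type": "hasDefinedLevel",
      "object": "http://example.org/loc/touch-typing-60wpm",
      "number": 1
    }
  ]
}
```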

What you will have gained

The task of creating an InLOC structure is more than simply creating a file that can be transmitted between web applications, and related to, or referred to by, the other structures that you are dealing with. It is also an exercise that can reveal more about the structure of the framework than was explicitly written into it. Often one finds oneself making explicit the relationships that are documented only implicitly, in terms of page and table layout. Often one fills in LOC definitions that have been left out. Whichever way you do it, you will be left with firmer, more principled structures on which to build your web applications.

We expect that sooner or later InLOC will be adopted as at least the basis of a model underlying interoperable and portable representations of frameworks of learning outcomes, skills, competences, abilities, and related knowledge structures. Much of the work has been done, but it may need revising in the light of future developments.

Why, when and how should we use frameworks of skill and competence? (19 May 2014)
(25th in my logic of competence series.)

When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.

I’ll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.

What are these frameworks?

Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications — e.g. NVQs in England — which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide — to list the areas would take far too much space — so the reader is asked to take a look at the national database of current NOSs, which is hosted by the UKCES on their dedicated web site.

Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because of the consequences of their competence on life and death, is the medical profession. A document like Good Medical Practice, published by the General Medical Council, starts by addressing doctors:

Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.

and then goes on to detail those domains:

  • Knowledge, skills and performance
  • Safety and quality
  • Communication, partnership and teamwork
  • Maintaining trust

The GMC also publishes the related Tomorrow’s Doctors, in which it

sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term that is intended to cover knowledge, skills, behaviours, attitudes, etc.; but as that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I’ll use “ability” as a catch-all term in this context.

Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person’s abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees’ abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider’s view!

Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.

Some frameworks are intended for wider use than just employment. A good example is Mozilla’s Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.

Links to badges

You can imagine having badges for keeping track of people’s abilities, where the abilities are part of frameworks. For those badges to help people move between different roles – from education and training to work, and back again – having their abilities recognised, and not having to retrain on abilities that have already been mastered, the frameworks would have to be openly published, and able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.

Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees’ abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.

The differing nature of different abilities

However, maybe we need to be more careful than simply to take every open framework and turn it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and we can imagine the horrendous complexity of maintaining and managing them. So it might make sense to select the most appropriate abilities for badging, as follows.

  • Some abilities are plentiful, and don’t need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
  • Some abilities are hard, or impossible, to develop: you have them or you don’t. In this case, using badges would risk being discriminatory. Badges for e.g. how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic, while one can think of many other personal characteristics, potentially framed as abilities, which might be less visible on the surface, but potentially lead to discrimination, as people can’t just change them.
  • Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
  • Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.

Practical considerations governing the creation and use of frameworks

It’s hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.

  • The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
  • The abilities have to be described in a way that makes sense to all collaborating parties.
  • It must be practical to incorporate the framework into other tools.
  • The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
  • In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
  • Thus, the descriptions of the abilities, and the way in which they are put together, need all to be compatible.

Let’s look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.

For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.

Contrast this with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of their relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.

We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.

The InLOC format

Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short. We developed CEN Workshop Agreement CWA 16655 in three parts, available from CEN in PDF format by ftp:

  1. Information Model for Learning Outcomes and Competences
  2. Guidelines including the integration of Learning Outcomes and Competences into existing specifications
  3. Application Profile of Europass Curriculum Vitae and Language Passport for Integrating Learning Outcomes and Competences

The same content and much extra background material is available on the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.

What can people do in the meanwhile?

I’ve proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass where there are enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worth the extra effort of formatting frameworks in the common way (hopefully InLOC) that will work with the tools.

There will be a point at which growth and development in this area will become self-sustaining. But we don’t have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.

1. Build your own frameworks
It’s a challenge if you haven’t been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn’t need a framework, perhaps you can build one for a larger organisation.
2. Use parts of frameworks that are there already, where suitable
It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
3. Publish your frameworks, and their constituent abilities, each with a URL
This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels, see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs.
4. Use the frameworks consistently throughout the organisation
To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I’m not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
5. Use InLOC if possible
It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
6. Consider integrating open badges
It makes sense to consider your badge strategy and your framework strategy together. You may also find this old post of mine helpful.
7. Watch for future development of tools, or develop some yourself!
If you see any, try to help them towards being really useful, by giving constructive feedback. I’d be happy to help any tool developers “get” InLOC.

I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See my other posts for related ideas. Comments or other feedback would be most welcome!

JSON-LD: a useful interoperability binding (13 December 2013)

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the JSON-LD official documentation.

Let’s start with XML, as it is more familiar to most non-programmers, thanks to its similarities with HTML. XML offers two kinds of structures: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name-value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text, without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name as a contained element name, and the attribute value as text contained within the new element.

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals that represent things – the nouns, objects, entities, etc.; labelled arrows that represent the property names (or “predicates”); and rectangles that represent literal values.

Given the confusion above, it’s not surprising that when you want to represent linked data using XML, it can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of these things.

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

[Image: the JSON “object” diagram from the JSON web site]

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.
  2. property names are represented by the strings.

In essence, there you have it!

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
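
Here is a tiny, self-contained illustration of all of this at once: the context mapping short names to URIs (and typing one property as a link), the “@id” giving the object its URI, and an array holding two values of one property. The example.org terms are invented for illustration; only the Dublin Core title URI is real.

```json
{
  "@context": {
    "title": "http://purl.org/dc/terms/title",
    "colour": "http://example.org/terms/colour",
    "knows": { "@id": "http://example.org/terms/knows", "@type": "@id" }
  },
  "@id": "http://example.org/dog/rex",
  "title": "Rex",
  "colour": "brown",
  "knows": [
    "http://example.org/person/alice",
    "http://example.org/person/bob"
  ]
}
```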

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one, higher-level competence can be decomposed in terms of several, lower-level competences.

InLOC was planned from the outset to work as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties. Thus it is clear that there is a mapping to a linked-data-style model.

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of just giving one text value to a property, the value of any human-language property is an object, within which the keys are the two-letter language codes, and the values are the property value in that language.
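
For example (a sketch only, not the InLOC binding itself), a property can be declared as a language container in the context, and its value then becomes an object keyed by language code:

```json
{
  "@context": {
    "title": { "@id": "http://purl.org/dc/terms/title", "@container": "@language" }
  },
  "@id": "http://example.org/loc/touch-typing",
  "title": {
    "en": "Touch typing",
    "fr": "Dactylographie",
    "de": "Maschinenschreiben"
  }
}
```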

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. And JSON-LD is now (January 2014) a W3C Recommendation.

Open Badges, Tin Can, LRMI can use InLOC as one cornerstone (31 July 2013)
(22nd in my logic of competence series.)

There has been much discussion recently about Mozilla Open Badges, xAPI (Experience API, alias “Tin Can API”) and LRMI, as new and interesting specifications to help bring standardization into the world of technology and resources involved with people and their learning. They have all reached their “version 1” this year, along with InLOC.

InLOC can quietly serve as a cornerstone of all three, providing a specification for one of the important things they may all want to refer to. InLOC allows documentation of the frameworks – of learning outcomes, competencies, abilities, whatever you call them – that describe what people need to know and be able to do.

Mozilla has been given, and devoted, plenty of resource to their OpenBadges effort, and as a result it is widely known about, though not so well known is the rapid and impressive development of the actual specification. The key part of the spec is how OpenBadges represents the “assertions” that someone has achieved something. The thing that people achieve (rather than its achievement) could well be represented in an InLOC framework.

Tin Can / Experience API (I’ll use the customary abbreviation “xAPI”) has also been talked about widely, as a successor to SCORM. The xAPI “makes it possible to collect the data about the wide range of experiences a person has (online and offline)”. This clearly includes “experiences” such as completing a task or attaining a learning outcome. But xAPI does not deal with the relationships between these. If one greater learning outcome was composed of several lesser ones, it wouldn’t be natural to represent that fact in xAPI itself. That is where InLOC naturally comes in.

LRMI (“Learning Resource Metadata Initiative”) is, as one would expect, designed to help represent metadata about learning resources, in a way that is integrated with schema.org. What if many of those learning resources are designed to help a learner achieve an intended learning outcome? LRMI can naturally refer to such a learning outcome, but is not designed to represent the structures themselves. Again, InLOC can do that.

What would be chaotic would be if these three specifications, each one potentially very useful in its own way, all specified their own, possibly incompatible ways of representing the structures or frameworks that are often created to bring common ground and order to this whole area of life.

Please don’t let that happen! Instead, I believe we should be using InLOC for what it is good at, leaving each other spec to handle its own area, and no one shamefully “reinventing the wheel”.

Draft proposals

These proposals are only initial proposals at present, looking forward to discussion with other people involved with or interested in the other three specifications. Please don’t hesitate to suggest better ways if you can see them.

OpenBadges

The Assertions page gives the necessary detail of how the OpenBadges spec works.

  • The BadgeClass criteria property means the “URL of the criteria for earning the achievement.” If there is an InLOC LOCdefinition or LOCstructure that represents these criteria, as there could well be, then the natural mapping would be for the criteria property simply to hold the URI, either of the (single) LOCdefinition, or of the LOCstructure that comprises all of the definitions together.
  • The BadgeClass alignment property gives a list of “objects describing which educational standards this badge aligns to, if any.” In cases where there is no LOCdefinition or LOCstructure representing the whole of the badge criteria, it seems natural to put a set of LOCdefinition URIs into the (multiple) objects of this property — which are AlignmentObjects.
  • Each AlignmentObject has the following properties, which map directly onto InLOC.
    • name: this could be the title of a LOCdefinition
    • url: this could be the id of the same LOCdefinition
    • description: this could be the description of the same LOCdefinition

One could also potentially take both approaches at the same time.
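
As a sketch (abbreviated to the relevant properties, with made-up URIs; a real BadgeClass also needs description, image, issuer and so on), a badge whose criteria point at a whole LOC structure, and whose alignment list points at one of its LOC definitions, might look like this:

```json
{
  "name": "Office Skills Badge",
  "criteria": "http://example.org/locstructure/office-skills-framework",
  "alignment": [
    {
      "name": "Touch typing at 60 wpm",
      "url": "http://example.org/loc/touch-typing-60wpm",
      "description": "Can touch type in English at 60 wpm with fewer than 1 mistake per hundred words."
    }
  ]
}
```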

I will record more detail, and change it as it evolves, on the InLOC wiki.

xAPI

The developers call this the Tin Can API, but their sponsors, ADL, call it the Experience API or xAPI.

The specification (v1.0.1, 2013-10-01) can be read in this PDF document.

Tin Can is based around the statement. This is defined as “a simple construct consisting of <actor (learner)> <verb> <object>, with <result>, in <context> to track an aspect of a learning experience.” There are a number of ways in which a statement could relate to a learning outcome or competence. How might these correspond to InLOC?

  1. If the statement “verb” is something like completed, or mastered, or passed, the “object” could well be something like a learning outcome, or an assessment directly related to a learning outcome. The object has two properties on top of the expected objectType:
    • id: this can be the same as a LOC id in InLOC
    • definition: this in turn has recommended properties of:
      1. name: this is proposed as the LOC title
      2. description: this is proposed as the LOC description
      3. type: this is proposed as the URI for LOCdefinition or LOCstructure
  2. The statement could be that some experiences were had (e.g. an apprenticeship), and the result was the learning outcome or competence. It might therefore be useful to give the URI of an InLOC-formatted learning outcome as part of an xAPI result. Unfortunately, none of the specified properties of the Result object have a URI type, so the URI of a LOC definition would have to go in the extensions property of the result.
  3. Often in personal or professional development planning, it is useful to record what is planned. An example of how to represent this, with the object as a sub-statement, is given in the spec section 4.1.4.3, page numbered 20. The sub-statement can be something similar to the first option above.
  4. A learning outcome may form part of the context of an activity in diverse ways. If it is not one of the above, it may be possible to use the context property of a statement, either as a statement reference in the statement property of the context, or as part of the context‘s extensions.

In essence, the clearest and most straightforward way of linking to an InLOC LOCstructure or LOCdefinition is as a statement object, rather than in its result or context. Allowing all the other routes as well could be seen as giving too many options, which may lead away from useful interoperability.
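
A sketch of that first option, a statement whose object is identified by an InLOC LOC definition URI, might look like the following. The URIs are made up; the verb is assumed to be one from the ADL verb list, and the value of type would in practice be whatever URI InLOC settles on for LOCdefinition:

```json
{
  "actor": { "objectType": "Agent", "mbox": "mailto:learner@example.org" },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/mastered",
    "display": { "en-US": "mastered" }
  },
  "object": {
    "objectType": "Activity",
    "id": "http://example.org/loc/touch-typing-60wpm",
    "definition": {
      "name": { "en": "Touch typing at 60 wpm" },
      "description": { "en": "Can touch type in English at 60 wpm with fewer than 1 mistake per hundred words." },
      "type": "http://example.org/inloc/LOCdefinition"
    }
  }
}
```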

I will record more detail, and change it as it evolves, on the InLOC wiki.

LRMI

The documentation for the Learning Resource Metadata Initiative is at http://www.lrmi.net/. The specification, and its correspondence with InLOC, is very simple. All the properties are naturally understood as properties of a learning resource. The property relevant to InLOC is educationalAlignment, whose object is an AlignmentObject.

Here, the LRMI AlignmentObject properties are mapped to LOCdefinition properties.

  • targetURL: LOCdefinition id
  • targetName: LOCdefinition title
  • targetDescription: LOCdefinition description
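
Expressed as schema.org-style JSON-LD (LRMI is more usually written as microdata or RDFa in HTML, but the properties are the same; the URIs are made up, and note that schema.org spells the first property targetUrl), that mapping might look like this:

```json
{
  "@context": "http://schema.org/",
  "@type": "CreativeWork",
  "name": "Touch typing practice drills",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "teaches",
    "targetUrl": "http://example.org/loc/touch-typing-60wpm",
    "targetName": "Touch typing at 60 wpm",
    "targetDescription": "Can touch type in English at 60 wpm with fewer than 1 mistake per hundred words."
  }
}
```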

I will record more detail, and change it as it evolves, on the InLOC wiki.

What this all means

xAPI and LRMI

The implications for xAPI and LRMI are just that they could suggest InLOC as a possible format for the publication of frameworks that they may want to refer to. Neither spec has pretensions to cover this area of frameworks, and the existence of InLOC should help to prevent people inventing diverse solutions, when we really want one standard approach to help interoperability.

A question remains about what a suitable binding of InLOC would be for both specs. In many ways it should not matter, as it will be the URIs and some values that will be used for reference from xAPI and LRMI, not any of the InLOC syntax. However, it might be useful to remember that xAPI’s native language is JSON, and LRMI’s is HTML, with added schema.org markup using microdata or RDFa. Neither of these bindings has been finalised for InLOC, so an opportunity exists to ensure that suitable bindings are agreed, while still conforming to the InLOC information model in one or other form.

OpenBadges

The case of Mozilla Open Badges is perhaps the most interesting. Clearly, there is a potential interest for badges to link to representations of learning outcomes or competences as defined by relevant authorities. It is so much more powerful when these representations reside in a common space that can be referred to by anyone (including e.g. xAPI and LRMI users, personal development, portfolio, and recruitment systems). It is easy to see how badges could usefully become “metadata-infused” tokens of the achievement of something that is already defined elsewhere. Redefining those things would simply confuse people.

InLOC solves several problems that OpenBadges should not have to worry about. One is representing equivalence (or not) between different competencies. That is provided for straightforwardly within InLOC, and should be done by the authorities defining the competencies, whether or not they are the same people as those who define and issue the badges.

Second, InLOC gives a clear, comprehensive and predefined vocabulary for how different competencies relate to each other. Mozilla’s Web Literacy Standard defines a tree structure of “literacies”, “competencies” and “skills”. Other frameworks and standards use other terms and concepts. InLOC is generic enough to represent all the relationships in all of these structures. As with equivalencies, the badge issuer should not have to define, for example, what roles require what skills and what knowledge. That should be up to occupational domain experts.

But OpenBadges do require some way to represent the fact that one, greater, badge can stand for a number of lesser badges. This is necessary to avoid being drowned in a flood of badges, each one so small that it is unrecognisable or insignificant.

While so many frameworks have not been expressed in a machine-processable format like InLOC, there will remain a requirement for an internal mechanism within OpenBadges to specify precisely which set of lesser badges is represented by a single greater badge. But when the InLOC structures are in place, and all the OpenBadges in question refer to InLOC URIs for their criteria, we can look forward to automatic consistency checking of super-badges. To check a greater badge against a set of lesser component badges, check that the criteria structure or definition for the greater badge has parts (as defined by InLOC relationships) which are each the criteria of one of the set of lesser badges.

As with xAPI, JSON is the native language of OpenBadges, so one task that remains to be completed is to ensure that there is a JSON binding of InLOC that satisfies both the OpenBadges and the Tin Can communities.

That should be it! Is it?

What is my work? (29 September 2012)

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years (as of this month) of involvement with CETIS, might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and evidence of these through their performance; their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long term interests in psychology, psychotherapy, counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, thus likely to be more effective. Thus, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was, that I found myself involved in the area that is currently the main focus of my work, both for CETIS, and also on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it will be much easier for educators and employers to refer to the same definitions; and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work; or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities surely must support all processes relating to people acquiring and evidencing abilities, and making use of related evidence towards their jobs, their well-being, and maybe the well-being of others.

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really wanted.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Herman Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would practically be able to support only very few at most. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; what the consequences of those decisions may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this, developmental, dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.

I have the strong sense that this all should be related. But the field as a whole doesn’t seem have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell all kinds of services to them, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years worth of thanks…

Developing a new approach to competence representation http://blogs.cetis.org.uk/asimong/2012/06/30/new-competence-representation/ http://blogs.cetis.org.uk/asimong/2012/06/30/new-competence-representation/#comments Sat, 30 Jun 2012 19:57:16 +0000 http://blogs.cetis.org.uk/asimong/?p=1226 InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

The reason I’ve not written any blog posts for a few weeks is that so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have come close, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now that we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role to do with specifications may be); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with the desired genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!). A rough sketch of how they might be represented follows the numbered list below.

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information, including a number, vital for defining levels.
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including: identifier, title, description, dates, etc.) from what could be called “compound properties”. Compound properties, which have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (as in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
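
As a concrete illustration of points 1 and 2, here is a minimal sketch in Python. The class names, relationship types and identifiers are purely illustrative assumptions, not the normative InLOC binding; the point is only that a relationship is reified as its own object, so it can carry a label and a number (for ordering levels).

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class LOCDefinition:
        """One assessable learning outcome / competence concept."""
        identifier: str
        title: str
        description: str = ""

    @dataclass
    class LOCRelationship:
        """A relationship reified as its own object, so it can carry
        extra information such as a label and a number (e.g. for levels)."""
        subject_id: str              # the definition the relationship belongs to
        relation_type: str           # illustrative only, e.g. "hasDefinedLevel"
        object_id: str               # the related definition
        number: Optional[float] = None
        label: str = ""

    @dataclass
    class LOCStructure:
        """The framework as a whole: definitions plus relationships,
        kept separate from any particular representation structure."""
        definitions: List[LOCDefinition] = field(default_factory=list)
        relationships: List[LOCRelationship] = field(default_factory=list)

    # Example: one unit with two levels, ordered by the relationship's number.
    unit = LOCDefinition("urn:example:unit1", "Manage information systems")
    level1 = LOCDefinition("urn:example:unit1:l1", "Level 1", "Works under supervision")
    level2 = LOCDefinition("urn:example:unit1:l2", "Level 2", "Works independently")

    framework = LOCStructure(
        definitions=[unit, level1, level2],
        relationships=[
            LOCRelationship(unit.identifier, "hasDefinedLevel", level1.identifier, number=1),
            LOCRelationship(unit.identifier, "hasDefinedLevel", level2.identifier, number=2),
        ],
    )

Because the relationships are held separately from the definitions, the same definitions could equally be serialised as a tree, a flat list, or an Atom-like feed, which is the independence described in point 3.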

The outcome is something that is slightly reminiscent both of Atom itself, and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, but as a result needs harder-to-grasp constructs like blank nodes. Being hard to grasp leads people to try different ways of doing things, possibly losing interoperability on the way. Atom and Topic Maps, in contrast, add a little more general-purpose structure, which makes quite a lot of intuitive sense in both cases, and they have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies are simply trying to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach, which we hope combines ease of understanding with the power to express just what you want to express in your “framework”. We hope you will be persuaded to use it “for real”, perhaps once we have made the improvements that you need.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

The future of Leap2A? http://blogs.cetis.org.uk/asimong/2011/11/17/the-future-of-leap2a/ http://blogs.cetis.org.uk/asimong/2011/11/17/the-future-of-leap2a/#comments Thu, 17 Nov 2011 11:20:15 +0000 http://blogs.cetis.org.uk/asimong/?p=916 We’ve done a great job with Leap2A in terms of providing a workable starting point for interoperability of e-portfolio systems and portability of learner-ownable information, but what are the next steps we (and JISC) should be taking? That’s what we need to think about.

The role of CETIS was only to co-ordinate this work. The ones to take the real credit are the vendors and developers of e-portfolio and related systems, who worked well together to decide what Leap2A should be: a way of representing all the information that is seen as sharable between actual e-portfolio tools, allowing it to be communicated between different systems.

The current limitations come from the lack of coherent practice in personal and professional development, indeed in all the areas that e-portfolio and related tools are used for. Where some institutions support activities that are simply different from those supported by another institution, there is no magic wand that can be waved over the information related to one activity to turn it into a form that supports a fundamentally different one. We need coherent practice. Not identical practice, by any means, but practice where it is as clear as possible what the building blocks of stored lifelong learning information are.

What we really need is for real users — learners — to be taking information between systems that they use or have used. We need to have motivating stories of how this opens up new possibilities; how it enables lifelong personal and professional development in ways that haven’t been open before. When learners start needing the interoperability, it will naturally be time to start looking again, and developing Leap2A to respond to the actual needs. We’ve broken the deadlock by providing a good initial basis, but now the baton passes to real practice, to take advantage of what we have created.

What will help this? Does it need convergence, not on individual development practice necessarily, but on the concepts behind it? Does it need tools to be better – and if so, what tools? Does it need changes in the ways institutions support PDP? In November, we held a meeting co-located with the annual residential seminar of the CRA, a body that has a long history of collaboration with CETIS in this area.

And how do we provide for the future of Leap2A more generally? Is it time to form a governing group of software developers who have implemented Leap2A? Is there any funding, or are there any initiatives, that can keep Leap2A fresh and increasingly relevant?

Please consider sharing your views, and contributing to the future of Leap2A.

Future of Interoperability Standards – factors contributing to reuse http://blogs.cetis.org.uk/asimong/2010/10/04/cetisfis-reuse/ http://blogs.cetis.org.uk/asimong/2010/10/04/cetisfis-reuse/#comments Mon, 04 Oct 2010 10:20:41 +0000 http://blogs.cetis.org.uk/asimong/?p=376 Reuse requires awareness: to support that, how?

As my inputs to the CETIS FIS meeting on 24th September were partly to do with extensibility and reuse, I facilitated a small but select group initially charged with talking about those topics. It included Alan Paull, a colleague very active in the XCRI and HEAR work, and two others whom I did not know previously, one (Roger) from a large computer business, and one (Neil) ploughing his own furrow. And rather than talking about the mechanics of extensibility and reuse, we found ourselves pulled back to more human issues.

A key emerging point, that perhaps deserves more attention, is that before anyone can reuse or extend another specification or standard, they have to know of it, and then know about it. How do people actually get to know about specs that they might find reuseful, or extendable? You can make a standard perfectly extendable, or reusable, but if the appropriate people do not know about it, it will not be reused or extended. What can we do about that?

It was suggested:

  • publish case studies of good practice
  • support the community of practice
  • maintain a standards map, functional and technical
  • signpost related standards

We do much of this already, of course, in CETIS, but it is still notable that many people (including Roger and Neil) only came across this meeting by accident. In HE, maybe CETIS is not unknown or unreachable, but outside, how do we reach people?

The next major point we came up with was that XML is not the best vehicle for extendability and reuse. There is a tendency for people to be lulled into writing their own XML schemas – a practice that CETIS has warned against for some time – and it is very easy to create XML schemas in a way that is hard to extend or reuse.

To address this, Roger indicated that his big company was already very interested in Semantic Web ideas. The underlying structure of RDF (not RDF/XML!) naturally lends itself to decomposing complex structures into nodes and links. The problem of extension largely disappears, but the problem of reuse remains, in that to get reuse of Semantic Web information, people have either to use the same URIs (both for subjects and properties) or to set up and use links to indicate equivalence. owl:sameAs is of course useful (see sameAs.org) but not a panacea. I have been saying for a long time that we need to be using something like skos:exactMatch and skos:closeMatch. So, perhaps we need to focus on the following (a small sketch of such equivalence links appears after the list):

  1. tools to help people put in the links for the linked data
  2. helping people define the links in the first place
  3. understanding other difficulties that seem to be present, and overcoming them
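
By way of illustration, here is a minimal sketch of the kind of equivalence links meant above, using the Python rdflib library; the two skill URIs are hypothetical, invented only for the example.

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL, SKOS

    # Hypothetical URIs for 'the same' skill as defined in two different frameworks.
    nos_skill = URIRef("http://example.org/nos/skill/information-management")
    ecf_skill = URIRef("http://example.org/e-cf/competence/A1")

    g = Graph()

    # The cautious claim: the two definitions are close, but not identical.
    g.add((nos_skill, SKOS.closeMatch, ecf_skill))

    # The strong claim, only justified when both URIs really identify the very
    # same thing (see sameAs.org for how easily this goes wrong):
    # g.add((nos_skill, OWL.sameAs, ecf_skill))

    print(g.serialize(format="turtle"))

Tools to help people assert links like these, and shared agreement on which predicates to use for them, are exactly where the effort seems to be needed.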

Another point that I drew from the discussion was that the more that any data is used, the more motivation people have for keeping it up to date. Thus, the more that information about people is consolidated, the more there is a single copy that is used many times rather than several copies each of which is used less often. We need to keep kicking to kick-start the virtuous circle of using standards to help information to be consolidated, and further motivating people to consolidate it – and that naturally means to link it, probably in a linked data kind of way.

Motivation also depends on the economics and politics. What if changing the way that things are done (inevitably, along with the improvements we are suggesting) shifts costs from one party to another? It may be that costs are cut overall, but what if many costs are cut, but a few costs, of key players, are raised? We will have to keep aware of this happening, and think how to solve it when it arises.

Perhaps at a tangent to our main topic, we noted that XCRI-CAP is not a completely satisfying whole, and needs to be extended to cope with other areas of course-related information.

And the “ecosystem” that is the world of standards and specifications needs to take into account the motivation for standardisation in the first place. Perhaps CETIS could be a bit more ambitious about the niche we carve out for ourselves?

Future of interoperability standards – small points http://blogs.cetis.org.uk/asimong/2010/09/23/future-of-interoperability-standards-small-points/ http://blogs.cetis.org.uk/asimong/2010/09/23/future-of-interoperability-standards-small-points/#comments Thu, 23 Sep 2010 13:59:38 +0000 http://blogs.cetis.org.uk/asimong/?p=373 This is a rather ephemeral statement of position-of-the-month on the future of interoperability standards, for the CETIS meeting on 24th September. I have just three things to note: two issues from helping to create the EuroLMAI CEN Workshop Agreement (moving towards an EN European Standard) and one issue from Leap2A.

1. Keep pressing for those URIs.

For EuroLMAI, we want URIs for our classes and properties, so that we can be good citizens of the Semantic Web. How hard is that? Well, first, whose domain are they going to be in? As this is a prospective CEN standard, one would have thought they would be keen to help by providing suitable URIs. Maybe they are, and maybe they will provide them, but, CEN being a European institution, things do seem to take time, and plenty of it! It looks like we will have to use a PURL server like purl.org instead, at least for the time being. That is sort of OK, but there is a time penalty for accessing things through a PURL server, so it does slow things down and has the potential to increase frustration. And it doesn’t look half as official: there is some PR cost.

2. Do keep a clear conceptual model, as it helps later on as well.

In the EuroLM work, I was always keen on, and played a large part in, getting a good conceptual model with good definitions, meant to serve as a relatively firm foundation on which to build the specifications and standards. Recent experience suggests that not only is this useful in the initial work, but it is also useful to have the conceptual model to hand when checking the detail of the spec. My own experience reflects what may be obvious, that it is easy, when revising a draft much later on, to forget why something was done in a certain way. A little doubt in the mind, and it is too easy to edit something back to what looks like a common-sense position, but actually represents something that you carefully argued against on the basis of having taken the pains to build that clear and agreed conceptual model. (The problem being that we all habitually take our own personal cognitive short cuts, which may seem like common sense, and too often these end up being represented in formal structures when they shouldn’t be.)

3. Prepare better for people building on your spec.

OK, so your new spec is really gaining ground. You’ve done a fair job of capturing requirements and representing structures that everyone can relate to. You’ve not built a monster, but something that covers more or less just what it needs to cover, coherently. So now you shouldn’t be surprised that people want to take your spec and adapt it to their needs. Perhaps they will need to add a class or two of their own, perhaps some of their own properties, perhaps some categories or vocabularies, which may overlap with the default ones you have provided with the spec. How are you going to recommend that they proceed, in each case? This is a real question that is taxing me with Leap2A at the moment, and is a learning experience, as I find I am not as well prepared as I would have liked to be. I’d like to be able to document a page on “Building on Leap2A”, which might perhaps refer to the DCMI “Singapore Framework”.
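
As a purely hypothetical illustration (not an official Leap2A recommendation; the extension namespace and element here are invented), one common pattern for an Atom-based specification is for extenders to put their extra properties in their own namespace, so that standard consumers can safely ignore them:

    import xml.etree.ElementTree as ET

    ATOM = "http://www.w3.org/2005/Atom"
    # Hypothetical namespace minted by whoever is building on the spec.
    MYEXT = "http://example.org/my-extension#"

    ET.register_namespace("", ATOM)
    ET.register_namespace("myext", MYEXT)

    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = "Reflection on work placement"
    ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:entry:123"

    # The extension property lives in its own namespace; tools that do not
    # understand it can simply skip it, while extended tools can pick it up.
    ET.SubElement(entry, f"{{{MYEXT}}}mood").text = "optimistic"

    print(ET.tostring(entry, encoding="unicode"))

The harder questions, such as how extenders should document their additions and how overlapping vocabularies should be reconciled, are exactly what a “Building on Leap2A” page would need to address.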

ISKO Linked Data event http://blogs.cetis.org.uk/asimong/2010/09/15/isko-linked-data-event/ http://blogs.cetis.org.uk/asimong/2010/09/15/isko-linked-data-event/#comments Wed, 15 Sep 2010 08:00:23 +0000 http://blogs.cetis.org.uk/asimong/?p=365 The full but mixed audience meant that this event was partly introductory, giving good revision, but going on to some interesting ideas around open linked data. What I was most looking for, leads on linked personal data, wasn’t covered, but it was useful nevertheless.

Nigel Shadbolt, the first keynote speaker, has the co-distinction with Sir Tim B-L of advising for data.gov.uk, and naturally he talked about government linked data. It is great that so much information is being exposed from government sources. I asked him about the National Occupational Standards maintained by Sector Skills Councils, coordinated by the UK Commission on Employment and Skills, and I hope he will be able to advise on leverage points. Even the first steps of the linked data ladder, giving things dereferenceable URIs, would be highly significant for skills and competences, for use in conjunction with learning outcomes, job role competence specifications, and matching outcomes of learning to skills wanted for employment. (UKCES is sponsored by several government departments, though BIS is the lead sponsor and therefore would probably be our best point of contact.)

Crown Copyright information is to have a new, more open, licence, assumed and designed to help reuse. Nigel introduced two sites, enakting.org and sameAs.org, which featured in later presentations as well (both useful and new to me).

Antoine Isaac gave a good introduction to SKOS. I asked him later about applying SKOS to skill definitions, and he seemed to agree that some specialisation of skos:broader and skos:narrower was in order. He also encouraged me to bring the topic up on the SKOS mailing list, which I will do when ready. He seemed to (and Nigel Shadbolt certainly did) imply that linked data meant using RDF/XML as the vehicle — somewhat daunting if not actually dispiriting — but at the end it became apparent that Antoine at least regarded RDFa as equivalent to RDF/XML. The more popular- and commercial-minded participants and presenters seemed to favour RDFa, which left me wondering how in touch RDF/XML proponents are. Probably not that many people are aware that RDFa is currently being developed to be more friendly to people who have started with microformats, so some existing reading on RDFa might not yet be as persuasive as it could be. However, it was good to note that no one at this conference was advocating microformats. Microdata, on the other hand, seems to be an entirely unknown quantity. (Current discussions within the RDFa community suggest a possible cross-mapping.)

Richard Wallis brought the Birmingham origins of Talis (co-sponsors of the event) into a generally informative presentation reinforcing some points already made with interesting examples. His presentation is on slideshare under his id of “rjw”.

Steve Dale told us about the local government “Knowledge Hub”, a “big, bold and ambitious” project going live in February 2011. Again it is about public sector information, though this time perhaps as much for local government workers themselves (who may not even be aware of all the information held) as for members of the public. Needless to say, none of this involves information about individual members of the public, though I did engage in some discussion around this. Seems that people still shy away from the area. My view would be that individuals have more to gain than to lose by having the infrastructure available to access easily the information held about them from various sources, particularly in the public sector.

After a pleasant and plenteous lunch, Martin Hepp introduced the GoodRelations ontology, designed for representing the semantics of e-commerce, thus enabling much faster and more accurate matches of offers and requests. He reckons that a very large proportion of GDP — perhaps over 50% — can be accounted for as involved with commercial matchmaking, which becomes quite plausible when you consider that it must include marketing, advertising, etc. Hence it is clear that improvements here can have a huge positive effect on an economy. Martin was one of the explicit advocates of RDFa, and the systems he helps to facilitate use RDFa.

Then came the well-known-to-us Andy Powell (one of the very few I knew there) telling a well-illustrated “long and winding road” story of how Dublin Core has related to RDF, in the process trying to balance the enthusiasm of the Semantic Web evangelists against the cataloguing librarians who were not at all so sure. He introduced the amusing Southampton blog post describing a new Batman antihero, “the Modeller”, which I hadn’t seen before…

Challenges that he pointed out include the fact that modelling is hard, and that models have to gain recognition and consensus within a community before becoming useful. This fits in well with my recent emphasis on the processes supporting consensus in conceptual modelling, as a precursor to standardisation.

John Goodwin went into more detail about the Ordnance Survey’s “OpenData”, exposing for free the small-scale map geographical data of the country, though keeping the large-scale detail to sell. New to me. But some fascinating challenges came up for discussion. How does one relate, and keep track of, geographical entities that may both change their names and have subtly different meanings in different contexts? “Hampshire” was used as an example (does it include the Isle of Wight, Bournemouth, or even Southampton?). Even more interestingly, he is looking at building up a vernacular gazetteer, for example to help emergency services locate places referred to by local people under the names they actually use.

The other co-sponsor of the event was punkt. netServices from Austria. Andreas Blumauer demonstrated their “PoolParty” system, which certainly looked clever enough, and includes a “corporate ontology” similar to the idea I was advocating for CETIS a while back, in connection with the topics that we have on our web site and blogs. Is it really that easy, I wondered?

The most esoteric presentation was reserved to the final spot. Bernard Vatant of Mondeca explained how there is more than the concept-centric SKOS to his ideal of linking data. Not just the Semantic Web, but the Semiotic Web… He would like to complement the representation of concepts with an explicit representation also of terms themselves. Give the terms their own URIs, make statements about them, don’t just include them as bare literals. Why exactly, I wondered, other than theoretical rigour, or the motive to include the discourse of semiotics (etc.)? If I had a few hours with him some time, I’d really like to bottom this out in conversation, partly to follow my bent towards relating to as many different conceptual starting points as I can.

The networking was valuable. As well as querying Nigel Shadbolt and Antoine Isaac, I caught up with some people I came across some time ago from Metataxis, asked some of the many BBC people there about skills and competences, and at least made one contact interested in linking personal data. (Colleagues are of course very welcome to ask me more while the memories are fresh.)
