Simon Grant » semantic technologies – Cetis blog (http://blogs.cetis.org.uk/asimong)

The growing need for open frameworks of learning outcomes
http://blogs.cetis.org.uk/asimong/2014/03/12/open-frameworks-of-learning-outcomes/
Wed, 12 Mar 2014

(A contribution to Open Education Week — see note at end.)

(24th in my logic of competence series.)

What is the need?

Imagine what could happen if we had really good sets of usable open learning outcomes, across academic subjects, occupations and professions. It would be easy to express, and then trace, the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.

Some examples … In academic study, say around a more complex topic from calculus, it would be made clear what other mathematics needs to be mastered first (see this recent example, which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, and so on.
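
To make that concrete, here is a minimal sketch of what one such learning outcome definition might look like, expressed as JSON-LD (the format discussed in the JSON-LD post below). Every URI, property name and wording here is invented purely for illustration, and the SKOS mapping is just one plausible choice; none of this is taken from a published framework or from InLOC itself.

    {
      "@context": {
        "skos": "http://www.w3.org/2004/02/skos/core#",
        "title": "skos:prefLabel",
        "hasPart": { "@id": "skos:narrower", "@type": "@id" }
      },
      "@id": "http://example.org/framework/dementia-care/competent-care",
      "title": "Provide competent care for people living with dementia",
      "hasPart": [
        "http://example.org/framework/dementia-care/communication",
        "http://example.org/framework/dementia-care/person-centred-planning"
      ]
    }

Putting the "@id" URL into a browser would, ideally, return human- and machine-readable information about that outcome, including the identifiers of its component outcomes.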

It won’t surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.

So what could then happen?

Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person’s learning record, CV or electronic portfolio could have each individual entry referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which learning outcomes candidates need to have achieved to have a chance of being selected.
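
As a sketch of what such tagging could look like (the vocabulary term and all URIs here are invented for illustration, not drawn from any actual specification), a course description might simply point at the identifiers of its intended outcomes:

    {
      "@context": {
        "intendedLearningOutcome": {
          "@id": "http://example.org/vocab/intendedLearningOutcome",
          "@type": "@id"
        }
      },
      "@id": "http://example.org/courses/dementia-care-essentials",
      "intendedLearningOutcome": [
        "http://example.org/framework/dementia-care/communication",
        "http://example.org/framework/dementia-care/person-centred-planning"
      ]
    }

A job advert, an OER description or a CV entry could use exactly the same pattern, pointing at the same outcome URIs, and that shared reference is what would make them comparable.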

All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.

So why doesn’t that happen already? To answer that, we need to look at what is actually out there, what it doesn’t offer, and what can be done about it.

What is out there?

Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years, Sector Skills Councils (SSCs) and other similar bodies have been producing National Occupational Standards (NOSs), which provided the basis for all National Vocational Qualifications (NVQs). In theory at least, this meant that the industry representatives in the SSCs made sure that the needs of industry were reflected in the assessment criteria for awarding NVQs, which are generally regarded as useful and prized qualifications, at least in occupations that are not classed as “professional”.

NOSs have always been published openly, and they are still available to be searched and downloaded at the UKCES’s NOS site. The site provides a search page. As one of my current interests is corporate governance, I put that phrase into the search box, which gave several results, including a NOS called CFABAI131 Support corporate decision-making (a PDF document). It’s a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which are seen as what is needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.

In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC publishes Tomorrow’s Doctors, introduced as follows:

The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

Tomorrow’s Doctors covers the outline of the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC’s list of requirements for someone to be recognised as a competent doctor.

The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.

Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course, and ask for it to be accredited. It doesn’t have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes, rather than to the needs of employers, or learners themselves. In FE, the standardisation influence of NOSs has been weakened and diluted.

In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.

What do these not offer?

The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly by processing paper-style documentation into the ASN web site, which includes allocating ASN URIs. It hasn’t been used much by authorities to construct their own learning outcome frameworks, with URIs belonging to their own domains, though in principle it could be.

Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.

As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.

Thus there is little effective transparency and no easy comparison: it remains practically impossible to do the useful things envisaged above.

What is needed?

What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.

And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There’s a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop, Learning Technologies, and funded by the DG Enterprise‘s ICT Standardization Work Programme.

InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competences, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.


The third annual Open Education Week takes place from 10 to 15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

Cetis staff are supporting Open Education Week by publishing a series of blog posts about open education activities. Cetis have had long-standing involvement in open education and have published a range of papers which cover topics such as OERs (Open Educational Resources) and MOOCs (Massive Open Online Courses).

The Cetis blog provides access to the posts which describe Cetis activities concerned with a range of open education activities.

JSON-LD: a useful interoperability binding
http://blogs.cetis.org.uk/asimong/2013/12/13/json-ld-binding/
Fri, 13 Dec 2013

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the official JSON-LD documentation.

Let’s start with XML, as it is probably more familiar to most non-programmers, due to its similarities with HTML. XML offers two kinds of structure: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name-value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name becoming the name of a contained element, and the attribute value the text contained within that new element.
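
As a small illustration, with made-up element names, here is the same data expressed first with attributes and then with contained elements; both are perfectly legal XML:

    <!-- attribute style -->
    <dog name="Fido" colour="brown"/>

    <!-- element style: each attribute becomes a contained element -->
    <dog>
      <name>Fido</name>
      <colour>brown</colour>
    </dog>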

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals that represent things (the nouns, objects, entities, etc.); labelled arrows that represent the property names (or “predicates”); and rectangles that represent literal values.

Given the ambiguity above, it’s not surprising that representing linked data using XML can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that, as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of those things.
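
To give a flavour of striping without reproducing the spec’s own example, here is a generic sketch using an invented vocabulary; descending the tree, the elements alternate between things and the names of their properties.

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:ex="http://example.org/terms/">
      <rdf:Description rdf:about="http://example.org/dog/fido">       <!-- a thing -->
        <ex:owner>                                                    <!-- a property name -->
          <rdf:Description rdf:about="http://example.org/person/ann"> <!-- another thing -->
            <ex:name>Ann</ex:name>                                    <!-- a property with a literal value -->
          </rdf:Description>
        </ex:owner>
      </rdf:Description>
    </rdf:RDF>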

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

[Diagram: the JSON object]

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.
  2. property names are represented by the strings.

In essence, there you have it!
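
So even a plain JSON object (the names and values here are invented) can be read as a description of one thing with two properties:

    {
      "name": "Fido",
      "colour": "brown"
    }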

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
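
Putting those pieces together, a small JSON-LD document might look like the following sketch. The vocabulary URIs are invented; the point is the shape: an “@context” mapping short names to URIs and declaring types, an “@id” for the thing being described, a typed literal value, and an array for a property with multiple values.

    {
      "@context": {
        "ex": "http://example.org/terms/",
        "colour": "ex:colour",
        "dateOfBirth": { "@id": "ex:dateOfBirth",
                         "@type": "http://www.w3.org/2001/XMLSchema#date" },
        "knows": { "@id": "ex:knows", "@type": "@id" }
      },
      "@id": "http://example.org/dog/fido",
      "colour": "brown",
      "dateOfBirth": "2009-06-01",
      "knows": [
        "http://example.org/dog/rex",
        "http://example.org/person/ann"
      ]
    }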

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one higher-level competence can be decomposed into several lower-level competences.

InLOC was planned from the outset to work as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties. Thus it is clear that there is a mapping to a linked-data-style model.

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of just giving one text value to a property, the value of any human-language property is an object, within which the keys are the two-letter language codes, and the values are the property value in that language.
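
A sketch of the pattern (the property name “title” and the URIs are illustrative, not necessarily InLOC’s actual terms): the context declares the term as a language container, and the value becomes an object keyed by language code.

    {
      "@context": {
        "title": {
          "@id": "http://example.org/terms/title",
          "@container": "@language"
        }
      },
      "@id": "http://example.org/loc/communication",
      "title": {
        "en": "Communicate effectively with people living with dementia",
        "fr": "Communiquer efficacement avec les personnes atteintes de démence"
      }
    }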

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. JSON-LD is now (January 2014) a W3C Recommendation.

linked portfolios?
http://blogs.cetis.org.uk/asimong/2010/03/23/linked-portfolios/
Tue, 23 Mar 2010

There’s been continued development of interest within CETIS around the issue of linked data. Most people seem to start from the assumption that linked data is public data, and of course that isn’t going to work in e-portfolio land. (See e.g. this W3C guide under construction.) I see it as a creative challenge for CETIS to get hold of the issue of linking personal data and the questions it raises, and perhaps to lead on to initial guidance for others implementing systems. This is perhaps needed to make progress with Leap2R.

Wilbert Kraan, a CETIS Semantic Web authority, was in the Bolton office today, and a brief chat with him opened up some of these issues for me. We could approach linked personal data in at least two ways:

  1. named graphs with permissions attached;
  2. security policies for particular URIs.

The named graph approach would seem to fit well with the way that e-portfolio systems make information available. Mahara has “views”, PebblePad has “webfolios”, which are somewhat similar in structure. They are both the means for presenting subsets of one’s information to particular audiences. So, if an e-portfolio had a SPARQL query facility attached, it would have to give no information by default, but only information derived from the graphs specifically named in the query. It is, I am assured, quite possible to restrict permission to access particular named graphs in a way very similar to restricting access to any web document.
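
As a sketch of what such a query might look like (the graph name, resource URI and endpoint behaviour are all hypothetical, just to illustrate scoping a query to one named graph that the querying agent is permitted to read):

    # Only the explicitly named graph is consulted; nothing else is exposed by default.
    SELECT ?property ?value
    WHERE {
      GRAPH <http://portfolio.example.org/people/ann/views/job-application> {
        <http://portfolio.example.org/people/ann> ?property ?value .
      }
    }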

But does that give too little to those who want to write really interesting SPARQL queries involving personal information? Or would the necessary permission processes be too cumbersome? What if an individual could create permissions, or an access regime, for individual bits of his or her information? That might be more in keeping with the spirit of the Semantic Web. In which case, perhaps we could envisage two strengths of control:

  • filtering triples output from a SPARQL query to ensure that they only contained restricted URIs if the querying agent had permission to have those URIs;
  • filtering the inferencing process, so that triples containing restricted URIs were only used in inference if the querying agent had permission to use them.

We would need to look into what the effects of these might be. Maybe we would conclude that the latter was an appropriate way of keeping sensitive data really private, while the former might be OK for personal information that is not sensitive? That is no more than a guess. If this approach proved feasible, it might provide not only a principled way of permitting the use of particular personal information, but also a really effective approach to keeping data private while still allowing it to be linked where that is allowed.

The point here is just to open up the agenda. If we are to take the future of linked data and the Semantic Web seriously, in any case we need to think through what we do to link personal information. Just assuming that no one will want to link personal data is very unlikely to work in the long run.
