Learning about learning about …

I was recently reading a short CIPD blog post by Peter Honey (of learning styles fame), in which he writes, saving the most important item in his list for last:

Learning to learn – the ultimate life-skill

You can turn learning in on itself and use your learning skills to help you learn how to become an increasingly effective learner. Learning to learn is the key to enhancing all the above.

It’s all excellent stuff, and very central to the consideration of learning technology, particularly that dedicated to supporting reflection.

Then I started thinking further (sorry, just can’t help it…)

If learning to learn is the ultimate life skill, then surely the best that educators can do is to help people learn to learn.

But learning to learn is not altogether straightforward. There are many pitfalls that interfere with effective learning, and which may not respond to pure unaided will-power or effort. Thus, to help people learn to learn, we (as educators) have to know about those pitfalls, those obstacles, those hazards that stand in the way of learning generally, and we have to be able somehow at least to guide the learners we want to help around those hazards.

There are two approaches we could take here. First, we could try to diagnose what our learners are trying to learn, what is preventing them, and maybe give them the knowledge they are lacking. That’s a bit like a physician prescribing some cure — not just medicine, perhaps, but a cure that involves a change of behaviour. Or it’s a bit like seeing people hungry, and feeding them — hungry for knowledge, perhaps? If we’re talking about knowledge here, of course, there is a next stage: helping people to find the knowledge that they need, rather than giving it to them directly. I put that in the same category, as it is not so very different.

There is a second, qualitatively different approach. We could help our learners learn about their own learning. We could guide them — and this is a highly reflective task — to diagnose their own obstacles to learning. This is not simply about not knowing where to look for what they want to know; it is about knowing more about themselves, and what it may be within them that interferes with their learning processes — their will to learn, their resolve (Peter Honey’s article starts with New Year’s resolutions) or, even, their blind spots. To pursue the analogy, that is like a physician giving people the tools to maintain their own health, or, proverbially, rather than giving a person a fish, teaching them to fish.

Taking this further starts to relate closely in my mind to Kelly’s Personal Construct Psychology; and also perhaps to Kuhn’s ideas about the “Structure of Scientific Revolutions”. Within a particular world view, one’s learning is limited by that world view. When the boundaries of that learning are being pushed, it is time to abandon the old skin and take up a new and more expansive one; or just a different one, more suited to the learning that one wants. But it is hard — painful even (Kelly recognised that clearly) and the scientific establishment resists revolutions.

In the literature and on the web, there is a concept called “triple loop learning”; though this doesn’t seem to be quite the same, it appears to be going in the same direction, even if not as far.

What, then, is our task as would-be educators, guides, coaches, mentors? Can we get beyond the practices analogous to Freudian psychoanalysis, which are all too prone to set up a dependency? How can we set our learners truly free?

This may sound strange, but I would say we (as educators, etc.) need to study, and learn about, learning about learning. We need to understand not just about particular obstacles to learning, and how to get around those; but also about how people learn about their own inner obstacles, and how they can successfully grow around them.

As part of this learning, we do indeed need to understand how, in any given situation, a person’s world view is likely to relate to what they can learn in that situation; but further, we need to understand how it might be possible to help people recognise that in themselves. You think not? You think that we just have to let people be, to find their own way? It may be, indeed, that there is nothing effective that we are wise enough to know how to do, for a particular person, in a particular situation. And, naturally, it may be that even if we offer some deep insight, that we know someone is ready to receive, they may choose not to receive it. That is always a possibility that we must indeed respect.

And there cannot be a magic formula, an infallible practice, a sure method, a way of forcibly imbuing people with that deep wisdom. Of course there isn’t — we know that. But at least we can strive in our own ways to live with the attitude of doing whatever we can, firstly, not to stand in the way of whatever light may dawn on others, but also, if we are entrusted with the opportunity, to channel or reflect some of that light in a direction that we hope might bear fruit.

Again, it is not hard to connect this to systems thinking and cybernetics. Beyond the law of requisite variety — something about controlling systems needing to be at least as complex as the systems they are controlling — the corresponding principle is practically commonplace: to help people learn something, we have to have learned more than we expect them to learn. In this case, to help people learn about their own learning, we have to have learned about learning about learning.

People are all complex. It is sadly common to fail to take into account the richness and complexity of the people we have dealings with. To understand the issues and challenges people might have with learning about their own learning, we have to really stretch ourselves, to attend to the Other, to listen and to hear acutely enough with all our senses, to understand enough about them, where they come from, where they are, to have an idea about what may either stand in the way, or enable, their learning about their learning. Maybe love is the best motivator. But we also need to learn.

Right then, back on the CETIS earth (which is now that elegant blue-grey place…) I just have to ask, how can technology help? E-portfolio technology has over the years taken a few small steps towards supporting reflection, and indeed communication between learners, and between learners and tutors, mentors, educators. I think there is something we can do, but what it is, I am not so sure…

Learning about learning about learning — let’s talk about it!

Privacy? What about self-disclosure?

When we talk about privacy, we are often talking about the right to privacy. That is something like the right to limit or constrain disclosure of information relating to oneself. I’ve often been puzzled by the concept of privacy, and I think that it helps to think first about self-disclosure.

Self-disclosure is something that we would probably all like to control. There’s a lot of literature on self-disclosure in many settings, and it is clearly recognised as important in several ways. I like the concept of self-disclosure, because it is a positive concept, in contrast to the rather negative idea of privacy. Privacy is, as its name suggests, a “privative” concept. Though definitions vary greatly, one common factor is that privacy tends to be defined in terms of the absence of something undesirable, rather than directly as the presence of something valuable.

Before I go on, let me explain my particular interest in privacy and self-disclosure – though everyone potentially has a strong legitimate interest in them. Privacy is a key aspect of e-portfolio technology. People are only happy with writing down reflections on personal matters, including personal development, if they can be assured that the information will only be shared with people they want it to be shared with. It is easy to understand this in terms of mistakes, for example. To learn from one’s mistakes, one needs to be aware of them, and it may help to be able to discuss mistakes with other trusted people. But we often naturally have a sense of shame about mistakes, and unless understanding and compassion can be assured, we reasonably worry that the knowledge of our mistakes may negatively influence other people’s perception of our value as people. So it is vital that e-portfolio technology allows us to record reflections on such sensitive matters privately, and share them only with carefully selected people, if anyone at all.

This central concept for e-portfolios, reflection, links straight back to self-disclosure and self-understanding, and indeed identity. Developing ourselves, our identity, qualities and values as well as our knowledge and skill, depends in part on reflection giving us a realistic appreciation of where we are now, and who we are now.

Let me make the perhaps obvious point that most of us want to be accepted and valued as we are, and ideally understood positively; and that this can even be a precondition of our personal growth and development. Privacy, being a negative concept, doesn’t directly help with that. What is vital to acceptance and understanding is appropriate self-disclosure, with the right people, at the right time and in the right context. Even in a world where there was no privacy, this would still be a challenge. How would we gain the attention of people we trust, to notice what we are, what we do, what we mean, and to help us make sense of that?

In our society, commercial interests see, in more and more detail, some selected aspects of what we do. Our information browsing behaviour is noted by Google, and helps to shape what we get in reply to our searches, as well as the adverts that are served up. On Amazon, our shopping behaviour is pooled, enabling us to be told what others in our position might have bought or looked at. The result of this kind of information gathering is that we are “understood”, but only superficially, in the dimensions that relate to what we might pay for. If this helps in our development, it is only in superficial ways. That is a problem.

A more sinister aspect is the one where much of the energy in the privacy discussion is used up. The patterns of our information searches, added to the records of who we communicate with, and perhaps key words in the content of our communications, alert those in power to the possibility that we may pose a threat to the status quo, or to those who have a vested interest in maintaining that power. We have all noticed the trend of growing inequality in our society over the last half century.

But in focusing on these issues, genuine and worrying though they are, what is lost from focus is the rich subtlety of active self-disclosure. It is as if we are so worried by information about ourselves falling into undesirable hands that we forget about the value of knowledge of ourselves being shared with, and entrusted to, those who can really validate us, and who can help us to understand who we are and where we might want to go.

So, I say, let’s turn the spotlight instead onto how technology can help make self-disclosure not only easier, but directed to the right people. This could start along the lines of finding trustable people, and verifying their trustworthiness. Rather than these trustable people being establishment authorities, how about finding peers, or peer groups, where mutual trust can develop? Given a suitable peer group, it is easy to envisage tools helping with an ordered process of mutual self-disclosure, and increasing trust. Yes, privacy comes into this, because an ordered process of self-disclosure will avoid untimely and inappropriate disclosures. But what do we mean by appropriate? Beyond reciprocity, which is pretty much universally acknowledged as an essential part in friendship and other good relationships, I’d say that what is appropriate is a matter for negotiation, rather than assumption. So, there is a role for tools to help in the negotiation of what is appropriate. Tools could help expose assumptions, so that they can be questioned and laid open to change.

Let’s make and use tools like this to retake control, or perhaps take control for the first time, of the rules and processes of self-disclosure, so that we can genuinely improve mutual recognition, acceptance and understanding, and provide a more powerful and fertile ground for personal and collective development.

Even-handed peer-to-peer self-disclosure will be a stimulus to move towards more sharing, equality, co-operation, collaboration, and a better society.

JSON-LD: a useful interoperability binding

Over the last few months I’ve been exploring and detailing a provisional binding of the InLOC spec to JSON-LD (spec; site). My conclusion is that JSON is better matched to linked data than XML is, if you understand how to structure JSON in the JSON-LD way. Here are my reflections, which I hope add something to the JSON-LD official documentation.

Let’s start with XML, as it is less unfamiliar to most non-programmers, due to its similarities with HTML. XML offers two kinds of structures: elements and attributes. Elements are the pieces of XML that are bounded by start and end tags (or are simply empty tags). They may nest inside other elements. Attributes are name-value pairs that exist only within element start tags. The distinction is useful for marking up text documents, as the tags, along with their attributes, are added to the underlying text, without altering it. But for data, the distinction is less helpful. In fact, some XML specifications use almost no attributes. Generally, if you are using XML to represent data, you can change attributes into elements, with the attribute name becoming the name of a contained element, and the attribute value becoming text contained within that new element.

Confused? You’d be in good company. Many people have complained about this aspect of XML. It gives you more than enough “rope to hang yourself with”.

Now, if you’re writing a specification that might be even remotely relevant to the world of linked data, it is really important that you write your specification in a way that clearly distinguishes between the names of things – objects, entities, etc. – and the names of their properties, attributes, etc. It’s a bit like, in natural language, distinguishing nouns from adjectives. “Dog” is a good noun, “brown” is a good adjective, and we want to be able to express facts such as “this dog is of the colour brown”. The word “colour” is the name of the property; the word “brown” is the value of the property.

The bit of linked data that is really easy to visualise and grasp is its graphical representation. In a linked data graph, customarily, you have ovals that represent things (the nouns, objects, entities, etc.); labelled arrows that represent the property names (or “predicates”); and rectangles that represent literal values.

Given the confusion above, it’s not surprising that when you want to represent linked data using XML, it can be particularly confusing. Take a look at this bit of the RDF/XML spec. You can see the node and arc diagram, and the “striped” XML that is needed to represent it. “Striping” means that as you work your way up or down the document tree, you encounter elements that represent alternately (a) things and (b) the names of properties of these things.

Give up? So do most people.

But wait. Compared to RDF/XML, representing linked data in JSON-LD is a doddle! How so?

Basics of how JSON-LD works

Well, look at the remarkably simple JSON page to start with. There you see it: the most important JSON structure is the “object”, which is “an unordered set of name/value pairs”. Don’t worry about arrays for now. Just note that a value can also be an object, so that objects can nest inside each other.

[Figure: the JSON object diagram]

To map this onto linked data, just look carefully at the diagram, and figure that…

  1. a JSON object represents a thing, object, entity, etc.;
  2. the names in its name/value pairs represent the names of that thing’s properties.

In essence, there you have it!
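
For instance, here is a minimal sketch, with made-up names and values: one JSON object, representing a person, with a second object nested inside it as a property value.

    {
      "name": "Isaac Newton",
      "knows": {
        "name": "Edmond Halley"
      }
    }

Each pair of curly braces marks a thing; “name” and “knows” are the property names; and the nested object is a second thing, linked to the first by the “knows” property.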

But in practice, there is a bit more to the formal RDF view of linked data.

  • Objects in RDF have an associated unique URI, which is what allows the linking. (No need to confuse things with blank nodes right now.)
  • To do this in JSON, objects must have a special name/value pair. JSON-LD uses the name “@id” as the special name, and its value must be the URI of the object.
  • Predicates – the names of properties – are represented in RDF by URIs as well.
  • To keep JSON-LD readable, the names stay as short and meaningful labels, but they need to be mapped to URIs.
  • If a property value is a literal, it stays as a plain value, and isn’t an object in its own right.
  • In RDF, literal values can have a data type. JSON-LD allows for this, too.

JSON-LD manages these tricks by introducing a section called the “context”. It is in the “context” that the JSON names are mapped to URIs. Here also, it is possible to associate data types with each property, so that values are interpreted in the way intended.
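
As a sketch of how that works (the example.org URIs are placeholders, not any real vocabulary), the “@context” below maps the short names “name” and “born” to URIs, and declares that values of “born” are dates:

    {
      "@context": {
        "name": "http://example.org/terms/name",
        "born": {
          "@id": "http://example.org/terms/born",
          "@type": "http://www.w3.org/2001/XMLSchema#date"
        }
      },
      "@id": "http://example.org/people/newton",
      "name": "Isaac Newton",
      "born": "1643-01-04"
    }

The body of the document stays short and readable, while the context quietly supplies the linked data semantics.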

What of JSON arrays, then? In JSON-LD, the JSON array is used specifically to give multiple values of the same property. Essentially, that’s all. So each property name, for a given object, is only used once.
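
So, to give several values of one property to the same (made-up) object, the value becomes an array, while the property name still appears just once:

    {
      "@id": "http://example.org/people/newton",
      "interest": [ "optics", "gravitation", "alchemy" ]
    }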

Applying this to InLOC

At this point, it is probably getting hard to hold in one’s head, so take a look at the InLOC JSON-LD binding, where all these issues are illustrated.

InLOC is a specification designed for the representation of structures of learning outcomes, competence definitions, and similar kinds of thing. Using InLOC, authorities owning what are often called “frameworks” or (confusingly) “standards” can express their structures in a form that is completely explicit and machine processable, without the common reliance on print-style layout to convey the relationships between the different concepts. One of the vital characteristics of such structures is that one, higher-level competence can be decomposed in terms of several, lower-level competences.

InLOC was planned from the outset to be usable as linked data. Following many good examples, including the revered Dublin Core, the InLOC information model is expressed in terms of classes and properties. Thus it was clear from the start that there is a mapping to a linked data style of model.
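
As a purely illustrative sketch of such a decomposition (the property names here are invented for the example, not the normative InLOC terms, and the context that would map them to URIs is omitted), a higher-level definition might refer to its lower-level parts like this:

    {
      "@id": "http://example.org/loc/communication",
      "title": "Communication",
      "hasPart": [
        { "@id": "http://example.org/loc/listening" },
        { "@id": "http://example.org/loc/presenting" }
      ]
    }

Because every definition has its own URI, the parts can just as well live in other people’s structures, which is the whole point of linked data.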

To be fully multilingual, InLOC also takes advantage of the “language map” feature of JSON-LD. Instead of just giving one text value to a property, the value of any human-language property is an object, within which the keys are the two-letter language codes, and the values are the property value in that language.
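
In JSON-LD terms, that looks like the following minimal sketch (again with a placeholder vocabulary URI): the property is declared in the context as a language map, with “@container” set to “@language”, and its value is then keyed by language code.

    {
      "@context": {
        "title": {
          "@id": "http://example.org/terms/title",
          "@container": "@language"
        }
      },
      "title": {
        "en": "Communication",
        "fr": "Communication",
        "de": "Kommunikation"
      }
    }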

To see more, please take a look at the JSON-LD spec alongside the InLOC JSON-LD binding. And you are most welcome to a personal explanation if you get in touch with me.

To take home…

If you want to use JSON-LD, ensure that:

  • anything in your model that looks like a predicate is represented as a name in JSON object name/value pairs;
  • anything in your model that looks like a value is represented as the value of a JSON name/value pair;
  • you only use each property name once – if there are multiple values of that property, use a JSON array;
  • any entities, objects, things, or whatever you call them, that have properties, are represented as JSON objects;
  • and then, following the spec, carefully craft the JSON-LD context, to map the names onto URIs, and to specify any data types.

Try it and see. If you follow me, I think it will make sense – more sense than XML. It’s now (January 2014) a W3C Recommendation.
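
To round off, here is one small example that follows all five points at once, with placeholder URIs, echoing the brown dog from earlier:

    {
      "@context": {
        "name": "http://example.org/terms/name",
        "colour": "http://example.org/terms/colour",
        "owns": {
          "@id": "http://example.org/terms/owns",
          "@type": "@id"
        }
      },
      "@id": "http://example.org/people/jo",
      "name": "Jo",
      "owns": [
        {
          "@id": "http://example.org/dogs/rex",
          "name": "Rex",
          "colour": "brown"
        }
      ]
    }

The predicates (“name”, “colour”, “owns”) are names in name/value pairs; the literal “brown” stays a plain value; “owns” appears once, with an array value; the dog is a JSON object with its own “@id”; and the context maps all the names onto URIs.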

Educational Technology Standardization in Europe

The current situation in Europe regarding the whole process of standardization in the area of ICT for Learning, Education and Training (LET) is up in the air just now, because of a conflict between how we, the participants, see it best proceeding, and how the formal de jure standards bodies are reinforcing their established set-up.

My dealings with European learning technology standardization colleagues in the last few years have probably been at least as extensive as those of any other single CETIS staff member. Because of my work on European Learner Mobility and InLOC, since 2009 I have attended most of the meetings of the Workshop Learning Technologies (which also has an official page), and I have also been involved centrally in the eCOTOOL European project, and to a lesser extent in ICOPER.

So what is going on now — what is of concern?

In CETIS, we share some common views on how the standardization process should be taken forward. During the course of specification development, it is important to involve the people implementing the specifications, and not just people who theorise about them. In the case of educational technology, the companies most likely to use the interoperability specifications we are interested in tend to be small and agile. They are helped by specifications that are freely available, and available as soon as they are agreed. Having to pay for them is an unwelcome obstacle. They need to be able to implement the specifications without any constraints or legal worries.

However, over the course of this last year, CEN has reaffirmed long-standing positions which don’t match our requirements. The issue centres partly around perceived business models. The official standards bodies make money from selling copies of standards documents. In a paper-based, slow-moving world, one can see some sense in this. Documents may have been costly to produce, and businesses relying on a standard wanted a definitive copy. We see similar issues and arguments around academic publishing. In both fields, it is clear that the game is continuing to change, but hasn’t reached a new stable state yet. What we are saying is that, in our area, this traditional business model is never likely to be justified, and it’s difficult to imagine the revenues materialising.

The European learning technology standardization community have been lucky in past years, because the official standards bodies have tolerated activity which is not profitable for them. Now — we can only guess, because of financial belts being tightened — CEN at least is not going to continue tolerating this. Their position is set out in their freely available Guides.

Guide 10, the “Guidelines for the distribution and sales of CEN-CENELEC publications”, states:

Members shall exercise these rights in accordance with the provisions of this Guide and in a way that protects the integrity and value of the Publications, safeguards the interests of other Members and recognizes the value of the intellectual property that they contain and the costs to the CEN-CENELEC system of its development and maintenance.
In particular, Members shall not make Publications, including national implementations and definitive language versions, available free of charge to general users without the specific approval of the Administrative Boards of CEN and/or CENELEC.

And, just in case anyone was thinking of circumventing official sales by distributing early or draft versions, this is expressly forbidden.

6.1.1 Working drafts and committee drafts
The distribution of working drafts, committee drafts and other proceedings of CEN-CENELEC technical bodies and Working Groups is generally restricted to the participants and observers in those technical bodies and Working Groups and they shall not otherwise be distributed.

So there it is: specification development under the auspices of CEN is not allowed to be open, despite our view that openness works best in any case, and that it is genuinely needed in our area.

As if this were not difficult enough, the problems extend beyond the copyright of standards documentation. After a standard is agreed, it has to be “implemented”, of course. What kind of use is permitted, and under what terms? A fully open standard will allow any kind of use without royalty or any other kind of restriction, and this is particularly relevant to developers of free and open source software. One specification can build on another, and this can get very tricky if there are conditions attached to the implementation of specifications. I’ve come across cases where a standardization body won’t reuse a specification because it is not clear that it is licensed freely enough.

So what is the CEN position on this? Guide 8 (December 2011) is the “CEN-CENELEC Guidelines for Implementation of the Common IPR Policy on Patent”. Guide 8 does say that the use of official standards is to be free of royalties, but at the end of Clause 4.1 one senses a slight hesitation:

The words “free of charge” in the Declaration Form do not mean that the patent holder is waiving all of its rights with respect to the essential patent. Rather, it refers to the issue of monetary compensation; i.e. that the patent holder will not seek any monetary compensation as part of the licensing arrangement (whether such compensation is called a royalty, a one-time licensing fee, etc.). However, while the patent holder in this situation is committing to not charging any monetary amount, the patent holder is still entitled to require that the implementer of the above document sign a licence agreement that contains other reasonable terms and conditions such as those relating to governing law, field of use, reciprocity, warranties, etc.

What does this mean in practice? It seems unclear in a way that could cause considerable concern. And when thinking of potential cumulative effects, Definition 2.9 defines “reciprocity” thus:

as used herein, requirement for the patent holder to license any prospective licensee only if such prospective licensee will commit to license, where applicable, its essential patent(s) or essential patent claim(s) for implementation of the same above document free of charge or under reasonable terms and conditions

Does that mean that the implementer of a standard can impose any terms and conditions that are arguably reasonable on its users, including payments? Could this be used to change the terms of a derivative specification? We — our educational technology community — really don’t need this kind of unclarity and uncertainty. Why not have just a plain, open licence?

What seems to be happening here is the opposite of the arrangement known as “copyleft”. While under “copyleft” any derivative work has to be similarly licensed, under the CEN terms it seems that patent holders can impose conditions, and can allow companies implementing their patents to impose further conditions or charge any reasonable fees. Perhaps CEN recognises that they can’t expect everyone to give them all of the cake? To stretch that metaphor a bit, maybe we are guessing that much of the educational technology community — the open section that we believe is particularly important — has no appetite for that kind of cake.

The CEN Workshop on Learning Technologies has suspended its own proceedings for reasons such as the above, and several of us are trying to think of how to go forward. It seems that it will be fruitless to try to continue under a strict application of the existing rules. The situation is difficult.

Perhaps we need a different approach to consensus process governance. Yes, that reads “consensus process governance”, a short phrase, apparently never used before, but packed full of interesting questions. If we have heavyweight bodies sitting on top of standardization, it is no wonder that people have to pay (in whatever way) for those staff, those premises, that bureaucracy.

It is becoming commonplace to talk of the “1%” extracting more and more resource from us, the “99%”. (See e.g. videos like this one.) And naturally any establishment tends to seek to preserve itself and feather its own nest. But the real risk is that our community is left out, progressively deprived of sustenance and air, with the strongest vested interests growing fatter, continually trying to tighten their grip on effective control.

So, it is all the more important to find a way forward that is genuinely collaborative, in keeping with a proper consensus, fair to all including those with less resource, here in standardization as in other places in society. I am personally up for collaborating with others to find a better way forward, and hope that we will make progress together under the CETIS umbrella — or indeed any other convenient umbrella that can be opened.

InLOC and OpenBadges: a reprise

(23rd in my logic of competence series.)

InLOC is well designed to provide the conceptual “glue” or “thread” for holding together structures and planned pathways of achievement, which can be represented by Mozilla OpenBadges.

Since my last post — the last of the previous academic year, also about OpenBadges and InLOC — I have been invited to talk at OBSEG – the Open Badges in Scottish Education Group. This is a great opportunity, because it involves engaging with a community with real aspirations for using Open Badges. One of the things that interests people in OBSEG is setting up combinations of lesser badges, or pathways for several lesser badges to build up to greater badges. I imagine that if badges are set up in this way, the lesser badges are likely to become the stepping stones along the pathway, while it is the greater badge that is likely to be of direct interest to, e.g., employers.

All this is right in the main stream of what InLOC addresses. Remember that, using InLOC, one can set out and publish a structure or framework of learning outcomes, competenc(i)es, etc., (called “LOC definitions”) each one with its own URL (or IRI, to be technically correct), with all the relationships between them set out clearly (as part of the “LOC structure”).

The way in which these Scottish colleagues have been thinking of their badges brings home another key point to put the use of InLOC into perspective. As with so many certificates, awards, qualifications etc., part of the achievement is completion in compliance with the constraints or conditions set out. These are likely not to be learning outcomes or competences in their own right.

The simplest of these non-learning-outcome criteria could be attendance. Attendance, you might say, stands in for some kind of competence; but the kind of basic timekeeping and personal organisation ability that is evidenced by attendance is very common in many activities, so is unlikely to be significant in the context of a Badge awarded for something else. Other such criteria could be grouped together under “ability to follow instructions” or something similar. A different kind of criterion could be the kinds of character “traits” that are not expected to be learned. A person could be expected to be cheerful; respectful; tall; good-looking; or a host of other things not directly under their control, and either difficult or impossible to learn. These non-learning-outcome aspects of criteria are not what InLOC is principally designed for.

Also, over the summer, Mozilla’s Web Literacy Standard (“WebLitStd”) has been progressing towards version 1.0, to be featured in the upcoming MozFest in London. I have been tracking this with the help of Doug Belshaw, who after great success as an Open Badges evangelist has been focusing on the WebLitStd as its main protagonist. I’m hoping, ideally by MozFest time, to have a version of the WebLitStd in InLOC, and this brings to the fore another very pragmatic question about using InLOC as a representation.

Many posts ago, I was drawing out the distinction between LOC (that is, Learning Outcome or Competence) definitions that are, on the one hand, “binary”, and on the other hand, “rankable”. This is written up in the InLOC documentation. “Binary” ones are the ones for which you can say, without further ado, that someone has achieved this learning outcome, or not yet achieved it. “Rankable” ones are ones where you can put people in order of their ability or competence, but there is no single set of criteria distinguishing two categories that one could call “achieved” and “not yet achieved”.

In the WebLitStd, it is probably fair to say that none of the “competencies” are binary in these terms. One could perhaps characterise them as rankable, though perhaps not fully, in that there may be two people with different configurations of that competency, as a result perhaps of different experiences, each of whom was better in some ways than the other, and conversely less good in other ways. It may well be similar in some of the Scottish work, or indeed in many other badge criteria. So what to do for InLOC?

If we recognise a situation where the idea is to issue a badge for an achievement that is clearly not a binary learning outcome, we can outline a few stages of development of such frameworks, which would result in a progressively tighter match to an InLOC structure or InLOC definitions. I’ll take the WebLitStd as illustrative material here.

First, someone may develop a badge for something that is not yet well-defined anywhere — it could have been conceived without reference to any existing standards. To illustrate this case, imagine a badge titled “using the Web”. There is no one component of the WebLitStd that covers “using the Web”, and yet “using” doesn’t really cover Web literacy as a whole. In this case, the badge criteria would need to be detailed by the badge awarder, specifically for that badge. What can still be done within OpenBadges is that there could be alignment information; however, it is not always entirely clear what the relationship is meant to be between a badge and a standard it is “aligned” to. The simplest possibility is that the alignment is to some kind of educational level. Beyond this it gets trickier.

A second possibility for a single badge would be to refer to an existing “rankable” definition. For example, consider the WebLitStd skill, “co-creating web resources”, which is part of the “sharing & collaborating” competency of the “Connecting” strand. To think in detail about how this kind of thing could be badged, we need to understand what would count (in the eye of the badge issuer) as “co-creating web resources”. There are very many possible examples that readily come to mind, from talking about what a web page could have on it, to playing a vital part in a team building a sophisticated web service. One may well ask, “what experiences do you have of co-creating web resources?” and, depending on the answer, one could roughly rank people in some kind of order of amount and depth of experience in this area. To create a meaningful badge, a more clearly cut line needs to be drawn. Just talking about what could be on a web page is probably not going to be very significant for anyone, as it is an extremely common experience. So what counts as significant? It depends on the badge issuer, of course, and to make a meaningful badge, the badge issuer will need to define what the criteria are for the badges to be issued.

A third and final stage, ideal for InLOC, would be if a badge is awarded with clearly binary criteria. In this case there is nothing standing in the way of having the criteria property of the Badge holding a URL for a concept directly represented as a binary InLOC LOCdefinition. There are some WebLitStd skills that could fairly easily be seen as binary. Take “distinguishing between open and closed licensing” as an example. You show people some licenses; either they correctly identify the open ones or they don’t. That’s (reasonably) clear cut. Or take “understanding and labeling the Web stack”. Given a clear definition of what the “Web stack” is, this appears to be a fairly clear-cut matter of understanding and memory.
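
In Open Badges metadata terms, that third stage could look something like the sketch of a badge class below. The field names follow the OpenBadges specification as I understand it, and all the URLs are placeholders: the point is simply that the “criteria” URL points straight at a binary InLOC definition, while an “alignment” entry can still point at the broader standard.

    {
      "name": "Open Licence Spotter",
      "description": "Awarded for reliably distinguishing open from closed licensing.",
      "image": "http://example.org/badges/licence-spotter.png",
      "criteria": "http://example.org/inloc/distinguishing-open-closed-licensing",
      "issuer": "http://example.org/issuer.json",
      "alignment": [
        {
          "name": "Web Literacy Standard",
          "url": "http://example.org/weblitstd"
        }
      ]
    }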

Working back again, we can see that in the third stage, a Badge can have criteria (not just alignments) which refer directly to InLOC information. At the second and first stage, badge criteria need something more than is clearly set out in InLOC information already published elsewhere. So the options appear to be:

  1. describing what the criteria are in plain text, with reference to InLOC information only through alignment; and
  2. defining an InLOC structure specifically for the badge, detailing the criteria.

The first of these options has its own challenges. It will be vital to coherence to ensure that the alignments are consistent with each other. This will be possible, for example, if the aspects of competence covered are separate (independent; orthogonal even). So, if one alignment is to a level, and the second to a topic area, that might work. But it is much less promising if more specific definitions are referred to.

(I’d like to write an example at this point, but can’t decide on a topic area — I need someone to give me their example and we can discuss it and maybe put it here.)

From the point of view of InLOC, the second option is much more attractive. In principle, any badge criteria could be analysed in sufficient detail to draw out the components which can realistically be thought of as learning outcomes — properties of the learners — that may be knowledge, skill, competence, etc. No matter how unusual or complex these are, they can in principle be expressed in InLOC form, and that will clarify what is really “aligned” with what.

I’ll say again, I would really like to have some well-worked-out examples here. So please, if you’re interested, get in touch and let’s talk through some of interest to you. I hope to be starting that in Glasgow this week.

A new (for me) understanding of standardization

When engaging deeply in any standardization project, as I have with the InLOC project, one is likely to get new insights into what standardization is, or should be. I tried to encapsulate this in a tweet yesterday, saying “Standardization, properly, should be the process of formulation and formalisation of the terms of collective commitment”.

Then @crispinweston replied “Commitment to whom and why? In the market, fellow standardisers are competitors.” I continued, with slight frustration at the brevity of the tweet format, “standards are ideally agreed between mutually recognising group who negotiate their common interest in commitment”. But when Crispin went on “What role do you give to the people expected to make the collective commitment in drafting the terms of that commitment?” I knew it was time to revert from micro-blogging to macro-blogging, so to speak.

Crispin casts me in the position of definer of roles — I disclaim that. I am trying, rather, firstly to observe and generalise from my observations about what standardization is, when it is done successfully, whether or not people use or think of the term “standardization”, and secondly, to intuit a good and plausible way forward, perhaps to help grow a consensus about what standardization ought to be, within the standardization community itself.

One of the challenges of the InLOC project was that the project team started more or less from a blank sheet. Where there is a lot of existing practice, standardization can (in theory at least) look at that practice, and attempt to promote standardization on the best aspects of it, knowing that people do it already, and that they might welcome (for various reasons) a way to do it in just one way, rather than many. But in the case of InLOC, and any other “anticipatory” standard, people aren’t doing closely related things already. What they are doing is publishing many documents about the knowledge, skills, competence, or abilities (or “competencies”) that people need for particular roles, typically in jobs, but sometimes as learners outside of employment. However, existing practice says very little about how these should be structured, and interrelated, in general.

So, following this “anticipatory” path, you get to the place where you have the specification, but not the adoption. How do you then get the adoption? Only by being either lucky, in that you have formulated a need that people naturally come to see, or persuasive, in that you successfully persuade people that it is what they really (really) want.

The way of following, rather than anticipating, practice certainly does look the easier, less troubled, surer path. Following in that way, there will be a “community” of some sort. Crispin identifies “fellow standardisers” as “competitors” in the market. “Coopetition” is a now rather old neologism that comes to mind. So let me try to answer the spirit at least of Crispin’s question — not the letter, as I am seeing myself here as more of an ethnographer than a social engineer.

I envisage many possible kinds of community coming together to formulate the terms of their collective commitments, and there may be many roles within those communities. I can’t personally imagine standard roles. I can imagine the community led by authority, imposing a standard requirement, perhaps legally, for regulation. I can imagine a community where any innovator comes up with a new idea for agreeing some way of doing things, and that serves to focus a group of people keen to promote the emerging standard.

I can imagine situations where an informal “norm” is not explicitly formulated at all, and is “enforced” purely by social peer pressure. And I can imagine situations where the standard is formulated by a representative body of appointees or delegates.

The point is that I can see the common thread linking all kinds of these practices, across the spectrum of formality–informality. And my view is that perhaps we can learn from reflecting on the common points across the spectrum. Take an everyday example: the rules of the road. These are both formal and informal; and enforced both by traffic authorities (e.g. police) and by peer pressure (often mediated by lights and/or horn!)

When there is a large majority of a community in support of norms, social pressure will usually be adequate, in the majority of situations. Formal regulation may be unnecessary. Regulation is often needed where there is less of a complete natural consensus about the desirability of a norm.

Formalisation of a norm or standard is, to me, a mixed blessing. It happens — indeed it must happen at some stage if there is to be clear and fair legal regulation. But the formalisation of a standard takes away the natural flexibility of a community’s response both to changing circumstances in general, and to unexpected situations or exceptions.

Time for more comment? You would be welcome.

The pragmatics of InLOC competence logic

(21st in my logic of competence series.)

Putting together a good interoperability specification is hard, and especially so for competence. I’ve tried to work into InLOC as many of the considerations in this Logic of Competence series as I could, but these are all limited by the scope of a pragmatically plausible goal. My hypothesis is that it’s not possible to have a spec that is at the same time both technically simple and flexible, and intuitively understandable to domain practitioners.

Here I’ll write about why I believe that, and later follow on to finish dealing with the pragmatics of the logic of competence as represented by InLOC.

Doing a specification like InLOC gives one an opportunity to attract all kinds of criticism from people, much of it constructive. No attempts to do such a spec in the past have been great successes, and one wonders why that is. Some of the criticism I have heard has helped me to formulate the hypothesis above, and I’ll try to explain my reasoning here.

Turn the hypothesis on its head. What would make it possible to have a spec that is technically simple, and at the same time intuitively understandable to domain practitioners? Fairly obviously, there would have to be a close correspondence between the objects of the domain of expertise, and the constructs of the specification.

For each reader, there may appear to be a simple solution. Skills, competences, learning outcomes, etc., have this structure — don’t they? — and so one just has to reproduce that structure in the information model to get a workable interoperability spec that is intuitively understandable to people — well, like me. Well, “not”, as people now say as a one-word sentence.

Actually, there is great diversity in the ways people conceive of and structure learning outcomes, competences and the like. Some structures have different levels of the same competence, others do not. Some competences are defined in a binary fashion, that allows one to say “yes” or “no” to whether people have that competence; other competences are defined in a way that allows people to be ranked in order of that competence. Some competence structures are quite vague, with what look like a few labels that give an indication of the kinds of quality that someone is looking for, without defining what exactly those labels mean. Some structures — particularly level frameworks like the EQF — are deliberately defined in generic terms that can apply across a wide range of areas of knowledge and skill. And so on.

This should really be no surprise, because it is clear from many people’s work (e.g. my PhD thesis) that different people simplify complex structures in their own different ways, to suit their own purposes, and in line with their own backgrounds and assumptions. There is, simply, no way in which all these different approaches to defining and structuring competence can be represented in a way that will make intuitive sense to everyone.

What one can do is to provide a relatively simple abstract representation that can cover all kinds of existing structures. This is just what InLOC is aiming to do, but up to now we haven’t been quite clear enough about that. To get to something that is intuitive for domain practitioners, one needs to rely on tools being built that reflect, in the user interface, the language and assumptions of that particular group of practitioners. The focus for the “direct” use of the spec then clearly shifts onto developers. What, I suggest, developers need is a specification adapted to their needs — to build those interfaces for domain practitioners. The main requirements of this seem to me to be that the spec:

  1. gives enough structure so that developers can map any competence structure into that format;
  2. does not have any unnecessary complexity;
  3. gives a readily readable format, debuggable by developers (not domain practitioners).

So when you look at the draft InLOC CWAs, or even better if you come to the InLOC dissemination event in Brussels on 16th April, you know what to expect, and you know the aims against which to evaluate InLOC. InLOC offers no magic wand to bring together incompatible views of diverse learning outcome and competence structures. But it does offer a relatively simple technical solution, that allows developers who have little understanding of competence domains to develop tools that really do match the intuitions of various domain practitioners.

Three InLOC drafts for CEN Workshop Agreements are currently out for public comment — links from the InLOC home page — please do comment if you possibly can, and please consider coming to our dissemination event in Brussels, April 16th.

Future Learners, new Opportunities and Technology

The wider CETIS community has often appreciated meeting up, with others sharing the same “special interests”, in “SIG” meetings. That kind of meeting took place, including old “Portfolio” SIG participants, on 11th Dec in Nottingham, and many interesting points came up.

The people who came to the meeting would not all use the label “portfolio”. We billed the meeting as exploring issues from the viewpoint of the learner, so neither institutions, nor providers of learning resources, were the focus. The e-portfolio community has indeed had the learner at the centre of thinking, but this meeting had many ideas that were not specifically “portfolio”.

Indeed, the main attraction of the day was Doug Belshaw talking, and leading a workshop, on the Mozilla Open Badges concept and technology. Badges are not in themselves portfolios, though they do seem to fit well into the same “ecosystem”, one which may gradually come to supplant the current system, in which the established educational institutions monopolise the award of degrees, and those degrees are necessary for many jobs. And Doug converted people! Several attendees who had not previously been convinced of the value of badges now saw the light. That can only be good.

For those with doubts, Doug also announced that the Mozilla team had agreed to introduce a couple more pieces of metadata into the Open Badges specification. That is definitely worth looking at closely, to see if we can use that extra information to fill gaps that have been perceived. One of these new metadata elements looks like it will naturally link to a definition of skill, competence, or similar, in the style of InLOC, which of course I think is an excellent idea!

The “lightning talks” model worked well, with 10 speakers given only 5 minutes each to speak. The presentations remain listed on the meeting web page, with a link to the slides. Topics included:

  • board games
  • peer assessment
  • students producing content
  • placement and employability

My own contribution was an outline argument of the case that InLOC is positioned to unlock a chain of events, via the vital link of employers taking non-institutional credentials seriously, towards “reinvigorating the e-portfolio landscape”.

So the learner-focused learning technology community is alive and well, and doing many good things.

In parallel with the badges workshop, a small group including me talked over more subtle issues. For me, a key point is the need to think through the bigger picture of how badges may be used in practice. How will we differentiate between the likely plethora of badges that will be created and displayed? How will employers, for example, distinguish the ones that are both relevant to their interests, and issued by reputable people or bodies? Looking at the same question another way, what does it take to be the issuer of badges that are genuinely useful, and that will really help the labour market move on? Employers are no more going to wade through scores of badges than they currently wade through the less vital sections of an e-portfolio.

We could see a possible key idea here as “badging the badgers”. If we think through what is needed to be responsible for issuing badges that are really useful, we could turn that into a badge. And a very significant badge it would be, too!

The local arrangements were ably looked after by the Nottingham CIePD group, which seems to be the most active and highly regarded such group currently in UK HE. Ever since Nottingham pioneered the ePARs system under Angela Smallwood, they have consistently been at the forefront of developments in this area of learning technology. I hope that they, as well as other groups, will be able to continue work in this area, and continue to act as focal points for the learner-centric learning technology community.