The key to competence frameworks

(27th in my logic of competence series.)

So here I am … continuing the thread of the logic of competence, nearly 7 years on. I’m delighted to see renewed interest from several quarters in the field of competence frameworks. There’s work being done by the LRMI; and much potential interest from people concerned with various kinds of soft skills. And some kinds of “badges” – open credentials intended to be displayed and easily recognised – often rely on competence definitions for their award criteria.

I just have to say to everyone who explores this area: beware! There are two different kinds of thing that go by similar names: “competencies”; “competences”; “competence definitions”; skills; etc.

  1. One kind is statements of ability that people either measure up to or not. My favourite simple, understandable examples are things like “can juggle 5 balls for a minute without dropping any” or “can type at 120 words per minute from dictation making fewer than 10 mistakes”. But there are many less exact examples of similar things, which clearly either do or do not apply to individuals at a given time of testing. “Knows how to solve quadratic equations using the formula” or “can apply Pythagoras’ theorem to find the length of the third side of a right-angled triangle” might be two from mathematics. There are many more from the vocational world, but they would mean less to those outside that profession or occupation.
  2. Then there is another kind, more of a statement indicating an ability or area of competence in which someone can be more or less proficient. Taking the examples above, these might be: “can juggle” or “juggling skills”; “can type” or “typing ability”; “knows about mathematics” or “mathematical ability”. There are vast numbers of these, because they are easier to construct than the other kind. “Can manage a small business”; “good communicator”; “can speak French”; “good at knitting”; “a good diplomat”; “programming”; “chess”; you can think of your own.

What you can see quite plainly, on looking, is that with the first kind of statement, it is possible to say whether or not someone comes up to that standard; while with the second kind of phrase, either there is no standard defined, or the standard is too vague to judge whether someone “has” that ability — it’s more like, how much of that ability do you have?

In the past, I’ve called the first kind of statement a “binary” competence definition, and the second kind “rankable”. (Just search for “binary rankable” and you’ll get plenty.) But these terms are so unmemorable that even I forgot what I had called them. I’m looking for better names, ones that people (including myself) can easily remember.

Woe betide anyone who mixes the two kinds without realising what they are doing! Woe betide also anyone who uses one kind only, and imagines that the other kind either don’t exist or don’t matter.

The world is full of lists of skills which people should have some of. “Communication skills”. “Empathy”. “Resilience”. Loads of them. And in most cases, these are just of the second kind. They do not define any particular level of the skill, and people are expected to produce evidence of how good they are at the given skill, when asked.

In the vocational world of occupations and professions, however, we see very many well-defined statements that are of the first kind. This is to be expected, because to give someone a professional qualification requires that they are assessed as possessing skills to a certain, sufficient level.

The two kinds of statements are intimately related. Take any statement of the first kind. What would be better, or not so good? Juggling 3 balls for 30 seconds? Typing at 60 words per minute? These belong, as points, on the scales of juggling skill and typing ability respectively. Thus, every statement of the first kind has at least one scale that it is a point on. Conversely, every scale description, of the second kind, can, with sufficient insight, be detailed with positions on that scale, which will be statements of the first kind.

In the InLOC information model, these reciprocal relationships are given the identifiers hasDefinedLevel and isDefinedLevelOf. This is perhaps the most essential and vital pair of relationships in InLOC.
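To make that pairing concrete, here is a minimal sketch, in Python, of how a rankable definition and its binary levels might be held together in memory. Only hasDefinedLevel and isDefinedLevelOf come from InLOC; the class, the attribute names and the example wording are just assumptions for the sake of illustration.

```python
# Illustrative only: a toy in-memory model of the two kinds of statement.
# "hasDefinedLevel" / "isDefinedLevelOf" are the InLOC relationship names;
# everything else here is an assumption made for this example.

class LocDefinition:
    def __init__(self, label, kind):
        self.label = label
        self.kind = kind          # "rankable" or "binary"
        self.defined_levels = []  # targets of hasDefinedLevel
        self.is_defined_level_of = None

    def has_defined_level(self, level):
        # this definition hasDefinedLevel level; level isDefinedLevelOf this definition
        assert self.kind == "rankable" and level.kind == "binary"
        self.defined_levels.append(level)
        level.is_defined_level_of = self

typing = LocDefinition("typing ability", "rankable")
typing.has_defined_level(
    LocDefinition("can type at 120 words per minute from dictation making fewer than 10 mistakes",
                  "binary"))
typing.has_defined_level(
    LocDefinition("can type at 60 words per minute from dictation making fewer than 10 mistakes",
                  "binary"))
```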

So what about competence frameworks? Well, a framework, whether explicitly or implicitly, is about relating these two kinds of statement together. It is about defining areas of ability that are important, perhaps to an activity or a role; and then also defining levels of those abilities at which people can be assessed. It’s only when these levels are defined that one has criteria, not only for passing exams or recruiting employees, but also for awarding badges. And the interest in badges has held this space open for the seven years I’ve been writing about the logic of competence. Thank you, those working with badges!

Now I’ve explained this again, could you help me by saying which pair of terms would best describe for you the two kinds of statements, better than “binary” and “rankable”? I’d be most grateful.

“After Sustainability” – education for noble savages

I came across John Foster’s blog post, introducing his recent book “After Sustainability”, first through resilience.org. Lancaster University being where he teaches, and near where I live, we met up for a rich conversation, and he kindly lent me a copy of the book. Very interesting reading it is, too! So here I am writing a kind of review, for the Cetis blog, because I do think that the kind of thinking he is championing has implications for educational technology. I add more of my views towards the end.

The message of the book’s nicely chosen title should be clear enough. The idea of “sustainability” has, in many parts of society, taken over the mainstream from ideas of growth and development. It’s easy to criticise the idea of limitless growth, so it has always had its critics. This book focuses its criticism around the word “progress”, which I see as neatly ambivalent between growth and development: could it even be too ambivalent to base a clear argument on? Could there still be development, of our consciousness at least, even while in material terms we head for “degrowth”?

I remember an excellent history teacher at school pointing out that popular culture has swung, over centuries, between, on the one hand, looking back to a “golden age” which we might strive to work back towards, and on the other hand, something more like “we’ve never had it so good”, and presumably, with more “progress”, it will get better and better. I wonder whether (as I guess my teacher believed) the truth might be more ambivalent than that. Perhaps, with one pair of spectacles, one may see progress; but with another, at the same time, decline and fall. T. S. Eliot seems to be saying something very similar, in his “Four Quartets”. And again, compare the long history of the idea of the “noble savage”.

The main thrust of the book’s argument is well made and well received. It does seem clear that many people are clinging on to implausible optimism in the face of the mounting evidence of climate change: change at a level that will lead to severe, if not catastrophic, consequences. Foster is asking us to acknowledge that: to stop the denial, and shift our hopes across to something deeper and more realistic. To explore this territory, he probes the philosophical foundations of why it is so hard to look behind the self, into the darkness. Even the concept of “resilience”, which is so well represented in the vanguard of environmental thinking these days, is really quite problematic. If one prepares in detail to be resilient to one kind of predicted shock, the risk is that one may be even less well prepared for other, unexpected shocks. Can we imagine a good, general-purpose, cybernetic resilience, perhaps, even in the face of what happened to Beer’s experiments with Allende in Chile?

Foster puts more of his personal view in the third and final part of the book, corresponding to his use of the term “retrieval”. (This is where the book extract in his blog post is taken from.) “Retrieval […] means learning from environmental tragedy to recognise the essential human wholeness that contemporary progressive civilisation denies and thwarts.” It is both a practical and philosophical task. I won’t go into the philosophical side here, though it seems to make sense within the philosophical tradition. And I’m a little uncomfortable with the term “retrieval”, which to most people in IT will conjure up “information retrieval” — surely not the intended connotation! I would personally prefer simply “recovery”, which I take in Eliot’s sense: “There is only the fight to recover what has been lost / And found and lost again and again: and now, under conditions / That seem unpropitious.” (That was around 1940.)

But the main point of contact with educational technology, to me, comes along with Foster’s pointing to the constructs: predictable ↔ unpredictable; and planned ↔ wild. Indeed, in Chapter 8, “Towards a toolkit”, the book has a section headed “Education in transition: knowledge for its own sake, or for the sake of retrieval?” He is pointing out that it is all very well training people in the kinds of skills that are likely to be useful in a “transition” economy, but also that we need wider, more general “education that empowers us to make sense of some things as intrinsically valuable, and so to create for ourselves any ends we have.”

So we see here a different take on the debates in my two areas of specialism: e-portfolios, and skills and competence. If e-portfolios are merely glorified electronic CVs, showing incumbent employers the things that they have said they want, then they are surely doing us a disservice. But the other, strong trend in e-portfolio practice is rather the opposite, towards reflection — towards critical thinking, not towards conformity to past predictions.

And in the area of defining needed skills and competences, I see a parallel debate going on. It is relatively easy to take past and current practice, and to analyse the skills and competences needed to perform in current roles in current contexts. A narrow portfolio based on a reductionist approach to skills is, according to the view I share, going nowhere fast. But, despite the challenge of even grasping, let alone doing, something better, I believe it is perfectly possible to conceive of a structure of higher-level skills that should indeed be the basis of any transition – to retrieval, recovery, or however you want to put your vision of what realistic hope there may be in our very uncertain future.

I think I take a rather different tack to Foster here. Where he talks about “wildness”, about what comes across to me as more tribal, intuitive loyalties, perhaps based on place, I would rather emphasise the necessary skills of living and working with each other as equals. I think this is more a different “take” than a real disagreement. There are many of these skills, and among them is a set to do with finding consensus, which is equally at home in the standardization community where I have been working for several years now. To my mind, tribal loyalties can be fickle and conflicted, and while they are held together by instinctive bonds of kinship, they are more prone to loss of trust when hierarchical forms of control lead to large inequalities of power, and to opposing interests, reminiscent of class interests.

What I look for includes education for collaboration; for consensus; for peer governance; for the resilience gained through using everyone’s intelligence together. The richness and variety available from properly peer-to-peer processes is, it seems to me, much more likely to be able to cope with the unexpected. Even the darkness within ourselves is less dark to others, if we can trust them in a spirit of mutual respect. Foster uses the interesting term “existential resilience”, and for me that relates to what develops over time with other people, through trusting relationships that allow vulnerability. It takes an exceptional individual to have that existential resilience alone.

One of the ways we can back up discussion of, and education about, personal resilience is to appeal to theories such as George Kelly’s Personal Construct Theory. To quote a useful current piece from Wikipedia:

Transitional periods in a person’s life occur when he or she encounters a situation that changes his or her naive theory (or system of construction) of the way the world is ordered. They can create anxiety, hostility, and/or guilt and can also be opportunities to change one’s constructs and the way one views the world.

Vulnerability could be a useful term to indicate the mental state of someone who is going beyond anxiety, hostility and guilt to change their personal construct system to cope better with a changed world. This again looks close to Rob Hopkins’ working definition of resilience from 2011 in transitionculture.org:

“The capacity of an individual, community or system to adapt in order to sustain an acceptable level of function, structure, and identity”

So, today I’ll conclude by putting the “Cetis” question again: how can we use technology to support, enable, enhance, facilitate (etc.) education that has enduring relevance after sustainability? I hope I’ve given some leads above as to the ground from which answers might be explored.

How do I go about doing InLOC?

(26th in my logic of competence series.)

It’s been three years now since the European expert team started work on InLOC, working out a good model for representing structures and frameworks of learning outcomes, skill and competence. As can be expected of forward-looking, provisional work, there has not yet been much take-up, but it’s all in place, and more timely now than ever.

Then yesterday I received a most welcome call from a training company involved in one particular sector, who are interested in using the principles of InLOC to help their LMS map course and module information to qualification frameworks. “Yes!” I enthusiastically replied.

What might help people in that situation is a simple, basic approach that sets you on the right path for doing things the InLOC way. I realised that this isn’t so easy to find in the main documentation, so here I set out that basic approach, which should reliably get anyone started on mapping anything to the InLOC model, and which cross-references the InLOC documentation.

One description of what to do is documented in the section How to follow InLOC, but, for all the reasons above, here I will try going back to basics and starting again, in the hope that describing the approach in a different way may be helpful.

LOC definitions

The most basic feature, which occurs many times over in any published framework, is called by InLOC a “LOC definition”. This is, simply, any concept, described by any form of words, that indicates an ability – whether it be knowledge, skill, competence or any other learning outcome – that can be attributed to an individual person, and in some way – any way – assessed. It’s hard to define this more clearly or succinctly than that, and to get a better understanding you may want to look at examples.

In the documentation, the best place to start is probably the section on InLOC explained through example. In that section, a framework (the European e-Competence Framework, e-CF) is thoroughly analysed. You can see in Figure 2 how, for just one page of the documentation, each LOC definition has been picked out separately.

LOC definitions include at least these overlapping classes of concept:

  • anything that is listed as a learning outcome, a skill, a competency, an ability;
  • any separate parts of any learning outcomes;
  • anything that expresses an assessment criterion;
  • any level of any outcome, skill, competence, etc. (at any granularity);
  • a generic definition of what is required by a level.

Pieces of text that relate to the same concept – e.g. title and description of the same thing – are treated together. Everything that can be assessed separately is treated as a separate LOC definition. The grammatical structure of the text is of little importance. Often, though, in amongst the documentation, you read text that is not to do with abilities. Just pass over this for the moment.

One thing I’ve noticed sometimes is that some concepts, which could have their own LOC definitions, are implied but not explicit in the documentation. In yesterday’s discussion, one example was the levels of the unit as a whole. Assessment criteria are often specified for different levels of particular abilities, but the level as a whole is implied.

The first step, then, is to look for all the LOC definitions in your documentation, and any implied ones that are not explicitly documented. ANY piece of text that represents something that could potentially be assessed as an outcome of learning is most likely a LOC definition.
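Purely as an illustration of this first step (none of this is prescribed by InLOC), you could hold each definition you pick out – explicit or implied – as a simple record, ready to be related together later. The identifiers and wording below are invented:

```python
# A toy record for each LOC definition picked out of the documentation.
# Identifiers and texts are invented; in practice they would come from
# (or be assigned within) your own framework.

from dataclasses import dataclass

@dataclass
class LocDefinition:
    id: str                 # your own identifier; ideally a URI later on
    label: str              # short title of the ability
    description: str = ""
    kind: str = ""          # "binary" or "rankable" - filled in at the next step
    implied: bool = False   # True if the documentation only implies it

definitions = [
    LocDefinition("unit1", "Support team decision-making",
                  "Can contribute to, and record, team decisions."),
    LocDefinition("unit1-pc1", "Prepares accurate briefing papers for meetings"),
    LocDefinition("unit1-level2", "Support team decision-making, level 2", implied=True),
]
```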

Binary and rankable

If you’ve looked through the documentation, you’ve probably come across this distinction, and it is very helpful if you are going to structure something in the InLOC way. But when I was writing the documentation, I don’t think I had grasped quite how central it is. It is so central that more recently I have come to put it forward as a vital first concept to grasp. Very recently I quickly put together a slide deck about this, on Slideshare now, under the title Distinguishing binary and rankable definitions is key to structuring competence frameworks.

I first publicly clarified this distinction in a blog post before InLOC even started, Representing level relationships, and mentioned it more recently in InLOC and OpenBadges: a reprise.

In essence: a binary learning outcome or competence (LOC) concept is one where it makes sense to ask, have you reached this level or standard? Are you as good as this? The answer gives a binary distinction between “yes”, for those people who have reached the level, and “not yet” for those who have not. The example I give in the recent slide deck is “can touch type in English at 60 wpm with fewer than 1 mistake per hundred words”. The answer is clearly yes or no. Or, “can juggle with three juggling balls for a minute or longer” (which I can’t yet).

On the other hand, a rankable concept is one where there is no clear binary criterion, but instead you can rank people in order of their ability in that concept. A rankable concept related to the previous binary one would simply be “touch typing” or “can touch type”. A good question for juggling would be “how well can you juggle?” You may want to analyse this more finely, and distinguish different independent dimensions of juggling ability, but more probably I guess you would be content to roughly rank people in order of a general juggling ability.

The second step is to look at all the LOC definitions you have isolated, and judge whether they are binary or (at least roughly) rankable.
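Continuing the invented example from the first step (again just a sketch, not an InLOC requirement), this second step amounts to tagging each record:

```python
# Tag each extracted definition as "binary" (have you reached this? yes / not yet)
# or "rankable" (how good are you at this?).  The judgement itself always needs
# a human reader of the framework text; these tags are only an example.

kinds = {
    "unit1": "rankable",        # "Support team decision-making" - how well?
    "unit1-pc1": "binary",      # a criterion that is either met or not yet
    "unit1-level2": "binary",   # a defined level is either reached or not yet
}

for definition in definitions:          # 'definitions' as built in the step 1 sketch
    definition.kind = kinds[definition.id]
```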

Relating LOC definitions together

The third step is to relate all the LOC definitions you found to each other. It is commonplace for frameworks to have a hierarchical structure. An ability at a “high” level (of granularity) involves many abilities at “lower” levels. The simplest way of representing that is to say that the wider definition “has parts”, which are the narrower definitions, perhaps the products of “functional analysis” of the wider definition. InLOC allows you to relate definitions in this way, using the relationship “hasLOCpart”.

But InLOC also allows several other relationships between LOC definitions. These can be seen in the three tables on the relationships page in the documentation. To see how the relationships themselves are related, look at the third table, “ontology”. The tables together give you a clear and powerful vocabulary for describing relationships between LOC definitions. Naturally, it has been carefully thought through, and is a vital part of InLOC as a whole.

Very simple structures can be described using only the “hasLOCpart” relationship. However, when you have levels, you will need at least the “hasDefinedLevel” relationship as well. Broadly speaking, it will be a rankable LOC definition that has a “hasDefinedLevel” relationship to a binary definition. Find these connections in particular!

For the other relationships, decide whether “hasLOCpart” is a good enough representation, or whether you need “hasNecessaryPart”, “hasOptionalPart” or “hasExample”. Each of these has a different meaning in the real world. Mostly, you will probably find that rankable definitions have rankable parts, and binary definitions have binary parts.
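As a sketch of how these connections might be noted down while you work, simple (subject, relationship, object) triples are enough. The relationship names below are InLOC’s; the identifiers are the invented ones from the step 1 sketch:

```python
# (subject, relationship, object) triples between LOC definition ids.
# Relationship names come from InLOC; the ids are the invented example ones.

relationships = [
    ("unit1", "hasLOCpart", "unit1-pc1"),               # wider definition has a narrower part
    ("unit1", "hasDefinedLevel", "unit1-level2"),       # rankable definition has a binary level
    ("unit1-level2", "hasNecessaryPart", "unit1-pc1"),  # reaching the level requires the criterion
]
```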

There is more related discussion in another of the blog posts from my “logic of competence” series, More and less specificity in competence definitions.

Putting together the LOC structure

In InLOC, a “LOC structure” is the collection of LOC definitions along with the relationships between them. Relationships between LOC definitions are only defined in LOC structures. This is to allow LOC definitions to appear in different structures, potentially with different relationships. You may think you know what, for example, communication skills comprise, but other people may have different opinions, and classify things differently.

A LOC structure often corresponds to a complete documented scheme of learning outcomes, and often has a name which is clearly not itself a LOC definition, as described previously. You can’t assess how good someone is at “the European e-Competence Framework” (the e-CF) (unless you mean knowledge of that framework), but you can assess how good people are at its component parts, the LOC definitions (for rankable ones), or whether they reach the defined levels (for binary ones).

And the e-CF, analysed in detail in the InLOC documentation, is a good example where you can trace the structure down in two ways: either by topic, then later by levels; or by level, and then levelled (binary) topic definitions that are part of those levels.

Your aim is to document all the relationships between LOC definitions that are relevant to your application, and wrap those up with other related information in a LOC structure.
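Pulling the invented example together, the LOC structure is then just the wrapper that carries the definitions and the relationships between them. The JSON layout below is only an illustration of the idea, not the normative InLOC binding:

```python
# Wrap the definitions and relationships from the earlier sketches into one
# structure and serialise it.  This layout is illustrative only.

import json
from dataclasses import asdict

loc_structure = {
    "id": "example-framework",
    "title": "Example framework of team decision-making abilities",
    "definitions": [asdict(d) for d in definitions],       # from the step 1 sketch
    "relationships": [
        {"subject": s, "type": r, "object": o} for (s, r, o) in relationships
    ],                                                      # from the step 3 sketch
}

print(json.dumps(loc_structure, indent=2))
```

In practice you would use the published InLOC bindings rather than an ad-hoc layout like this, but the shape of the information is the same: definitions, plus relationships, wrapped up in a structure.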

What you will have gained

The task of creating an InLOC structure is more than simply creating a file that can be transmitted between web applications, and related to, or referred to by, other structures that you are dealing with. It is also an exercise that can reveal more about the structure of the framework than was explicitly written into it. Often one finds oneself making explicit the relationships that are documented only implicitly, in terms of page and table layout. Often one fills in LOC definitions that have been left out. Whichever way you do it, you will be left with firmer, more principled structures on which to build your web applications.

We expect that sooner or later InLOC will be adopted as at least the basis of a model underlying interoperable and portable representations of frameworks of learning outcomes, skills, competences, abilities, and related knowledge structures. Much of the work has been done, but it may need revising in the light of future developments.

What is there to learn about standardization?

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular where there is a body of knowledge and experience of many kinds of standardization organisations and processes, as well as approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

But how can we share and pass on that expertise? This question has arisen from time to time during the 12 years I’ve been associated with Cetis, including the last six working from our base in the IEC in Bolton. While Jisc were employing us to run Special Interest Groups, meetings, and conferences, and to support their project work, that at least gave us some scope for sharing. The SIGs are sadly long gone, but what about other ways of sharing? What about running some kind of courses? To run courses, we have to address the question of what people might want to learn in our areas of expertise. On a related question, how can we assemble a structured summary even of what we ourselves have learned about this rich and challenging area?

These are my own views about what I sense I have learned and could pass on; but also about the topics where I would think it worthwhile to know more. All of these views are in the context of open standards in learning technology and related areas.

How are standards developed?

A formal answer for formal standards is straightforward enough. But this is only part of the picture. Standards can start life in many ways, from the work of one individual inventing a good way of doing something, through to a large corporation wanting to impose its practice on the rest of the world. It is perhaps more significant to ask …

How do people come up with good and useful standards?

The more one is involved in standardization, the richer and more subtle one’s answer to this becomes. There isn’t one “most effective” process, nor one formula for developing a good standard. But in Cetis, we have developed a keen sense of what is more likely to result in something that is useful. It includes the close involvement of the people who are going to implement the standard – perhaps software developers. Often it is a good idea to develop the specification for a standard hand in hand with its implementation. But there are many other subtleties which could be brought out here. This also raises a further question …

What makes a good and useful standard?

What one comes to recognise with time and experience is that the most effective standards are relatively simple and focused. The more complex a standard is, the less flexible it tends to be. It might be well suited to the precise conditions under which it was developed, but those conditions often change.

There is much research to do on this question, and people in Cetis would provide an excellent knowledge base for this, in the learning technology domain.

What characteristics of people are useful for developing good standards?

Most likely anyone who has been involved in standardization processes will be aware of some people whose contribution is really helpful, and others who seem not to help so much. Standardization works effectively as a consensus process, not as a kind of battle for dominance. So the personal characteristics of people who are effective at standardization are similar to those of people who are good at consensus processes more widely. Obviously, the group of people involved must have a good technical knowledge of their domain, but deep technical knowledge is not always allied to an attitude that is consistent with consensus process.

Can we train, or otherwise develop, these useful characteristics?

One question that really interests me is, to what extent can consensus-friendly attitudes be trained or developed in people? It would be regrettable if part of the answer to good standardization process were simply to exclude unhelpful people. But if this is not to happen, those people would need to be open to changing their attitudes, and we would have to find ways of helping them develop. We might best see this as a kind of “enculturation”, and use sociological knowledge to help understand how it can be done.

After answering that question, we would move on to the more challenging “how can these characteristics be developed?”

How can standardization be most effectively managed?

We don’t have all the answers here. But we do have much experience of the different organisations and processes that have brought out interoperability standards and specifications. Some formal standardization bodies adopt processes that are not open, and we find this quite unhelpful to the management of standardization in our area. Bodies vary in how much they insist that implementation goes hand in hand with specification development.

The people who can give most to a standardization process are often highly valued and short of time. Conversely, those who hinder it most, including the most opinionated, often seem to have plenty of time to spare. To manage the standardization process effectively, this variety of people needs to be allowed for. Ideally, this would involve training in consensus working, as imagined above, but until then, sensitive handling of those people needs considerable skill. A supplementary question would be, how does one train people to handle others well?

If people are competent at consensus working, the governance of standardization is less important. Before then, the exact mechanisms for decision making and influence, formal and informal, are significant. This means that the governance of standards organisations is on the agenda for what there is to learn. There is still much to learn here, through suitable research, about how different governance structures affect the standardization process and its outcomes.

Once developed, how are standards best managed?

Many of us have seen the development of a specification or standard, only for it never really to take hold. Other standards are overtaken by events, and lose ground. This is not always a bad thing, of course – it is quite proper for one standard to be displaced by a better one. But sometimes people are not aware of a useful standard at the right time. So, standards not only need keeping up to date, but they may also need to be continually promoted.

As well as promotion, there is the more straightforward maintenance and development. Web sites with information about the standard need maintaining, and there is often the possibility of small enhancements to a standard, such as reframing it in terms of a new technology – for instance, a newly popular language.

And talking of languages, there is also dissemination through translation. That’s one thing that working in a European context keeps high in one’s mind.

I’ve written before about management of learning technology standardization in Europe and about developments in TC353, the committee responsible for ICT in learning, education and training.

And how could a relevant qualification and course be developed?

There are several other questions whose answers would be relevant to motivating or setting up a course. Maybe some of my colleagues or readers have answers. If so, please comment!

  • As a motivation for development, how can we measure the economic value of standards, to companies and to the wider economy? There must be existing research on this question, but I am not familiar with it.
  • What might be the market for such courses? Which individuals would be motivated enough to devote their time, and what organisations (including governmental) would have an incentive to finance such courses?
  • Where might such courses fit? Perhaps as part of a technology MSc/MBA in a leading HE institution or business school?
  • How would we develop a curriculum, including practical experience?
  • How could we write good intended learning outcomes?
  • How would teaching and learning be arranged?
  • Who would be our target learners?
  • How would the course outcomes be assessed?
  • Would people with such a qualification be of value to standards developing organisations, or elsewhere?

I would welcome approaches to collaboration in developing any learning opportunity in this space.

And more widely

Looking again at these questions, I wonder whether there is something more general to grasp. Try reading over, substituting, for “standard”, other terms such as “agreement”, “law”, “norm” (which already has a dual meaning), “code of conduct”, “code of practice”, “policy”. Many considerations about standards seem to touch these other concepts as well. All of them could perhaps be seen as formulations or expressions, guiding or governing interaction between people.

And if there is much common ground between the development of all of these kinds of formulation, then learning about standardization might well be adapted to develop knowledge, skills, competences, attitudes and values that are useful in many walks of life, but particularly in the emerging economy of open co-operation and collaboration on the commons.

What is Open Knowledge culture?

At the recent Cetis conference – #cetis14 on twitter – Brian Kelly and I ran a session called “Open Knowledge: Wikipedia and Beyond”. The outcomes were much more interesting than might have been guessed – worthy of a post!

Wikipedia has culture, or cultures. I personally have little experience of them, simply from doing little edits, but I was prodding around recently while researching for this session. Some Wikipedia culture seems very “geek” – with in-jokes, perhaps putting off the uninitiated. Maybe this comes from much older newsgroup culture, in which people came to misunderstand each other much too frequently, and flame wars resulted. Rather harsh words like “mercilessly” still appear in the Wikipedia documentation, despite being debated extensively. They stand as a warning of the apparent harshness that may be felt, and also serve to put people off.

For example, there is an in-joke, abbreviated to “TINC”, standing for “there is no cabal”. The article that explains this gives a good flavour of the culture it comes from and belongs in. I don’t think anyone would claim that this culture is a majority culture, and it is prone to excluding people. If we want Wikipedia to be a universal open educational resource, part of a proper “knowledge commons”, we must open this up.

An example that came up in our session was Wikipedia’s use of the term “editor”. Many of us may assume that we all know that a Wikipedia “editor” is simply anyone who chooses to edit any article, but that understanding is not in fact widely shared. In the rest of the world, an “editor” has connotations of a book editor, or a newspaper editor – someone with a particular structured role. A Wikipedia “101” course needs to explain that right away. Or could the term be changed?

This links to an issue that was of wider relevance to the conference. What does “open” mean? Yes, there is the helpful open definition. But also, “open” is used in the phrase “open for business”, which is too frequently understood as meaning low on regulation, with few if any barriers preventing corporate money-making, even if that tramples over people and things that are important to them. The word “open”, like the word “freedom”, carries with it much ambivalence. What is open, to whom, and why? Open for some may imply closed for others.

Just on the morning of our session I picked up two leads from tweets on related topics. Back in 2008, Michel Bauwens was asking Is something fundamentally wrong with Wikipedia governance process? In reply, P2P Lab pointed to a First Monday article from 2010 about Wikipedia’s peer governance. Then today I see reference to a more recent article on Wikipedia’s problems by Deepak Chopra. People are not unaware of problematic issues with Wikipedia.

One approach to dealing with the issues arising is simply to arrange more Wikipedia training. Brian is rightly keen on that. But it does raise the question: what can be trained, and what is more a matter of culture? Is it possible to foster cultures that are good for open knowledge and its governance?

What peer governance cultures are there, anyway? I’ve had experience of consensus governance in a number of contexts, and there seem to be common problems. First, though most people are reasonable at collaboration, there are some who seem to act in ways that are indifferent to the common good, and only promote their own interests: takers, rather than givers, in Adam Grant’s scheme of humanity. The problem comes when takers are not dealt with effectively. Even in structures and organisations that are supposed to be managed by consensus, there seems to be a tendency to form cabals, or cliques: small elites who take over governance processes in their own interests (though sometimes they manage to fool themselves and others that they are trying to further the common interest).

Shouldn’t this be one of the roles of education, to bring people up, not only to further the common good, but to detect and deal with people who are not doing so?

Knowledge of what makes the common good, and collaborative skills (including communication and working with others) are clearly important, but seem not to be sufficient. We also need effective enculturation. Some kind of enculturation is at the heart of the hidden curriculum of educational institutions. Maybe it should be less hidden, and more transparent?

I won’t go on to detail possible solutions here, but in terms of where I am, I could easily envisage

  • a framework for the competences, values or attitudes needed for effective peer-to-peer collaboration;
  • a set of peer-assessed badges attesting these;
  • related courses being set up as MOOCs;
  • a whole lot of relevant open educational resources

and so on.

Back to the conference title: “Building the Digital Institution”. Is it, I ask, an institution that we want, in any recognisable form, complete with a hidden curriculum of a culture that is unlikely to be collaborative? Or is it a radically different kind of social organisation, built around, and promoting, a positive learners’ culture of learning through and for collaboration, peer-to-peer, co-operative in the best sense? Maybe our ideas, as well as our new technologies, can now help us make new efforts in the right direction. Let us not apply technology to entrenching elitism and privilege, but rather towards co-creating a knowledge commons that is truly open and transparent.

Why, when and how should we use frameworks of skill and competence?

(25th in my logic of competence series.)

When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.

I’ll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.

What are these frameworks?

Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications — e.g. NVQs in England — which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide — to list the areas would take far too much space — so the reader is asked to take a look at the national database of current NOSs, which is hosted by the UKCES on their dedicated web site.

Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because of the consequences of their competence on life and death, is the medical profession. A document like Good Medical Practice, published by the General Medical Council, starts by addressing doctors:

Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.

and then goes on to detail those domains:

  • Knowledge, skills and performance
  • Safety and quality
  • Communication, partnership and teamwork
  • Maintaining trust

The GMC also publishes the related Tomorrow’s Doctors, in which it

sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term that is intended to cover knowledge, skills, behaviours, attitudes, etc., but as that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I’ll use “ability” as a catch-all term in this context.

Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person’s abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees’ abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider’s view!

Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.

Some frameworks are intended for wider use than just employment. A good example is Mozilla’s Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.

Links to badges

You can imagine having badges for keeping track of people’s abilities, where the abilities are part of frameworks. To help people move between different roles, from education and training to work, and back again, having their abilities recognised, and not having to retrain on abilities that have already been mastered, those frameworks would have to be openly published, able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.

Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees’ abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.

The differing nature of different abilities

However, maybe we need to be more careful than simply to take every open framework and turn it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and we can imagine the horrendous complexity of maintaining and managing them. So it might make sense to select the most appropriate abilities for badging, as follows.

  • Some abilities are plentiful, and don’t need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
  • Some abilities are hard, or impossible, to develop: you have them or you don’t. In this case, using badges would risk being discriminatory. Badges for, say, how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic; and one can think of many other personal characteristics, potentially framed as abilities, which might be less visible on the surface, but could lead to discrimination, as people can’t just change them.
  • Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
  • Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.

Practical considerations governing the creation and use of frameworks

It’s hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.

  • The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
  • The abilities have to be described in a way that makes sense to all collaborating parties.
  • It must be practical to include the framework into other tools.
  • The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
  • In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
  • Thus, the descriptions of the abilities, and the way in which they are put together, need all to be compatible.

Let’s look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.

For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.
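As a sketch of what such a tool might do underneath – assuming the frameworks were already available in a common machine-readable form – building a bespoke structure could amount to selecting components by their identifiers and recording where they came from. The URIs below are invented, and the labels only indicative:

```python
# Illustrative only: select abilities from published frameworks (identified
# here by invented URIs) and combine them into a bespoke structure for one role.

published = {
    "https://example.org/e-cf/B.3": "Testing",
    "https://example.org/nos/CFABAI131": "Support corporate decision-making",
    "https://example.org/frameworks/office/minute-taking": "Minute taking",
}

role_requirements = [
    "https://example.org/e-cf/B.3",
    "https://example.org/nos/CFABAI131",
]

bespoke_structure = {
    "title": "Abilities required for role X",
    "parts": [{"id": uri, "label": published[uri]} for uri in role_requirements],
}
```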

Contrast this with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.

We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.

The InLOC format

Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short. We developed CEN Workshop Agreement CWA 16655 in three parts, available from CEN in PDF format by ftp:

  1. Information Model for Learning Outcomes and Competences
  2. Guidelines including the integration of Learning Outcomes and Competences into existing specifications
  3. Application Profile of Europass Curriculum Vitae and Language Passport for Integrating Learning Outcomes and Competences

The same content and much extra background material is available on the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.

What can people do in the meanwhile?

I’ve proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass where there are enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worthwhile to make the extra effort of formatting frameworks in the common way (hopefully InLOC) that will work with the tools.

There will be a point at which growth and development in this area will become self-sustaining. But we don’t have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.

1. Build your own frameworks
It’s a challenge if you haven’t been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn’t need a framework, perhaps you can build one for a larger organisation.
2. Use parts of frameworks that are there already, where suitable
It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
3. Publish your frameworks, and their constituent abilities, each with a URL
This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels, see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs; a hypothetical illustration follows this list.
4. Use the frameworks consistently throughout the organisation
To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I’m not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
5. Use InLOC if possible
It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
6. Consider integrating open badges
It makes sense to consider your badge strategy and your framework strategy together. You may also find this old post of mine helpful.
7. Watch for future development of tools, or develop some yourself!
If you see any, try to help them towards being really useful, by giving constructive feedback. I’d be happy to help any tool developers “get” InLOC.
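Returning to step 3 above, here is a purely hypothetical illustration of the kind of identifiers meant – the domain and paths are invented:

```python
# Hypothetical identifiers only: the domain and paths are invented.  The point
# is simply that the framework, and each constituent ability and level, gets
# its own stable URL that other tools and people can refer to.

framework_url = "https://frameworks.example.org/office-skills/1.0"

identifiers = {
    "framework":             framework_url,
    "communication":         framework_url + "/communication",
    "communication-level-2": framework_url + "/communication/level/2",
    "minute-taking":         framework_url + "/minute-taking",
}
```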

I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See other of my posts for related ideas. Comments or other feedback would be most welcome!

The growing need for open frameworks of learning outcomes

(A contribution to Open Education Week — see note at end.)

(24th in my logic of competence series.)

What is the need?

Imagine what could happen if we had really good sets of usable open learning outcomes, across academic subjects, occupations and professions. It would be easy to express and then trace the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.

Some examples … In academic study, for example around a more complex topic from calculus, perhaps it would be made clear what other mathematics needs to be mastered first (see this recent example which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to results giving information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, etc.

It won’t surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.

So what could then happen?

Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person’s learning record, every person’s CV, people’s electronic portfolios, could have each individual point referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which are the learning outcomes that candidates need to have achieved, to have a chance of being selected.

All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.
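As a sketch of the kind of matching this would enable – all the URIs and the part-of data below are invented for the example – a search service could expand each required outcome through its parts and then simply compare sets of identifiers:

```python
# Illustrative only: match a job's required learning outcomes against a
# candidate's achieved ones, expanding composite outcomes via their parts.
# All identifiers and the part-of data are invented for this example.

has_loc_part = {
    "https://example.org/outcomes/dementia-care": {
        "https://example.org/outcomes/communication-with-patients",
        "https://example.org/outcomes/medication-management",
    },
}

def expand(outcome, parts=has_loc_part):
    """Return the outcome together with all its (transitive) parts."""
    result = {outcome}
    for part in parts.get(outcome, ()):
        result |= expand(part, parts)
    return result

required = expand("https://example.org/outcomes/dementia-care")
achieved = {"https://example.org/outcomes/communication-with-patients"}

missing = required - achieved   # outcomes the candidate would still need to evidence
print(sorted(missing))
```

The point is not the code, of course, but that none of this is possible unless each learning outcome has its own unambiguous identifier that everyone refers to.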

So why doesn’t that happen already? To answer that, we need to look at what is actually out there, what it doesn’t offer, and what can be done about it.

What is out there?

Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years now in the UK, Sector Skills Councils (SSCs), and other similar bodies, have been producing National Occupational Standards (NOSs), which provided the basis for all National Vocational Qualifications (NVQs). In theory at least, this meant that the industry representatives in the SSCs made sure that the needs of industry were reflected in the assessment criteria for awarding NVQs, generally regarded as useful and prized qualifications at least in occupations that are not classed as “professional”.

NOSs have always been published openly, and they are still available to be searched and downloaded at the UKCES’s NOS site. The site provides a search page. As one of my current interests is corporate governance, I put that phrase into the search box, giving several results, including a NOS called CFABAI131 Support corporate decision-making (which is a PDF document). It’s a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which is what is seen to be needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.

In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC publishes Tomorrow’s Doctors, introduced as follows:

The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

Tomorrow’s Doctors outlines the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC’s list of requirements for someone to be recognised as a competent doctor.

The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.

Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course and ask for it to be accredited. It doesn’t have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes rather than the needs of employers or of learners themselves. In FE, the standardising influence of NOSs has been weakened and diluted.

In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.

What do these not offer?

The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly by processing paper-style documentation into the ASN web site, a process that includes allocating ASN URIs. It hasn’t been used much by authorities constructing their own learning outcome frameworks, with URIs belonging to their own domains, though in principle it could be.

Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.
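
A small illustration of the difference this makes: with published URIs, deciding whether two records refer to the same outcome is an exact comparison; without them, software is reduced to guessing from the wording. The snippet below is a sketch with invented URIs and example statements, and the text-similarity measure is purely illustrative, not a recommended matching technique.

```python
from difflib import SequenceMatcher

# With identifiers: unambiguous and machine-checkable.
a = "https://example.org/loc/ict/spreadsheet-budgeting"
b = "https://example.org/loc/ict/spreadsheet-budgeting"
print(a == b)  # True: exactly the same outcome

# Without identifiers: only the wording is available, so software can
# merely estimate whether two statements mean the same thing.
text_a = "Can prepare a basic project budget in a spreadsheet"
text_b = "Prepares simple project budgets using spreadsheet software"
score = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
print(round(score, 2))  # a similarity score, not a yes/no answer
```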

As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.

Thus, there is little effective transparency, no easy comparison, only the confusion of it being practically impossible to do the useful things that were envisaged above.

What is needed?

What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.

And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There’s a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop on Learning Technologies, and funded by DG Enterprise’s ICT Standardization Work Programme.

InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competencies, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.



InLOC and OpenBadges: a reprise

(23rd in my logic of competence series.)

InLOC is well designed to provide the conceptual “glue” or “thread” for holding together structures and planned pathways of achievement, which can be represented by Mozilla OpenBadges.

Since my last post — the last of the previous academic year, also about OpenBadges and InLOC — I have been invited to talk at OBSEG – the Open Badges in Scottish Education Group. This is a great opportunity, because it involves engaging with a community with real aspirations for using Open Badges. One of the things that interests people in OBSEG is setting up combinations of lesser badges, or pathways for several lesser badges to build up to greater badges. I imagine that if badges are set up in this way, the lesser badges are likely to become the stepping stones along the pathway, while it is the greater badge that is likely to be of direct interest to, e.g., employers.

All this is right in the mainstream of what InLOC addresses. Remember that, using InLOC, one can set out and publish a structure or framework of learning outcomes, competenc(i)es, etc. (called “LOC definitions”), each one with its own URL (or IRI, to be technically correct), with all the relationships between them set out clearly (as part of the “LOC structure”).
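
By way of illustration, here is a much-simplified sketch of such a structure as data. The property names ("definitions", "relationships", "hasPart") are shorthand for this post, not the exact InLOC vocabulary, and the URLs are invented.

```python
# A much-simplified sketch of a LOC structure: one structure relating
# several LOC definitions, each with its own URL. Property names are
# shorthand for illustration, not the exact InLOC vocabulary.

loc_structure = {
    "id": "https://example.org/loc/structures/customer-service",
    "title": "Customer service skills",
    "definitions": [
        {
            "id": "https://example.org/loc/defs/customer-service",
            "label": "Customer service",
        },
        {
            "id": "https://example.org/loc/defs/handling-complaints",
            "label": "Can resolve a routine customer complaint unaided",
        },
    ],
    "relationships": [
        {
            "type": "hasPart",  # shorthand for a part-whole relation
            "from": "https://example.org/loc/defs/customer-service",
            "to": "https://example.org/loc/defs/handling-complaints",
        },
    ],
}

# Each definition's URL could be published on the web, so that a badge,
# a course description or a CV can point at exactly that component.
for rel in loc_structure["relationships"]:
    print(rel["from"], rel["type"], rel["to"])
```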

The way in which these Scottish colleagues have been thinking of their badges brings home another key point that puts the use of InLOC into perspective. As with so many certificates, awards, qualifications, etc., part of the achievement is completion in compliance with the constraints or conditions set out. These are not likely to be learning outcomes or competences in their own right.

The simplest of these non-learning-outcome criteria could be attendance. Attendance, you might say, stands in for some kind of competence; but the kind of basic timekeeping and personal organisation ability that is evidenced by attendance is very common in many activities, and so is unlikely to be significant in the context of a Badge awarded for something else. Other such criteria could be grouped together under “ability to follow instructions” or something similar. A different kind of criterion could be the kinds of character “traits” that are not expected to be learned. A person could be expected to be cheerful, respectful, tall, good-looking, or a host of other things not directly under their control, and either difficult or impossible to learn. These non-learning-outcome aspects of criteria are not what InLOC is principally designed for.

Also, over the summer, Mozilla’s Web Literacy Standard (“WebLitStd”) has been progressing towards version 1.0, to be featured in the upcoming MozFest in London. I have been tracking this with the help of Doug Belshaw, who, after great success as an Open Badges evangelist, has been focusing on the WebLitStd as its main protagonist. I’m hoping (by MozFest time, ideally) to have a version of the WebLitStd in InLOC, and this brings to the fore another very pragmatic question about using InLOC as a representation.

Many posts ago, I was drawing out the distinction between LOC (that is, Learning Outcome or Competence) definitions that are, on the one hand, “binary”, and on the other hand, “rankable”. This is written up in the InLOC documentation. “Binary” ones are the ones for which you can say, without further ado, that someone has achieved this learning outcome, or not yet achieved it. “Rankable” ones are ones where you can put people in order of their ability or competence, but there is no single set of criteria distinguishing two categories that one could call “achieved” and “not yet achieved”.
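
One way to picture the distinction is in code. The sketch below uses invented examples: a binary definition behaves like a yes/no test against stated criteria, while a rankable one only supports putting people in order.

```python
# Sketch: a "binary" definition supports a yes/no judgement; a
# "rankable" one only supports ordering people. Examples are invented.

def achieved_binary(minutes_for_5km):
    """Binary: 'can run 5 km in under 30 minutes' - either met or
    not yet met."""
    return minutes_for_5km < 30

def rank_by_ability(best_times):
    """Rankable: 'running ability' - we can order people by their best
    5 km time, but there is no built-in line between 'achieved' and
    'not yet achieved'."""
    return sorted(best_times, key=lambda item: item[1])

print(achieved_binary(27.5))  # True: the standard is met
print(rank_by_ability([("Ana", 33.0), ("Ben", 24.5), ("Cal", 29.0)]))
```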

In the WebLitStd, it is probably fair to say that none of the “competencies” are binary in these terms. One could perhaps characterise them as rankable, though not fully so: there may be two people with different configurations of a given competency, as a result of different experiences, each of whom is better in some ways than the other, and conversely less good in others. It may well be similar in some of the Scottish work, or indeed in many other Badge criteria. So what to do for InLOC?

If we recognise a situation where the idea is to issue a badge for an achievement that is clearly not a binary learning outcome, we can outline a few stages in the development of badge frameworks, which would result in progressively tighter matching to an InLOC structure or InLOC definitions. I’ll take the WebLitStd as illustrative material here.

First, someone may develop a badge for something that is not yet well-defined anywhere — it could have been conceived without reference to any existing standards. To illustrate this case, take a badge titled “using Web sites”. No single component of the WebLitStd covers “using Web sites”, and yet a badge with that title would not cover Web literacy as a whole. In this case, the Badge criteria would need to be detailed by the Badge awarder, specifically for that badge. What can still be done within OpenBadges is to provide alignment information; however, it is not always entirely clear what the relationship is meant to be between a badge and a standard it is “aligned” to. The simplest possibility is that the alignment is to some kind of educational level. Beyond this it gets trickier.
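
To illustrate this first stage, here is a sketch of badge metadata (a Python dict mirroring the JSON of a BadgeClass) for such a badge: the criteria are spelled out by the issuer at their own URL, and there is only a loose alignment to the WebLitStd. The field names follow my reading of the Open Badges metadata of the time and should be treated as illustrative; all URLs are invented.

```python
# Sketch of badge metadata for a badge whose criteria are not covered
# by any one existing definition. Field names follow my reading of the
# Open Badges BadgeClass JSON of the time; URLs are invented.

badge_class = {
    "name": "Using Web sites",
    "description": "Awarded for demonstrating everyday use of web sites.",
    "criteria": "https://example.org/badges/using-web-sites/criteria",
    "issuer": "https://example.org/issuer.json",
    "alignment": [
        {
            "name": "Web Literacy Standard (working draft)",
            "url": "https://example.org/weblit/standard",
            "description": "Loosely related; the badge does not map to "
                           "any single WebLitStd competency.",
        },
    ],
}

# The criteria URL has to carry the real weight here: the alignment
# entry only says "related to this standard", not "meets this outcome".
print(badge_class["criteria"])
```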

A second possibility for a single badge would be to refer to an existing “rankable” definition. For example, consider the WebLitStd skill, “co-creating web resources”, which is part of the “sharing & collaborating” competency of the “Connecting” strand. To think in detail about how this kind of thing could be badged, we need to understand what would count (in the eye of the badge issuer) as “co-creating web resources”. Very many possible examples readily come to mind, from talking about what a web page could have on it, to playing a vital part in a team building a sophisticated web service. One may well ask, “what experiences do you have of co-creating web resources?” and, depending on the answer, one could roughly rank people in order of the amount and depth of their experience in this area. To create a meaningful badge, a clearer line needs to be drawn. Just talking about what could be on a web page is probably not going to be very significant for anyone, as it is an extremely common experience. So what counts as significant? It depends on the badge issuer, of course, who will need to define the criteria under which the badge is issued if it is to be meaningful.

A third and final stage, ideal for InLOC, would be if a badge is awarded with clearly binary criteria. In this case there is nothing standing in the way of having the criteria property of the Badge hold a URL for a concept directly represented as a binary InLOC LOCdefinition. There are some WebLitStd skills that could fairly easily be seen as binary. Take “distinguishing between open and closed licensing” as an example. You show people some licences; either they correctly identify the open ones or they don’t. That’s (reasonably) clear-cut. Or take “understanding and labeling the Web stack”. Given a clear definition of what the “Web stack” is, this appears to be a fairly clear-cut matter of understanding and memory.
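
In that third stage, the badge's criteria can simply be the URL of the binary LOC definition itself. A sketch, again with invented URLs and simplified InLOC properties:

```python
# Sketch of the third stage: the badge's criteria point directly at a
# binary LOC definition published with its own URL. URLs are invented
# and the InLOC properties are simplified for illustration.

loc_definition = {
    "id": "https://example.org/loc/defs/open-vs-closed-licensing",
    "label": "Distinguishing between open and closed licensing",
    "description": "Shown a set of licences, correctly identifies "
                   "which are open and which are closed.",
    "type": "binary",  # achieved / not yet achieved
}

badge_class = {
    "name": "Licence spotter",
    "description": "Can tell open licences from closed ones.",
    "criteria": loc_definition["id"],  # the criteria URL is the LOC definition's URL
    "issuer": "https://example.org/issuer.json",
}

print(badge_class["criteria"])
```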

Working back again, we can see that in the third stage, a Badge can have criteria (not just alignments) which refer directly to InLOC information. At the first and second stages, badge criteria need something more than is already clearly set out in InLOC information published elsewhere. So the options appear to be:

  1. describing what the criteria are in plain text, with reference to InLOC information only through alignment; and
  2. defining an InLOC structure specifically for the badge, detailing the criteria.

The first of these options has its own challenges. For coherence, it will be vital to ensure that the alignments are consistent with each other. This will be possible, for example, if the aspects of competence covered are separate (independent, even orthogonal). So, if one alignment is to a level and the second to a topic area, that might work. But it is much less promising if more specific definitions are referred to.

(I’d like to write an example at this point, but can’t decide on a topic area — I need someone to give me their example and we can discuss it and maybe put it here.)

From the point of view of InLOC, the second option is much more attractive. In principle, any badge criteria could be analysed in sufficient detail to draw out the components which can realistically be thought of as learning outcomes — properties of the learners — that may be knowledge, skill, competence, etc. No matter how unusual or complex these are, they can in principle be expressed in InLOC form, and that will clarify what is really “aligned” with what.

I’ll say again, I would really like to have some well-worked-out examples here. So please, if you’re interested, get in touch and let’s talk through some that are of interest to you. I hope to be starting that in Glasgow this week.