The key to competence frameworks

(27th in my logic of competence series.)

So here I am … continuing the thread of the logic of competence, nearly 7 years on. I’m delighted to see renewed interest from several quarters in the field of competence frameworks. There’s work being done by the LRMI, and much potential interest from people working on various kinds of soft skills. And some kinds of “badges” – open credentials intended to be displayed and easily recognised – often rely on competence definitions for their award criteria.

I just have to say to everyone who explores this area: beware! There are two different kinds of thing that go under similar names: “competencies”; “competences”; “competence definitions”; skills; etc.

  1. There is one kind: statements of ability that people either measure up to or not. My favourite simple understandable examples are things like “can juggle 5 balls for a minute without dropping any” or “can type at 120 words per minute from dictation making fewer than 10 mistakes”. But there are many less exact examples of similar things, which clearly either do or do not apply to individuals at a given time of testing. “Knows how to solve quadratic equations using the formula” or “can apply Pythagoras’ theorem to find the length of the third side of a right-angled triangle” might be two from mathematics. There are many more from the vocational world, but they would mean less to those outside the profession or occupation concerned.
  2. Then there is another kind, more of a statement indicating an ability or area of competence in which someone can be more or less proficient. Taking the examples above, these might be: “can juggle” or “juggling skills”; “can type” or “typing ability”; “knows about mathematics” or “mathematical ability”. There are vast numbers of these, because they are easier to construct than the other kind. “Can manage a small business”; “good communicator”; “can speak French”; “good at knitting”; “a good diplomat”; “programming”; “chess”; you can think of your own.

What you can see quite plainly, on looking, is that with the first kind of statement, it is possible to say whether or not someone comes up to that standard; while with the second kind of phrase, either there is no standard defined, or the standard is too vague to judge whether or not someone “has” that ability — it’s more a question of how much of that ability you have.

In the past, I’ve called the first kind of form of words a “binary” competence definition, and the second kind “rankable”. (Just search for “binary rankable” and you’ll get plenty.) But these are so unmemorable that even I forgot what I had called them. I’m looking for better names, that people (including myself) can easily remember.

Woe betide anyone who mixes the two kinds without realising what they are doing! Woe betide also anyone who uses one kind only, and imagines that the other kind either don’t exist or don’t matter.

The world is full of lists of skills which people should have some of. “Communication skills”. “Empathy”. “Resilience”. Loads of them. And in most cases, these are just of the second kind: no particular level of the skill is defined, and people are expected to produce evidence of how good they are at the given skill, when asked.

In the vocational world of occupations and professions, however, we see very many well-defined statements that are of the first kind. This is to be expected, because to give someone a professional qualification requires that they are assessed as possessing skills to a certain, sufficient level.

The two kinds of statements are intimately related. Take any statement of the first kind. What would be better, or not so good? Juggling 3 balls for 30 seconds? Typing at 60 words per minute? These belong, as points, on the scales of juggling skill and typing ability respectively. Thus, every statement of the first kind has at least one scale that it is a point on. Conversely, every scale description, of the second kind, can, with sufficient insight, be detailed with positions on that scale, which will be statements of the first kind.

In the InLOC information model, these reciprocal relationships are given the identifiers hasDefinedLevel and isDefinedLevelOf. This is perhaps the most essential and vital pair of relationships in InLOC.
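
As a minimal sketch of that reciprocal pair (not InLOC’s own syntax — the identifiers and the triple layout here are invented for illustration; only the two relationship names come from InLOC):

```python
# Illustrative only: each binary definition is recorded as a defined level of a
# rankable definition, and the inverse relationship can be derived mechanically.
has_defined_level = [
    ("ex:typing-ability", "hasDefinedLevel", "ex:types-120wpm-max-10-mistakes"),
    ("ex:juggling-skills", "hasDefinedLevel", "ex:juggles-5-balls-for-1-minute"),
]

# isDefinedLevelOf is simply the inverse of hasDefinedLevel.
is_defined_level_of = [(o, "isDefinedLevelOf", s) for (s, _, o) in has_defined_level]

print(is_defined_level_of)
```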

So what about competence frameworks? Well, a framework, whether explicitly or implicitly, is about relating these two kinds of statements together. It is about defining areas of ability that are important, perhaps to an activity or a role; and then also defining levels of those abilities at which people can be assessed. It’s only when these levels are defined that one has criteria, not only for passing exams or recruiting employees, but also for awarding badges. And the interest in badges has held this space open for the seven years I’ve been writing about the logic of competence. Thank you, those working with badges!

Now I’ve explained this again, could you help me by saying which pair of terms would best describe for you the two kinds of statements, better than “binary” and “rankable”? I’d be most grateful.

How do I go about doing InLOC?

(26th in my logic of competence series.)

It’s been three years now since the European expert team started work on InLOC, working out a good model for representing structures and frameworks of learning outcomes, skills and competences. As can be expected of forward-looking, provisional work, there has not yet been much take-up, but it’s all in place, and more timely now than ever.

Then yesterday I received a most welcome call from a training company involved in one particular sector, who are interested in using the principles of InLOC to help their LMS map course and module information to qualification frameworks. Yes! I enthusiastically replied.

What might help people in that situation is a simple, basic approach that sets you on the right path for doing things the InLOC way. I realised that this isn’t so easy to find in the main documentation, so here I set out this basic approach, which will reliably get anyone started on mapping anything to the InLOC model, and cross-references the InLOC documentation.

One description of what to do is documented in the section How to follow InLOC, but, for all the reasons above, here I will try going back to basics and starting again, in the hope that describing the approach in a different way may be helpful.

LOC definitions

The most basic feature that occurs many, many times in any published framework is called, by InLOC, a “LOC definition”. This is, simply, any concept, described by any form of words, that indicates an ability – whether it be knowledge, skill, competence or any other learning outcome – that can be attributed to an individual person, and in some way – any way – assessed. It’s hard to define more clearly or succinctly than that, and to get a better understanding you may want to look at examples.

In the documentation, the best place to start is probably the section on InLOC explained through example. In that section, a framework (the European e-Competence Framework, e-CF) is thoroughly analysed. You can see in Figure 2 how, for just one page of the documentation, each LOC definition has been picked out separately.

LOC definitions cover at least these overlapping classes of concept:

  • anything that is listed as a learning outcome, a skill, a competency, an ability;
  • any separate parts of any learning outcomes;
  • anything that expresses an assessment criterion;
  • any level of any outcome, skill, competence, etc. (at any granularity);
  • a generic definition of what is required by a level.

Pieces of text that relate to the same concept – e.g. title and description of the same thing – are treated together. Everything that can be assessed separately is treated as a separate LOC definition. The grammatical structure of the text is of little importance. Often, though, in amongst the documentation, you read text that is not to do with abilities. Just pass over this for the moment.

One thing I’ve noticed sometimes is that some concepts, which could have their own LOC definitions, are implied but not explicit in the documentation. In yesterday’s discussion, one example was the levels of the unit as a whole. Assessment criteria are often specified for different levels of particular abilities, but the level as a whole is implied.

The first step, then, is to look for all the LOC definitions in your documentation, and any implied ones that are not explicitly documented. ANY piece of text that represents something that could potentially be assessed as an outcome of learning is most likely a LOC definition.
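
A minimal sketch of what this first step might produce, in Python (this is just illustrative bookkeeping, not an official InLOC binding; the field names and identifiers are assumptions):

```python
from dataclasses import dataclass

# One record per separately assessable concept found in, or implied by,
# the framework documentation.
@dataclass
class LOCDefinition:
    id: str                 # a stable identifier, ideally a dereferenceable URI
    title: str              # the short form of words
    description: str = ""   # any longer text relating to the same concept

definitions = [
    LOCDefinition("ex:touch-typing", "Touch typing"),
    LOCDefinition("ex:type-60wpm",
                  "Can touch type in English at 60 wpm with fewer than 1 mistake per 100 words"),
    LOCDefinition("ex:juggling", "Juggling ability"),
    LOCDefinition("ex:juggle-3-1min", "Can juggle with three juggling balls for a minute or longer"),
]
```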

Binary and rankable

If you’ve looked through the documentation, you’ve probably come across this distinction, and it is very helpful if you are going to structure something in the InLOC way. But when I was writing the documentation, I don’t think I had grasped quite how central it is. It is so central that more recently I have come to put it forward as a vital first concept to grasp. Very recently I quickly put together a slide deck about this, on Slideshare now, under the title Distinguishing binary and rankable definitions is key to structuring competence frameworks.

I first publicly clarified this distinction in a blog post before InLOC even started: Representing level relationships; and more recently mentioned it in InLOC and OpenBadges: a reprise.

In essence: a binary learning outcome or competence (LOC) concept is one where it makes sense to ask, have you reached this level or standard? Are you as good as this? The answer gives a binary distinction between “yes”, for those people who have reached the level, and “not yet” for those who have not. The example I give in the recent slide deck is “can touch type in English at 60 wpm with fewer than 1 mistake per hundred words”. The answer is clearly yes or no. Or, “can juggle with three juggling balls for a minute or longer” (which I can’t yet).

On the other hand, a rankable concept is one where there is no clear binary criterion, but instead you can rank people in order of their ability in that concept. A rankable concept related to the previous binary one would simply be “touch typing” or “can touch type”. A good question for juggling would be “how well can you juggle?” You may want to analyse this more finely, and distinguish different independent dimensions of juggling ability, but more probably I guess you would be content to roughly rank people in order of a general juggling ability.

The second step is to look at all the LOC definitions you have isolated, and judge whether they are binary or (at least roughly) rankable.
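
Continuing the illustrative sketch above (again, the identifiers are invented), the second step just records that judgement against each definition:

```python
from enum import Enum

class Kind(Enum):
    BINARY = "binary"      # it makes sense to ask: have you reached this standard — yes or not yet?
    RANKABLE = "rankable"  # no single criterion; people can only be ranked by how good they are

# Step 2: a human judgement, recorded per definition identifier.
kinds = {
    "ex:touch-typing": Kind.RANKABLE,
    "ex:type-60wpm": Kind.BINARY,
    "ex:juggling": Kind.RANKABLE,
    "ex:juggle-3-1min": Kind.BINARY,
}
```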

Relating LOC definitions together

The third step is to relate all the LOC definitions you found to each other. Frameworks commonly have a hierarchical structure. An ability at a “high” level (of granularity) involves many abilities at “lower” levels. The simplest way of representing that is that the wider definition “has parts”, which are the narrower definitions, perhaps the products of “functional analysis” of the wider definition. InLOC allows you to relate definitions in this way, using the relationship “hasLOCpart”.

But InLOC also allows several other relationships between LOC definitions. These can be seen in the three tables on the relationships page in the documentation. To see how the relationships themselves are related, look at the third table, “ontology”. The tables together give you a clear and powerful vocabulary for describing relationships between LOC definitions. Naturally, it has been carefully thought through, and is a vital part of InLOC as a whole.

Very simple structures can be described using only the “hasLOCpart” relationship. However, when you have levels, you will need at least the “hasDefinedLevel” relationship as well. Broadly speaking, it will be a rankable LOC definition that “hasDefinedLevel” of a binary definition. Find these connections in particular!

For the other relationships, decide whether “hasLOCpart” is a good enough representation, or whether you need “hasNecessaryPart”, “hasOptionalPart” or “hasExample”. Each of these has a different meaning in the real world. Mostly, you will probably find that rankable definitions have rankable parts, and binary definitions have binary parts.
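
To make this third step concrete, here is a rough sketch of how those connections could be noted down while analysing a framework (the relationship names are InLOC’s; the identifiers and the bare-triple layout are simplifications for illustration — InLOC itself carries more information on each relationship):

```python
# Step 3, sketched: relationships between LOC definitions, kept separate from
# the definitions themselves. All identifiers are invented for the example.
relationships = [
    ("ex:office-skills", "hasLOCpart", "ex:touch-typing"),         # rankable whole, rankable part
    ("ex:office-skills", "hasNecessaryPart", "ex:email-etiquette"), # a part that is required
    ("ex:touch-typing", "hasDefinedLevel", "ex:type-60wpm"),        # rankable definition, binary level
]
```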

There is more related discussion in another of the blog posts from my “logic of competence” series, More and less specificity in competence definitions.

Putting together the LOC structure

In InLOC, a “LOC structure” is the collection of LOC definitions along with the relationships between them. Relationships between LOC definitions are only defined in LOC structures. This is to allow LOC definitions to appear in different structures, potentially with different relationships. You may think you know what comprises, for example, communication skills, but other people may have different opinions, and classify things differently.

A LOC structure often corresponds to a complete documented scheme of learning outcomes, and often has a name which is clearly not something that is a LOC definition, as described previously. You can’t assess how good someone is at “the European e-Competence Framework” (the e-CF) (unless you mean knowledge of that framework), but you can assess how good people are at its component parts, the LOC definitions (for rankable ones) or whether they reach the defined levels (for binary ones).

And the e-CF, analysed in detail in the InLOC documentation, is a good example where you can trace the structure down in two ways: either by topic, then later by levels; or by level, and then levelled (binary) topic definitions that are part of those levels.

Your aim is to document all the relationships between LOC definitions that are relevant to your application, and wrap those up with other related information in a LOC structure.
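
As a final illustrative sketch (the class and field names here are stand-ins, not the normative InLOC element names), the structure simply wraps the definitions and relationships from the previous steps together with information about the framework itself:

```python
from dataclasses import dataclass, field

@dataclass
class LOCStructure:
    id: str
    title: str
    definitions: list = field(default_factory=list)    # identifiers of the LOC definitions included (step 1)
    relationships: list = field(default_factory=list)  # relationship entries between them (step 3)

# A tiny, invented example: one rankable definition, one binary level, one link.
structure = LOCStructure(
    id="ex:office-skills-framework",
    title="Office skills framework (illustrative)",
    definitions=["ex:touch-typing", "ex:type-60wpm"],
    relationships=[("ex:touch-typing", "hasDefinedLevel", "ex:type-60wpm")],
)
```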

What you will have gained

The task of creating an InLOC structure is more than simply creating a file that can potentially be transmitted between web applications, and related to, or referred to by, other structures that you are dealing with. It is also an exercise that can reveal more about the structure of the framework than was explicitly written into it. Often one finds oneself making explicit the relationships that are documented only implicitly in terms of page and table layout. Often one fills in LOC definitions that have been left out. Whichever way you do it, you will be left with firmer, more principled structures on which to build your web applications.

We expect that sooner or later InLOC will be adopted as at least the basis of a model underlying interoperable and portable representations of frameworks of learning outcomes, skills, competences, abilities, and related knowledge structures. Much of the work has been done, but it may need revising in the light of future developments.

Why, when and how should we use frameworks of skill and competence?

(25th in my logic of competence series.)

When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.

I’ll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.

What are these frameworks?

Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications — e.g. NVQs in England — which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide — to list the areas would take far too much space — so the reader is asked to take a look at the national database of current NOSs, which is hosted by the UKCES on their dedicated web site.

Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because of the consequences of their competence on life and death, is the medical profession. Good Medical Practice, published by the General Medical Council, starts by addressing doctors:

Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.

and then goes on to detail those domains:

  • Knowledge, skills and performance
  • Safety and quality
  • Communication, partnership and teamwork
  • Maintaining trust

The GMC also publishes the related Tomorrow’s Doctors, in which it

sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term that is intended to cover knowledge, skills, behaviours, attitudes, etc., but as that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I’ll use “ability” as a catch-all term in this context.

Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person’s abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees’ abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider’s view!

Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.

Some frameworks are intended for wider use than just employment. A good example is Mozilla’s Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.

Links to badges

You can imagine having badges for keeping track of people’s abilities, where the abilities are part of frameworks. To help people move between different roles, from education and training to work, and back again, having their abilities recognised, and not having to retrain on abilities that have already been mastered, those frameworks would have to be openly published, able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.

Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees’ abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.

The differing nature of different abilities

However, maybe we need to be more careful than simply to take every open framework, and turn it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and we could imagine the horrendous complexity of maintaining and managing them. So it might make sense to select the most appropriate abilities for badging, as follows.

  • Some abilities are plentiful, and don’t need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
  • Some abilities are hard, or impossible, to develop: you have them or you don’t. In this case, using badges would risk being discriminatory. Badges for e.g. how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic, while one can think of many other personal characteristics, potentially framed as abilities, which might be less visible on the surface but could still lead to discrimination, as people can’t simply change them.
  • Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
  • Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.

Practical considerations governing the creation and use of frameworks

It’s hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.

  • The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
  • The abilities have to be described in a way that makes sense to all collaborating parties.
  • It must be practical to incorporate the framework into other tools.
  • The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
  • In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
  • Thus, the descriptions of the abilities, and the way in which they are put together, need all to be compatible.

Let’s look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.

For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.

Contrast this with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of their relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.

We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.

The InLOC format

Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short. We developed CEN Workshop Agreement CWA 16655 in three parts, available from CEN in PDF format by ftp:

  1. Information Model for Learning Outcomes and Competences
  2. Guidelines including the integration of Learning Outcomes and Competences into existing specifications
  3. Application Profile of Europass Curriculum Vitae and Language Passport for Integrating Learning Outcomes and Competences

The same content and much extra background material is available on the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.

What can people do in the meanwhile?

I’ve proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass where there are enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worthwhile to make the extra effort to format frameworks in the common way (hopefully InLOC) that will work with the tools.

There will be a point at which growth and development in this area will become self-sustaining. But we don’t have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.

1. Build your own frameworks
It’s a challenge if you haven’t been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn’t need a framework, perhaps you can build one for a larger organisation.
2. Use parts of frameworks that are there already, where suitable
It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
3. Publish your frameworks, and their constituent abilities, each with a URL
This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels, see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs — see the sketch just after this list.
4. Use the frameworks consistently throughout the organisation
To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I’m not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
5. Use InLOC if possible
It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
6. Consider integrating open badges
It makes sense to consider your badge strategy and your framework strategy together. You may also find this old post of mine helpful.
7. Watch for future development of tools, or develop some yourself!
If you see any, try to help them towards being really useful, by giving constructive feedback. I’d be happy to help any tool developers “get” InLOC.
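
As a sketch of what point 3 above might look like in practice (every URL here is hypothetical, purely to illustrate the idea of one identifier for the framework, and one for each constituent ability and each defined level):

```python
# Hypothetical identifiers only: one URL for the framework as a whole, and one
# for each constituent ability and each defined level within it.
framework_url = "https://example.org/frameworks/customer-service"

component_urls = {
    "handling complaints":          framework_url + "/handling-complaints",
    "handling complaints, level 2": framework_url + "/handling-complaints/level-2",
    "product knowledge":            framework_url + "/product-knowledge",
}
```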

I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See other of my posts for related ideas. Comments or other feedback would be most welcome!

The growing need for open frameworks of learning outcomes

(A contribution to Open Education Week — see note at end.)

(24th in my logic of competence series.)

What is the need?

Imagine what could happen if we had really good sets of usable open learning outcomes, across academic subjects, occupations and professions. It would be easy to express and then trace the relationships between any learning outcomes. To start with, it would be easy to find out which higher-level learning outcomes are composed, in a general consensus view, of which lower-level outcomes.

Some examples … In academic study, for example around a more complex topic from calculus, perhaps it would be made clear what other mathematics needs to be mastered first (see this recent example which lists, but does not structure). In management, it would be made clear, for instance, what needs to be mastered in order to be able to advise on intellectual property rights. In medicine, to pluck another example out of the air, it would be clarified what the necessary components of competent dementia care are. Imagine this is all done, and each learning outcome or competence definition, at each level, is given a clear and unambiguous identifier. Further, imagine all these identifiers are in HTTP IRI/URI/URL format, as is envisaged for Linked Data and the Semantic Web. Imagine that putting the URL into your browser leads you straight to results giving information about that learning outcome. And in time it would become possible to trace not just what is composed of what, but other relationships between outcomes: equivalence, similarity, origin, etc.

It won’t surprise anyone who has read other pieces from me that I am putting forward one technical specification as part of an answer to what is needed: InLOC.

So what could then happen?

Every course, every training opportunity, however large or small, could be tagged with the learning outcomes that are intended to result from it. Every educational resource (as in “OER”) could be similarly tagged. Every person’s learning record, every person’s CV, people’s electronic portfolios, could have each individual point referred, unambiguously, to one or more learning outcomes. Every job advert or offer could specify precisely which are the learning outcomes that candidates need to have achieved, to have a chance of being selected.

All these things could be linked together, leading to a huge increase in clarity, a vast improvement in the efficiency of relevant web-based search services, and generally a much better experience for people in personal, occupational and professional training and development, and ultimately in finding jobs or recruiting people to fill vacancies, right down to finding the right person to do a small job for you.

So why doesn’t that happen already? To answer that, we need to look at what is actually out there, what it doesn’t offer, and what can be done about it.

What is out there?

Frameworks, that is, structures of learning outcomes, skills, competences, or similar things under other names, are surprisingly common in the UK. For many years now, Sector Skills Councils (SSCs), and other similar bodies, have been producing National Occupational Standards (NOSs), which provided the basis for all National Vocational Qualifications (NVQs). In theory at least, this meant that the industry representatives in the SSCs made sure that the needs of industry were reflected in the assessment criteria for awarding NVQs, generally regarded as useful and prized qualifications at least in occupations that are not classed as “professional”.

NOSs have always been published openly, and they are still available to be searched and downloaded at the UKCES’s NOS site. The site provides a search page. As one of my current interests is corporate governance, I put that phrase into the search box, giving several results, including a NOS called CFABAI131 Support corporate decision-making (which is a PDF document). It’s a short document, with a few lines of overview, six performance criteria, each expressed as one sentence, and 15 items of knowledge and understanding, which is what is seen to be needed to underpin competent performance. It serves to let us all know what industry representatives think is important in that support function.

In professional training and development, practice has been more diverse. At one pole, the medical profession has been very keen to document all the skills and competences that doctors should have, and keen to ensure that these are reflected in medical education. The GMC publishes Tomorrow’s Doctors, introduced as follows:

The GMC sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

Tomorrow’s Doctors covers the outline of the whole syllabus. It prepares the ground for doctors to move on to working in line with Good Medical Practice — in essence, the GMC’s list of requirements for someone to be recognised as a competent doctor.

The medical field is probably the best developed in this way. Some other professions, for example engineering and teaching, have some general frameworks in place. Yet others may only have paper documentation, if any at all.

Beyond the confines of such enclaves of good practice, yet more diverse structures of learning outcomes can be found, which may be incoherent and conflicting, particularly where there is no authority or effective body charged with bringing people to consensus. There are few restrictions on who can now offer a training course, and ask for it to be accredited. It doesn’t have to be consistent with a NOS, let alone have the richer technical infrastructure hinted at above. In Higher Education, people have started to think in terms of learning outcomes (see e.g. the excellent Writing and using good learning outcomes by David Baume), but, lacking sufficient motivation to do otherwise, intended learning outcomes tend to be oriented towards institutional assessment processes, rather than to the needs of employers, or learners themselves. In FE, the standardising influence of NOSs has been weakened and diluted.

In schools in the UK there is little evidence of useful common learning outcomes being used, though (mainly) for the USA there exists the Achievement Standards Network (ASN), documenting a very wide range of school curricula and some other things. It has recently been taken over by private interests (Desire2Learn) because no central funding is available for this kind of service in the USA.

What do these not offer?

The ASN is a brilliant piece of work, considering its age. Also related to its age, it has been constructed mainly through processing paper-style documentation into the ASN web site, which includes allocating ASN URIs. It hasn’t been used much for authorities constructing their own learning outcome frameworks, with URIs belonging to their own domains, though it could in principle be.

Apart from ASN, practically none of the other frameworks that are openly available (and none that are not) have published URIs for every component. Without these URIs, it is much harder to identify, unambiguously, which learning outcome one is referring to, and virtually impossible to check that automatically. So the quality of any computer assisted searching or matching will inevitably be at best compromised, at worst non-existent.

As learning outcomes are not easily searchable (outside specific areas like NOSs), the tendency is to reinvent them each time they are written. Even similar outcomes, whatever the level, routinely seem to be reinvented and rewritten without cross-reference to ones that already exist. Thus it becomes impossible in practice to see whether a learning opportunity or educational resource is roughly equivalent to another one in terms of its learning outcomes.

Thus, there is little effective transparency, no easy comparison, only the confusion of it being practically impossible to do the useful things that were envisaged above.

What is needed?

What is needed is, on the one hand, much richer support for bodies to construct useful frameworks, and on the other hand, good examples leading the way, as should be expected from public bodies.

And as a part of this support, we need standard ways of modelling, representing, encoding, and communicating learning outcomes and competences. It was just towards these ends that InLOC was commissioned. There’s a hint in the name: Integrating Learning Outcomes and Competences. InLOC is also known as ELM 2.0, where ELM stands for European Learner Mobility, within which InLOC represents part of a powerful proposed infrastructure. It has been developed under the auspices of the CEN Workshop, Learning Technologies, and funded by the DG Enterprise‘s ICT Standardization Work Programme.

InLOC, fully developed, would really be the icing on the cake. Even if people did no more than publish stable URIs to go with every component of every framework or structure of learning outcomes or competencies, that would be a great step forward. The existence and openness of InLOC provides some of the motivation and encouragement for everyone to get on with documenting their learning outcomes in a way that is not only open in terms of rights and licences, but open in terms of practice and effect.


The third annual Open Education Week takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

Cetis staff are supporting Open Education Week by publishing a series of blog posts about open education activities. Cetis have had long-standing involvement in open education and have published a range of papers which cover topics such as OERs (Open Educational Resources) and MOOCs (Massive Open Online Courses).

The Cetis blog provides access to the posts which describe Cetis activities concerned with a range of open education activities.

What is my work?

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years’ involvement with CETIS (as of this month), might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and the evidence of these through their performance; and in their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years’ involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long term interests in psychology, psychotherapy, counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, thus likely to be more effective. Thus, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was, that I found myself involved in the area that is currently the main focus of my work, both for CETIS, and also on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it will be much easier for educators and employers to refer to the same definitions; and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work; or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities surely must support all processes relating to people acquiring and evidencing abilities, and making use of related evidence towards their jobs, their well-being, and maybe the well-being of others.

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really wanted.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Hermann Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would practically be able to support only very few at most. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; what the consequences of those decisions may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this, developmental, dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.

I have the strong sense that this all should be related. But the field as a whole doesn’t seem to have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell all kinds of services to them, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years worth of thanks…

Developing a new approach to competence representation

InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

The reason I’ve not written any blog posts for a few weeks is that so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have come close, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role is to do with specifications); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with the desired genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!)

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information, including a number, vital for defining levels (see the sketch after this list).
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including: identifier, title, description, dates, etc.) from what could be called “compound properties”. Compound properties, which have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (like in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
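
As a rough sketch of the first of these features (the class and field names below are illustrative assumptions, not the normative InLOC property names), reifying the relationship as its own object is what lets it carry the number used to order defined levels:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a relationship as its own object rather than a bare
# RDF-style triple, so it can carry a number (e.g. for ordering levels)
# and a human-readable label.
@dataclass
class Relationship:
    subject: str                    # identifier of the definition the relationship belongs to
    relation: str                   # e.g. "hasDefinedLevel", "hasLOCpart"
    target: str                     # identifier of the related definition
    number: Optional[float] = None  # e.g. the level's position on its scale
    label: str = ""                 # optional display label

level_2 = Relationship("ex:touch-typing", "hasDefinedLevel", "ex:type-60wpm",
                       number=2, label="Level 2")
```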

The outcome is something that is slightly reminiscent both of Atom itself, and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, but in doing so creates the need for harder-to-grasp constructs like blank nodes. The fact of being hard to grasp leads to people trying different ways of doing things, and possibly losing interoperability on the way. Both Atom and Topic Maps, in contrast, add a little more general purpose structure, which does make quite a lot of intuitive sense in both cases, and they have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies are simply trying to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach. We hope it combines ease of understanding with the power to express just what you want to express in your “framework”, and that you will be persuaded to use it “for real”, perhaps once we have made the improvements that you need.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

The logic of tourism as an analogy for competence

(20th in my logic of competence series.)

Modelling competence is too far removed from common experience to be intuitive. So I’ve been thinking of what analogy might help. How about the analogy of tourism? This may help particularly with understanding the duality between competence frameworks (like tourist itineraries) and competence concept definitions (like tourist destinations).

The analogy is helped by the fact that last week I was in Lisbon for the first time, at work (the CEN WS-LT and TC 353), but also, more relevantly, as a tourist. (If you don’t know Lisbon, think of examples from a place you know better.) I’ll start with the aspects of the analogy that seem most straightforward, and go on to more subtle features.

First things first, then: a tourist itinerary includes a list of destinations. This can be formalised as a guided tour, or left informal as a “things you should see” list given by a friend who has been there. A destination can be in any number of itineraries, or none. An itinerary has to include some destinations, but in principle it doesn’t have any upper limits: it could be a very detailed itinerary that takes a year to properly acquaint a newcomer with the ins and outs of the city. Different itineraries for the same place may have more, or fewer, destinations within that place. They may or may not agree on the destinations included. If there were destinations included by the large majority of guides, another guide could select these as the “essential” Lisbon or wherever. In this case, perhaps that would include visiting the Belem tower; the Castle of St George; Sintra; experiencing Fado; sampling the local food, particularly fish dishes; and a ride on one of the funicular trams that climb the steep hills. Or maybe not, in each case. There again, you could debate whether Sintra should be included in a guide to Lisbon, or just mentioned as a day trip.

A small itinerary could be made for a single destination, if desired. Some guides may just point you to a museum or destination as a whole; others may give detailed suggestions for what you should see within that destination. A cursory guide might say that you should visit Sintra; a detailed one might say that you really must visit the Castle of the Moors in Sintra, as well as other particular places in Sintra. A very detailed guide might direct you to particular things to see in the Castle of the Moors itself.

It should be clear from the above discussion that a place to visit should not be confused with an itinerary for that place. Any real place has an unlimited number of possible itineraries for it. An itinerary for a city may include a museum; an itinerary for a museum may include a painting; there may sometimes even be guides to a painting that direct the viewer to particular features of that painting. The guide to the painting is not the painting; the guide to the museum is not the museum; the guide to the city is not the city.

There might also be guides that do not propose particular itineraries, but list many places you might go, and you select yourself. In these cases, some kind of categorisation might be used to help you select the places of interest to you. What period of history do they come from? Are they busy or quiet? What do they cost? How long do they take to visit? Or a guide with itineraries may also categorise attractions, and make them explicitly optional. Optionality might be particularly helpful in guided tours, so that people can leave out things of less interest.

A set of guides covering several whole places, not just one, might make comparisons across the different places. If you liked the Cathar castles in the South of France, you may like the Castle of the Moors in Sintra. Those who like stately homes, on the other hand, may be given other suggestions.

A guide to a destination may also contain more than an itinerary of included destinations within it. A guidebook may give historical or cultural background information, which goes beyond the description of the destinations. Guides may also propose a visit sequence, which is not inherent in the destinations.

The features I have described above are reasonably replicated in discussion of competence. A guide or itinerary corresponds to a competence framework; a destination corresponds to a competence concept. This is largely intended to throw further light on what I discussed in number 12 in this series, Representing the interplay between competence definitions and structures.

Differences

One difference is that tourist destinations have independent existence in the physical world, whereas competence concepts do not. It may therefore be easier to understand what is being referred to in a guide book, from a short description, than in a competence framework. Both guide book and competence framework may rely on context. When a guide book says “the entrance”, you know it means the entrance to the location you are reading about, or may be visiting.

Physical embodiment brings clarity and constraints. Smaller places may be located within larger places, and this is relatively clear. But it is less clear whether lesser competence concepts are part of greater competence concepts. What one can say (and this carries through from the tourism analogy) is that concepts are included in frameworks (or not), and that any concept may be detailed by (any number of) frameworks.

Competence frameworks and concepts are more dependent on the words used in description, and because a description necessarily chooses particular words, it is easy to confuse the concept with the framework if they use the same words. It is easy to use the words of a descriptive framework to describe a concept. It is not so common, though perfectly possible, to use the description of an itinerary as a description of a place. It is because of this greater dependence on words (compared with tourist guides) that it may be more necessary to clarify the context of a competence concept definition, in order to understand what it actually means.

Where the analogy with competence breaks down more seriously is that high-stakes decisions rarely depend on exactly which places someone has visited. But at a stretch of the imagination, they could: recruitment for a relief tour guide could depend on having visited all of a given set of destinations, and being able to answer questions about them. What high stakes promote is the sense that a particular structure (as defined or adopted by the body controlling the high-stakes decisions) defines a particular competence concept. Despite that, I assert that the competence structure and the separate competence concept remain strictly separate kinds of thing.

Understanding the logic of competence through this analogy

The features of competence models that are illustrated here are these.

  • Competence frameworks or structures may include relevant competence concepts, as well as other material. (See № 12.)
  • Competence concept definitions may be detailed by a framework structure for that competence concept. Nevertheless the structure does not fully define the concept. (See № 12 and № 13.)
  • Competence frameworks may include optional competences (as well as necessary or mandatory ones). (See № 15 and № 7.)
  • Both frameworks and concepts may be categorised. (See also № 5.)
  • Frameworks may contain sub-frameworks (just as itineraries may contain sub-itineraries).
  • But frameworks don’t contain concepts in the same way: they just include them (or not).
  • A framework may be simply an unstructured list of defined concepts. (A rough sketch of this framework-concept duality follows this list.)
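As a rough illustration of the duality summarised above, here is a minimal Python sketch. The names are mine, purely for illustration, and are not any agreed InLOC structure: frameworks include concepts (possibly optional, possibly categorised), may contain sub-frameworks, and may detail a concept without fully defining it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CompetenceConcept:
    """An independently existing concept (a 'destination' in the tourism analogy)."""
    identifier: str
    title: str                                           # e.g. "can set out and establish crops"
    categories: List[str] = field(default_factory=list)  # concepts may be categorised

@dataclass
class Inclusion:
    """A framework includes a concept, possibly marking it as optional."""
    concept_id: str
    optional: bool = False

@dataclass
class CompetenceFramework:
    """A framework (an 'itinerary'): it includes concepts, may contain
    sub-frameworks, and may detail one concept without fully defining it."""
    identifier: str
    title: str
    includes: List[Inclusion] = field(default_factory=list)
    sub_frameworks: List["CompetenceFramework"] = field(default_factory=list)
    details_concept_id: Optional[str] = None              # the concept this framework details, if any
    categories: List[str] = field(default_factory=list)   # frameworks may be categorised too
```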

I hope that helps anyone to understand more of the logic of competence, and I hope that also helps InLOC colleagues come to consensus on the related matters.

More and less specificity in competence definitions

(19th in my logic of competence series.)

Descriptions of personal ability can serve either as claims, like “This is what I am good at …”, or as answers to questions like “What are you good at?” or “can you … ?” In conversations — whether informally, or formally as in a job interview — the claims, questions, and answers may be more or less specific. That is a necessary and natural feature of communication. It is the implications of this that I want to explore here, as they bear on my current work, in particular including the InLOC project.

This is a new theme in my logic of competence series. Since the previous post in that series, I had to focus on completing the eCOTOOL competence model and managing the initial phases of InLOC, which left little time for following up earlier thinking. But the ideas were clearly evident in my last post in this series (representing level relationships), and now is the time for follow-up and development. The terms introduced there can be linked to this new idea of specificity. Simply: binarily assessable concepts are ones that are defined specifically enough for a yes/no judgement about a person’s ability; rankably assessable concepts have an intermediate degree of specificity, and are complemented by level definitions; while unorderly assessable concepts are ones that are less specifically defined, requiring more specificity to be properly assessable. (See that previous post for explanation of those terms.) The least specific competence-related concepts are not properly assessable at all, but serve as tags or headings.
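Purely as an aide-mémoire, here is a tiny, illustrative Python sketch linking those terms to the specificity idea; the enum and the example classifications are my own, not part of any specification.

```python
from enum import Enum

class Assessability(Enum):
    BINARY = "binary"        # specific enough for a yes/no judgement of a person's ability
    RANKABLE = "rankable"    # intermediate specificity, complemented by level definitions
    UNORDERLY = "unorderly"  # less specifically defined; needs more specificity to assess
    TAG_ONLY = "tag"         # least specific: not properly assessable, serves as a tag or heading

# Illustrative classifications, echoing examples used in this series:
examples = {
    "can type at 120 words per minute from dictation": Assessability.BINARY,
    "can speak French": Assessability.RANKABLE,
    "understands the characteristics of earth materials": Assessability.UNORDERLY,
    "communication skills": Assessability.TAG_ONLY,
}
```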

As well as giving weight and depth to this idea of specificity in competence definitions, in this post I want to explore the connection between competence definitions and answering questions. I think this will help to explain the ideas, because it is relatively straightforward to understand that questions and answers can be more or less specific.

Since the previous post in the series, my terminology has shifted slightly. The goals of InLOC — Integrating Learning Outcomes and Competences — have made it plain that we need to deal equally with learning outcomes and with competence or ability concepts. So I include “learning outcomes” more liberally, always meaning intended learning outcomes.

Job interviews

Imagine you are interviewing someone for a job. To make it more interesting, let’s make it an informal one: perhaps a mutual business contact has introduced you to a promising person at a business event. Add a little pressure by imagining that you have just a few minutes to make up your mind whether you want to ask this person to go through a longer, formal process. How would you structure the interview, and what questions would you ask?

As I envisage the process, one would probably start off with quite general, less specific questions, and then go into more detail where appropriate, where it mattered. So, for instance, one might ask “are you a programmer?”, and if the answer was yes, go into more detail about languages, development environments, length of experience, type of experience, etc. etc. The useful detail in this case would depend entirely on the circumstances of the job. For a graduate to be recruited into a large company, what matters might be aptitude, as it would be likely that full training would be supplied (which you could perhaps see as a kind of technical “enculturation”). On the other hand, for a specialist to join a short-term high-stakes project, even small details might matter a lot, as learning time would probably be minimal.

In reality, most job interviews start, not from a blank sheet, but from the basis of a job advert, and an application form, or CV and covering letter. A job advert may specify requirements; an application form may contain specific questions for which answers are expected; but in the absence of an application form, a CV and covering letter need to try to answer, concisely, some of the key questions that would be asked first in an informal, unprepared job interview. This naturally explains the universal advice that CVs should be designed specifically for each job application. What you say about yourself unprompted not only reveals that information itself, but also says much about what you expect the other person to reckon as significant or interesting.

So, in the job interview, we notice the natural importance of varying specificity in descriptions and questions about abilities and experience.

Recruitment

This then carries over to the wider recruitment process. Potential employers often formulate a list of what is required of prospective employees, in terms of which abilities and experience are essential or desirable, but the detail and specificity of each item will naturally vary. The evidence for a less specific requirement may be assessed at interview with some quick general questions, but a more exacting requirement may want harder evidence such as a qualification, certificate or testimonial from an expert witness.

For example, in a regulated world such as pesticides, which I wrote about recently, an employer might well want a prospective employee to have obtained a relevant certificate or qualification, so that they can legally do their job. Even when a certificate is not a legal requirement, some are widely asked for. A prospective sales employee with a driving licence or an office employee with an ECDL might be preferred over one without, and it would be perfectly reasonable for an employer to insist that non-native speakers had obtained a given certified level of proficiency in the principal workplace language. In each case, because the certificate is awarded only to people who have passed a carefully controlled test, the test result serves to answer many quite specific questions about the holder’s abilities, as well as the potential legal fact of their being allowed to perform certain actions in regulated occupations.

Vocational qualifications often detail quite specifically what holders are able to do. This is clearly the intention of the Europass Certificate Supplement (ECS), and has been the intention in the UK, through the system of National Vocational Qualifications, which rely on National Occupational Standards. So we could expect that employers with specific learning outcome or competence requirements may specify that candidates should have particular vocational qualifications; but what about less specific requirements? My guess is that those employers who have little regard for vocational qualifications are just those whose requirements are less specific. Time was when many employers looked only for a “good degree”, which in the UK often meant a “2:1”, an upper second class. This was supposed to answer generic questions, as typically the specific subject of the degree was not specified. Now there is a growing emphasis on the detail of the degree transcript or Europass Diploma Supplement (EDS), from which a prospective employer can read at least assessment results, if not yet explicit details of learning outcomes or competences. There is also an increasing trend towards making explicit the intended learning outcomes of courses at all levels, so the course information might be more informative than the transcript or EDS.

Interestingly, the CVs of many technical workers contain highly unspecific lists of programming languages that the individual implicitly claims, stating nothing about the detailed abilities and experience. These lists answer only the most general questions, and serve effectively only to open a conversation about what the person’s actual experience and achievements have been in those programming languages. At least for human languages there is the increasingly used CEFR; there does not appear to be any such widely recognised framework for programming languages. Perhaps, in the case of programming languages, it would be clumsy and ineffective to give answers to more detailed questions, because the individual does not know what those detailed questions would be.

Specificity in frameworks

Frameworks seem to gravitate towards specificity. Given that some people want to know the answers to specific questions, this is quite reasonable; but where does that leave the expression of the less specific requirements? For examples of curriculum frameworks, there is probably nowhere better than the American Achievement Standards Network (ASN). Here, as in many other places, learning outcomes are defined only in one or two levels. The ASN transcribes documents faithfully, then among many other things marks the “indexing status” of the various components. For an arbitrary example, see Earth and Space Science, which is a topic heading and not “indexable”. The heading below just states what the topic is about, and is not “indexable”. It is below this that the content becomes “indexable”, with first some less specific statements about what should be achieved by the end of fourth grade, broken down into the smallest components such as Identify characteristics of soils, minerals, rocks, water, and the atmosphere. It looks like it is just the “indexable” resources that are intended to represent intended learning outcome definitions.

At fourth grade, this is clearly nothing to do with employment, but even so, identifying characteristics of soils etc. is something that students may or may not be able to do, and this is part of the less specifically defined (but still “indexable”) “understanding of the characteristics of earth materials”. It strikes me that the item about identifying characteristics would fit reasonably (in my scheme of the previous post) as a “rankably assessable” concept, and its parent item about understanding might be classified (in my scheme) as unorderly assessable.

How to represent varying specificity

Having pointed out some of the practical examples of varying specificity in definitions of learning outcome or competence, the important issue for work such as InLOC is to provide some way of representing, not only different levels of specificity, but also how they relate to one another.

An approach through considering questions and answers

Any concept that is related to learning outcomes or competence can provide the basis for questions of an individual. Some of these questions have yes/no answers; some invite answers on a scale; some invite a longer, less straightforward reply, or a short reply that invites further questions. A stated concept can be both the answer to a question, and the ground for further questions. So, to go back to some of the above examples, a CV might somewhere state “French” or “Java”. These might be answers to the questions “what languages have you studied?” or “what languages do you use?” They also invite further questions, such as “how well do you know …?”, or “how much have you used …, and in what contexts?”, or “how good are you at …?” – which, if there is an appropriate scale, could be reformulated as “what level is your ability in …?”

Questions could be found corresponding to the ASN examples as well. “Identify characteristics of soils, minerals, rocks, water, and the atmosphere” has the same format that allows “can you …?” or “I can …”. The less specific statement — “By the end of fourth grade, students will develop an understanding of the characteristics of earth materials,” — looks like it corresponds with questions more like “what do you understand about earth materials?”.

As well as “summative” questions, there are related questions that are used in other ways than assessment. “How confident are you of your ability in …?” and “is your ability in … adequate in your current situation?” both come to mind (stimulated by considerations in LUSID).

What I am suggesting here is that we can adapt some of the natural properties of questions and answers to fit definitions of competence and ability. So what properties do I have in mind? Here is a provisional and tentative list.

  • Questions can be classified as inviting one of four kinds of answer:
    1. yes or no;
    2. a value on a (predefined) scale;
    3. examples;
    4. an explanation that is more complex than a simple value.
  • These types of answer probably need little explanation – many examples can readily be imagined.
  • The same form of answer can relate to more than one question, but usually the answer will mean different things. To be fully and clearly understood, an answer should relate to just one question. Using the above example, “French” as the answer to “what languages have you studied?” means something substantially different from “French” as the answer to “what languages are you fluent in?”
  • A more specific question may imply answers to less specific questions. For example, “what programming languages have you used in software development?” implies answers such as “software development” to the question “what competences do you have in ICT?” Many such implied questions and answers can be formulated. What matters in a particular framework is the other answers in that particular framework that can be inferred.
  • An answer to a less specific question may invite further more specific questions.
    1. Conversely to the example just above, if the question “what competences do you have in ICT?” includes the answer “software development”, a good follow-up question might be “what programming languages have you used in software development?” Similar patterns could be seen for any technical specialty. Often, answers like this may be taken from a known list of options. There are only so many languages, both human and computer.
    2. Where an answer is a rankable concept, questions about the level of that ability are invited. For instance, the question “what foreign languages can you speak?”, answered with “French” and “Italian”, invites questions such as “what is your European Language Passport level of ability in spoken interaction in French?”
    3. Where an answer has been analysed into its component parts, questions about each component part make sense. For example, if the answer to “are you able to clear sites for tree planting?”, following the LANTRA Treework NOS (2009) was “yes”, that invites the narrower implied questions set out in that NOS, like “can you select appropriate clearance methods …?” or “do you understand the potential impacts of your work on the environment …?”
    4. Unless the question is fully specific, admitting only the answers yes and no (and often even then), it is nearly always possible to ask further questions, and give further answers. But everyone’s interest in detail stops sooner or later. The place to stop asking more specific questions is when the answer does not significantly affect the outcome you are looking for. And that varies between different interested parties.
  • Questions may be equivalent to other questions in other frameworks. This will come out from the answers given. If the answers given by the same person in the same context are always the same for two questions, they are effectively equivalent. It is genuinely helpful to know this, as it means that one can save time not repeating questions.
  • Answers to some questions may imply answers to other questions in different frameworks, without being equivalent. The answers may contain, or be contained by, their counterparts. This is another way of linking together questions from different frameworks, and saving asking unnecessary extra questions. (A rough sketch of how these properties might be captured follows this list.)
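One way of seeing how these properties might hang together is a minimal Python sketch of questions and answers. Again, the names are my own illustrative inventions, not a proposed InLOC structure: each question invites one of the four kinds of answer, can point to the more specific follow-up questions an answer invites, and can record equivalences with questions in other frameworks.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class AnswerKind(Enum):
    YES_NO = 1        # yes or no
    SCALE = 2         # a value on a (predefined) scale
    EXAMPLES = 3      # examples
    EXPLANATION = 4   # an explanation more complex than a simple value

@dataclass
class Question:
    text: str
    kind: AnswerKind
    follow_ups: List["Question"] = field(default_factory=list)  # more specific questions an answer may invite
    equivalent_to: List[str] = field(default_factory=list)      # questions elsewhere that always get the same answers

@dataclass
class Answer:
    question: Question   # to be fully understood, an answer should relate to just one question
    value: str

# The same word can answer two different questions, and mean something different each time:
studied = Question("what languages have you studied?", AnswerKind.EXAMPLES)
fluent = Question("what languages are you fluent in?", AnswerKind.EXAMPLES)
answers = [Answer(studied, "French"), Answer(fluent, "French")]
```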

That covers a view of how to represent varying specificity in questions and answers, but not yet frameworks as they are at present.

Back to frameworks as they are at present

At present, it is not common practice to set out frameworks of competence or ability in terms of questions and answers, but only in terms of the concepts themselves. But, to me, it helps understanding enormously to imagine the frameworks as frameworks of questions, and the learning outcome or competence concepts as potential answers. In practice, all you see in the frameworks is the answers to the implied questions.

Perhaps this has come about through a natural process of doing away with unnecessary detail. The overall question in occupational competence frameworks is, “are you competent to do this job?”, so it can go unstated, with the title of the job standing in for the question. The rest of the questions in the framework are just the detailed questions about the component parts of that competence (see Carroll and Boutall’s ideas of Functional Analysis in their Guide to Developing National Occupational Standards). The formulation with action verbs helps greatly in this approach. To take NOS examples from way back in the 3rd post in this series, the units themselves and the individual performance criteria share a similar structure. Less specifically, “set out and establish crops” relates both to the question “are you able to set out and establish crops” and the competence claim “I am able to set out and establish crops”. More specifically, “place equipment and materials in the correct location ready for use” can be prefixed with “are you able to …” for a question, or “I am able to …” as a claim. Where all the questions take a form that invites answers yes or no, one really does not need to represent the questions at all.
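Because the action-verb formulation is so regular, the implied question and the claim can both be generated mechanically from the statement itself; here is a trivial, purely illustrative Python sketch of that.

```python
def as_question(outcome: str) -> str:
    """Turn an action-verb outcome statement into the implied yes/no question."""
    return f"Are you able to {outcome.rstrip('.')}?"

def as_claim(outcome: str) -> str:
    """Turn the same statement into the corresponding first-person claim."""
    return f"I am able to {outcome.rstrip('.')}."

outcome = "place equipment and materials in the correct location ready for use"
print(as_question(outcome))  # Are you able to place equipment and materials in the correct location ready for use?
print(as_claim(outcome))     # I am able to place equipment and materials in the correct location ready for use.
```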

With a less uniform structure, one would need mentally to remove all the questions to get a recognisable framework; or conversely, to understand a framework in terms of questions, one needs to add in those implied questions. This is not as easy, and perhaps that is why I have been drawn to elaborating all those structuring relationships between concepts.

We are left in a place that is very close to where we were in the previous post. At simplest, we have the individual learning outcome or competence definitions (which are the answers) and the frameworks, which show how the answers connect up, without explicitly mentioning the questions themselves. The relations between the concepts can be factored out, and presented either together in the framework, or separately, alongside the concepts that they relate.
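To make the factoring idea concrete, here is a rough Python sketch; the identifiers and relation names are my own illustrative inventions, not InLOC’s. The relations sit in their own list, which can be presented together with the framework, or filtered and carried alongside whichever concept they relate.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Relation:
    subject_id: str
    rel_type: str                        # e.g. "narrower", "broader", "hasDefinedLevel" (illustrative)
    object_id: str
    level_number: Optional[int] = None   # only used for level relations on rankable concepts

# The relations, factored out as their own list, could be presented with the framework ...
relations: List[Relation] = [
    Relation("set-out-and-establish-crops", "narrower", "place-equipment-and-materials"),
    Relation("spoken-french", "hasDefinedLevel", "spoken-french-level-3", level_number=3),
]

# ... or carried separately, alongside whichever concept they relate:
def relations_for(concept_id: str, rels: List[Relation]) -> List[Relation]:
    return [r for r in rels if concept_id in (r.subject_id, r.object_id)]
```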

If the relationships are simply “broader” and “narrower”, things are pretty straightforward. But if we admit less specific concepts and questions, because the questions are not explicitly represented, the structure needs a more elaborate set of relationships. In particular, we have to make particular provision for rankable concepts and levels. I’ll leave detailing the structures we are left with for later.

Before that, I’d like to help towards better grasp of the ideas through the analogy with tourism.