Representing level relationships

(18th in my logic of competence series)

Having prepared the ground, I’m now going to address in more detail how levels of competence can best be represented, and the implications for the rest of representing competence structures. Levels can be represented similarly to other competence concept definitions, but they need different relationships.

I’ve written about how giving levels to competence reflects common usage, at least for competence concepts that are not entirely assessable, and that the labels commonly used for levels are not unique identifiers; about how defining levels of assessment fits into a competence structure; and lately about how defining levels is one approach to raising the assessability of competence concepts.

Later: shortly after first writing this, I put together the ideas on levels more coherently in a paper and a presentation for the COME-HR conference, Brussels.

Some new terms

Now, to take further this idea of raising assessability of concepts, it would be useful to define some new terms to do with assessability. It would be really good to know if anyone else has thought along this direction, and how their thoughts compare.

First, may we define a binarily assessable concept, or “binary” for short, as a concept typically formulated as something that a person either has or does not have, and where there is substantial agreement between assessors over whether any particular person actually has or does not have it. My understanding is that the majority of concepts used in NOSs are intended to be of this type.

Second, may we define a rankably assessable concept, or “rankable” for short, as a concept typically formulated as something a person may have to varying degrees, and where there is substantial agreement between assessors over whether two people have a similar amount of it, or who has more. IQ might be a rather old-fashioned and out-of-favour example of this. Speed and accuracy of performing given tasks would be another very common example (and widely used in TV shows), though that would be more applicable to simpler skills than occupational competence. Sports have many scales of this kind. On the occupational front, a rankable might be a concept where “better” means “more additional abilities added on”, while still remaining the same basic concept. Many complex tasks have a competence scale, where people start off knowing about it and being able to follow someone doing it, then perform the tasks in safe environments under supervision, working towards independent ability and mastery. In effect, what is happening here is that additional abilities are being added to the core of necessary understanding.

Last, may we define an unorderably assessable concept, or “unordered” for short, as any concept that is not binary or rankable, but still assessable. For it to remain assessable despite possible disagreement about who is better, there at least has to be substantial agreement between assessors about the evidence which would be relevant to an assessment of the ability of a person in this area. In these cases, assessors would tend to agree about each other’s judgements, though they might not come up with the same points. Multi-faceted abilities would be good examples: take management competence. I don’t think there is just one single accepted scale of managerial ability, as different managers are better or worse at different aspects of management. Communication skills (with no detailed definition of what is meant) might be another good example. Any vague competence-related concept that is reasonably meaningful and coherent might fall into this category. But it would probably not include concepts such as “nice person”, where people would disagree even about what evidence would count in its support.

Defining level relationships

If you allow these new terms, definitions of level relationships can be more clearly expressed. The clearest and most obvious scenario is that levels can be defined as binaries related to rankables. Using an example from my previous post, success as a pop songwriter based on song sales/downloads is rankable, and we could define levels of success in that in terms of particular sales, hits in the top ten, etc. You could name the levels as you liked — for instance, “beginner songwriter”, “one hit songwriter”, “established songwriter”, “successful songwriter”, “top flight songwriter”. You would write the criteria for each level, and those criteria would be binary, allowing you to judge clearly which category would be attributed to a given songwriter. Of course, to recall, the inner logic of levels is that higher levels encompass lower levels. We could give the number 1 to beginner, up to number 5 for top flight.

To start formalising this, we would need an identifier for the “pop songwriter” ability, and then to create identifiers for each defined level. Part of a pop songwriter competence framework could be the definitions, along with their identifiers, and then a representation of the level relationships. Each level relationship, as defined in the framework, would have the unlevelled ability identifier, the level identifier, the level number and the level label.

If we were to make an information model of a level definition/relationship as an independent entity, this would mean that it would include:

  • the fact that this is a level relationship;
  • the levelled, binary concept ID;
  • the framework ID;
  • the level number;
  • the unlevelled, rankable concept ID;
  • the level label.

If this is represented within a framework, the link to the containing framework is implicit, so might not show clearly. But the need for this should be clear if a level structure is represented separately.
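As a rough sketch, the information model above could be rendered as a small data class. The field names here are my own invention for illustration, not drawn from any published specification; the “fact that this is a level relationship” is carried by the class itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LevelRelationship:
    """One level-defining relationship within a competence framework.

    Mirrors the bullet list above; the unlevelled concept and the label
    are optional, as discussed for generic frameworks below.
    """
    framework_id: str                  # URI of the containing framework
    levelled_concept_id: str           # URI of the levelled, binary concept
    level_number: int                  # carries the logic of ordering
    unlevelled_concept_id: Optional[str] = None  # URI of the rankable concept; absent in generic frameworks
    level_label: Optional[str] = None  # falls back to the number if absent

    @property
    def label(self) -> str:
        return self.level_label or str(self.level_number)

# Example: level 5 of a hypothetical pop songwriter framework
rel = LevelRelationship(
    framework_id="http://example.org/frameworks/pop-songwriter",
    levelled_concept_id="http://example.org/concepts/top-flight-songwriter",
    level_number=5,
    unlevelled_concept_id="http://example.org/concepts/pop-songwriter",
    level_label="top flight songwriter",
)
```

Making the last two fields optional anticipates the generic level frameworks discussed next, where no specific unlevelled concept exists.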

As well as defining levels for a particular area like songwriting, it is possible similarly (as many actual level frameworks do) to define a set of generic levels that can apply to a range of rankable, or even unordered, concepts. This seems to me to be a good way of understanding what frameworks like the EQF do. Because there is no specific unlevelled concept in such a framework, we have to make inclusion of the unlevelled concept within the information model optional. The other thing that is optional is the level label. Many levels have labels as well as numbers, but not all. The number, though frequently left out of published level frameworks, is essential if the logic of ordering is to be present.

Level attribution

A point that has been growing into a conviction for me is that relationships for level attribution and level definition need to be treated separately. In this context, the word “attribution” suggests that a level is an attribute, either of a competence concept or of a person. It feels quite close to other sorts of categorisation.

Representing the attribution of levels is pretty straightforward. Whether levels are educational, professional, or developmental, they can be attributed to competence concepts, to individual claims and to requirements. Such an attribution can be expressed using the identifier of the competence concept, a relationship meaning “… is attributed the level …”, and an identifier for the level.

If we say that a certain well-defined and binarily assessable ability is at, say, EQF competence level 3, it is an aid to cross-referencing; an aid to locating that ability in comparison with other abilities that may be at the same or different levels.

A level can be attributed to:

  • a separate competence concept definition;
  • an ability item claimed by an individual;
  • an ability item required in a job specification;
  • a separate intended learning outcome for a course or course unit;
  • a whole course unit;
  • a whole qualification, though care needs to be exercised here, as many qualifications have components at mixed levels.

An assessment can result in the assessor or awarding body attributing an ability level to an individual in a particular area. This means that, in their judgement, that individual’s ability in the area is well described by the level descriptors.
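The attribution relationship described above is essentially a subject-predicate-object triple, in the spirit of RDF. A minimal sketch, with all URIs invented for illustration (the predicate name is my own, not from any standard vocabulary):

```python
# Hypothetical predicate URI meaning "... is attributed the level ..."
ATTRIBUTED_LEVEL = "http://example.org/rel/isAttributedLevel"

def attribute_level(subject_uri: str, level_uri: str) -> tuple:
    """Attribute a level to a competence concept, a claim, a requirement,
    a learning outcome, a course unit, or (with care) a qualification."""
    return (subject_uri, ATTRIBUTED_LEVEL, level_uri)

# e.g. saying a well-defined ability is at EQF level 3
# (both URIs are placeholders, not official identifiers):
triple = attribute_level(
    "http://example.org/concepts/some-well-defined-ability",
    "http://example.org/eqf/level-3",
)
```

The same shape serves for attributing a level to an individual after assessment, with the individual’s identifier as the subject.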

Combining generic levels with areas of skill or competence

Let’s look more closely at combining generic levels with general areas of skill or competence, in such a way that the combination is more assessable. A good example of this is associated with the Europass Language Passport (ELP) that I mentioned in post 4. The Council of Europe’s “Common European Framework of Reference for Languages” (CEFRL), embodied in the ELP, makes little sense without the addition of specific languages in which proficiency is assessed. Thus, the CEFRL’s “common reference levels” are not binarily assessable, just as “able to speak French” is not. The reference levels are designed to be independent of any particular language.

Thus, to represent a claim or a requirement for language proficiency, one needs both a language identifier and an identifier for the level. It would be very easy in practice to construct a URI identifier for each combination. The exact method of construction would need to be widely agreed, but as an example, we could define a URI for the CEFRL — e.g. — and then binary concept URIs expressing levels could be constructed something like this:

where “language” is replaced by the appropriate IETF language tag; “mode” is replaced by one of “listening”, “reading”, “spoken_interaction”, “spoken_production” or “writing” (or agreed equivalents, possibly in other languages); “level” is replaced by one of “basic_user”, “independent_user”, “proficient_user”, “A1”, “A2”, “B1”, “B2”, “C1”, “C2”; and “number” is replaced by, say, 10, 20, 30, 40, 50 or 60, corresponding to A1 through to C2. (These numbers are not part of the CEFRL, but are needed for the formalisation proposed here.) A web service would be arranged so that putting the URI into a browser (making an HTTP request) would return a page with a description of the level and the language, plus other appropriate machine-readable metadata, including links to components that are not binarily assessable in themselves. “Italian reading B1” could be a short description, generated by formula rather than separately, and a long description could also be generated automatically, combining the descriptions of the language, reading, and the level criteria.
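A minimal sketch of this construction, assuming one possible path pattern. The base URI and the ordering of path components are my own placeholders, since, as the text says, the exact method of construction would need to be widely agreed:

```python
# Placeholder base URI, not a real CEFRL identifier
BASE = "http://example.org/cefrl"

MODES = {"listening", "reading", "spoken_interaction",
         "spoken_production", "writing"}
# The numbers are not part of the CEFRL; they carry the ordering
LEVEL_NUMBERS = {"A1": 10, "A2": 20, "B1": 30, "B2": 40, "C1": 50, "C2": 60}

def combination_uri(language: str, mode: str, level: str) -> str:
    """Construct a URI for one binarily assessable combination of
    language (IETF tag), mode, and CEFRL level."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    number = LEVEL_NUMBERS[level]
    return f"{BASE}/{language}/{mode}/{level}/{number}"

def short_description(language_name: str, mode: str, level: str) -> str:
    """Generate a short description by formula, e.g. 'Italian reading B1'."""
    return f"{language_name} {mode.replace('_', ' ')} {level}"

uri = combination_uri("it", "reading", "B1")
desc = short_description("Italian", "reading", "B1")
```

The point of the formulaic construction is that no new information is stored per combination: everything is derived from the language tag, the mode, and the level.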

In principle, a similar approach could be taken for any other level system. The defining authority would define URIs for all separate binarily assessable abilities, and publish a full structure expressing how each one relates to each other one. Short descriptions of the combinations could simply combine the titles or short descriptions from each component. No new information is needed to combine a generic level with a specific area. With a new URI to represent the combination, a request for information about that combination can return information already available elsewhere about the generic level and the specific area. If a new URI for the combination is not defined, it is not possible to represent the combination formally. What one can do instead is to note a claim or a requirement for the generic level, and give the particular area in the description. This seems like a reasonable fall-back position.

Relating levels to optionality

Optionality was one of the less obvious features discussed previously, as it does not occur in NOSs. It’s informative to consider how optionality relates to levels.

I’m not certain about this, but I think we would want to say that if a definition has optional parts, it is not likely to be binarily assessable, and that levelled concepts are normally binarily assessable. A definition with optional parts is more likely to be rankable than binary, and it could even fail to be rankably assessable, being merely unordered instead. So, on the whole, defining levels should reduce, and ideally eliminate, optionality: levelled concepts should have no optionality, or at least less than the “parent” unlevelled concept.


So in conclusion here are my proposals for representing levels, as level-defining relations.

  1. Use of levels: Use levels as one way of relating binarily assessable concepts to rankable ones.
  2. The framework: Define a set of related levels together in a coherent framework. Give this framework a URI identifier of its own. The framework may or may not include definitions of the related unlevelled and levelled concepts.
  3. The unlevelled concept: Where the levels being defined are levels of some more general concept, ensure that unlevelled concept has a URI of its own. In a generic framework, it may not be present.
  4. The levels: Represent each level as a competence concept in its own right, complete with short and long descriptions, and a URI as identifier.
  5. Level numbering: Give each level a number, such that higher levels have higher numbers. Sometimes consecutive numbers from 0 or 1 will work, but if you think judgements of personal ability may lie in between the levels you define, you may want to choose numbers that make good sense to people who will use the levels.
  6. Level labels: If you are trying to represent levels where labels already exist in common usage, record these labels as part of the structured definition of the appropriate level. Sometimes these labels may look numeric, but (as with UK degree classes) the numbers may be the wrong way round, so they really are labels, not level numbers. Labels are optional: if a separate label is not defined, the level number is used as the label.
  7. The level relationships: These should be represented explicitly as part of the framework, either separately or within a hierarchical structure.

Representing level definitions allows me to add to the diagram that last appeared at the bottom of post 14, showing what should be there to represent levels. The diagram includes defining level relationships, but not yet attributing levels (which is more like other kinds of categorisation).

information model diagram including levels

Later, I’ll go back to the overall concept map to see how the ideas that I’ve been developing in recent months fit in to the whole, and change the picture somewhat. But first, some long-delayed extra thoughts on specificity, questions and answers related to competence.

The logic of competence assessability

(17th in my logic of competence series)

The discussion of NOS in the previous post clearly raised the question of assessability. Actually, assessment has been on the agenda right from the start of this series: claims and requirements are for someone “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.

“Assessability” isn’t a common word. I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn’t necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.

The contrast between outcomes and behaviours / procedures

One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.

How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written in a related standard. And, moreover, I think that this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “means” and “ends” that crops up as a general philosophical topic. To illustrate this, I’ll try giving two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.

You could write outcome standards for a pop songwriter in terms of the song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is perhaps more clear is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that give e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators e.g. in the Guardian are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”

The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)

Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don’t seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)

However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.

What about educational parallels for these, as most of school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.

Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics.

  1. Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts that are not standardised. What may be specified are the incidental constraints, or what to avoid.
  2. Assessment on procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to have little variability in component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, captains of ships.

Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone’s ends may be another’s means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.

Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn’t. It’s all too clear when a passenger aeroplane crashes.

But here’s a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved” I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kind of competences?

The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to representing. But maybe it’s no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. Under that reading, it would make little sense for a NOS to be subject to different levels, as it would imply that the level of competence for a particular role is unknown — and in that case it wouldn’t be a standard.

Assessing less assessable concepts

Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning) like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact) or in terms of a set of abilities that you consider make up the particular style of management you are interested in.

These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and what kinds of skills and competences we have practised, which might be useful in work or other situations. It looks to me as though they can be made more assessable in one of a few ways.

  1. Detailing assessable component parts of the concept, in the manner of NOSs.
  2. Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
  3. Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
  4. Using a generic level framework to supply assessable criteria to add to the concept.

Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don’t have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.

[added 2011-09-04] Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.

Assessability and values in everyday life

Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with and use them, but what about for other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time interval and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But, what counts as mistakes depends on one’s values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably, others learn more personal and professional integrity. If someone’s values agree with mine, I am more likely to find them pleasant.

There’s a long discussion here, which I won’t go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client’s values are, and to go along with those to the extent that they are compatible with one’s own values. Where they aren’t, a skill in turning away work needs to be exercised in order to achieve personal integrity.


It’s all clearly a complex topic, more complex indeed than I had reckoned back last November. But I’d like to summarise what I take forward from this consideration of assessability.

  1. Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
  2. Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
  3. Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to prescribed pattern, but may not imply outcomes (though constraints may be able to be formulated as avoidance outcomes).
  4. In real life we want to allow realistic competence structures with any of these features.

In the next post, I’ll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will leave us also clearer about how to represent the less precise, non-assessable concepts.

The logic of National Occupational Standards

(16th in my logic of competence series)

I’ve mentioned NOSs (UK National Occupational Standards) many times in earlier posts in this series (3, 5, 6, 8, 9, 12, 14), but last week I was fortunate to visit a real SSC — LANTRA — talk to some very friendly and helpful people there and elsewhere, and reflect further on the logic of NOSs.

One thing that became clear is that NOSs have specific uses, not exactly the same as some of the other competence-related concepts I’ve been writing about. Following this up, on the UKCES website I soon found the very helpful “Guide to Developing National Occupational Standards” (pdf) by Geoff Carroll and Trevor Boutall, written quite recently: March 2010. For brevity, I’ll refer to this as “the NOS Guide”.

The NOS Guide

I won’t review the whole NOS Guide, beyond saying that it is an invaluable guide to current thinking and practice around NOSs. But I will pick out a few things that are relevant: to my discussion of the logic of competence; to how to represent the particular features of NOS structures; and towards how we represent the kinds of competence-related structures that are not part of the NOS world.

The NOS Guide distinguishes occupational competence and skill. Its definitions aren’t watertight, but generally they are in keeping with the idea that a skill is something that is independent of its context, not necessarily in itself valuable, whereas an occupational competence in a “work function” involves applying skills (and knowledge). Occupational competence is “what it means to be competent in a work role” (page 7), and this seems close enough to my formulation “the ability to do what is required”, and with the corresponding EQF definitions. But this doesn’t help greatly in drawing a clear line between the two. What is considered a work function might depend not only on the particularities of the job itself, but also on the detail in which it has been analysed for defining a particular job role. In the end, while the distinction makes some sense, the dividing line still looks fairly arbitrary, which justifies my support for not making a distinction in representation. This seems confirmed also by the fact that, later, when the NOS Guide discusses Functional Analysis (more of which below), the competence/skill distinction is barely mentioned.

The NOS Guide advocates a common language for representing skill or occupational competence at any granularity, ideally involving one brief sentence, containing:

  1. at least one action verb;
  2. at least one object for the verb;
  3. optionally, an indication of context or conditions.

Some people (including M. David Merrill, and following him, Lester Gilbert) advocate detailed vocabularies for the component parts of this sentence. While one may doubt the practicality of ever compiling complete general vocabularies, perhaps we ought to allow at least for the possibility of representing verbs, objects and conditions distinctly, for any particular domain, represented in a domain ontology. If it were possible, this would help with:

  • ensuring consistency and comprehensibility;
  • search and cross-referencing;
  • revision.

But it makes sense not to make these structures mandatory, as most likely there are too many edge cases.
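To make the optionality concrete, a statement could be stored in its plain one-sentence form, with the structured breakdown added only where it exists. A minimal sketch, with field names of my own choosing (not from the NOS Guide), using a performance requirement quoted earlier as the example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AbilityStatement:
    """A one-sentence ability statement in the NOS Guide's pattern:
    action verb(s), object(s), and optional context or conditions.
    The structured parts are deliberately optional, since edge cases
    may not fit the pattern."""
    text: str                                      # the plain sentence
    verbs: List[str] = field(default_factory=list)
    objects: List[str] = field(default_factory=list)
    conditions: Optional[str] = None               # context, if any

stmt = AbilityStatement(
    text="Maintain the separation of aircraft on or near the ground",
    verbs=["maintain"],
    objects=["the separation of aircraft"],
    conditions="on or near the ground",
)
```

A statement with only `text` filled in remains valid, which is the point: the structure helps with consistency, search and revision where it is present, without being mandatory.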

The whole of Section 2 of the NOS Guide is devoted to what the authors refer to as “Functional Analysis”. This involves identifying a “Key Purpose”, the “Main Functions” that need to happen to achieve the Key Purpose, and subordinate to those, the possible NOSs that set out what needs to happen to achieve each main function. (What is referred to in the NOS Guide as “a NOS” has also previously been called a “Unit”, and for clarity I’ll refer to them as “NOS units”.) Each NOS unit in turn contains performance criteria, and necessary supporting “knowledge and understanding”. However, these layers are not rigid. Sometimes, a wide-reaching purpose may be analysed by more than one layer of functions, and sometimes a NOS unit is divided into elements.

It makes sense not to attempt to make absolute distinctions between the different layers. (See also my post #14.) For the purposes of representation, this implies that each competence concept definition is represented in the same way, whichever layer it might be seen as belonging to; layers are related through “broader” and “narrower” relationships between the competence concepts, but different bodies may distinguish different layers. In eCOTOOL particularly, I’ve come to call competence concept definitions, in any layer, “ability items” for short, and I’ll use this terminology from here.

One particularly interesting section of the NOS Guide is its Section 2.9, where attention turns to the identification of NOS units themselves, as the component parts of the Main Functions. In view of the authority of this document, it is highly worthwhile studying what the Guide says about the nature of NOS units. Section 2.9 directly tackles the question of what size a NOS should be. Four relevant points are made, of which I’ll distinguish just two.

First, there is what we could call the criterion of individual activity. The Guide says: “NOS apply to the work of individuals. Each NOS should be written in such a way that it can be performed by an individual staff member.” I look at this both ways for complementary views. When two aspects of a role may reasonably and justifiably be performed separately by separate individuals, there should be separate NOS units. Conversely, when two aspects of a role are practically always performed by the same person, they naturally belong within the same NOS unit.

Second, I’ve put together manageability and distinctness. The Guide says that, if too large, the “size of the resulting NOS … could result in a document that is quite large and probably not well received by the employers or staff members who will be using them”, and also that it matters “whether or not things are seen as distinct activities which involve different skills and knowledge sets.” These seem to me both to be to do with fitting the size of the NOS unit to human expectations and requirements. In the end, however, the size of NOS units is a matter of good practice, not formal constraint.

Section 3 of the NOS Guide deals with using existing NOS units, and given the good sense of reuse, it seems right to discuss this before detailing how to create your own. The relationship between the standards one is creating and existing NOS units could well be represented formally. Existing NOS units from other bodies may be

  • “imported” as is, with the permission of the originating body
  • “tailored”, that is modified slightly to suit the new context, but without any substantive change in what is covered (again, with permission)
  • used as the basis of a new NOS unit.

In the first two cases, the unit title remains the same; but in the third case, where the content changes, the unit title should change as well. Interestingly, there seems to be no formal way of stating that a new NOS unit is based on an existing one, but changed too much to be counted as “tailored”.

Section 4, on creating your own NOSs, is useful particularly from the point of view of formalising NOS structures. The “mandatory NOS components” are set out as:

  1. Unique Reference Number
  2. Title
  3. Overview
  4. Performance Criteria
  5. Knowledge and Understanding
  6. Technical Data

and I’ll briefly go over each of these here.

It would be so easy, in principle, to recast a Unique Reference Number as a URI! However, the UKCES has not yet mandated this, and no SSC seems to have taken it up either. (I’m hoping to persuade some.) If a URI was also given to the broader items (e.g. key purposes and main functions) then the road would be open to a “linked data” approach to representing the relationships between structural components.

Title is standard Dublin Core, while Overview maps reasonably to dcterms:description.

Performance criteria may be seen as the finest granularity ability items represented in a NOS, and are strictly parts of NOS units. They have the same short sentence structure as both NOS units and broader functions and purposes. In principle, each performance criterion could also have its own URI. A performance criterion could then be treated like other ability items, and further analysed, explained or described elsewhere. An issue for NOSs is that performance criteria are not identified separately, and therefore there is no way within a NOS structure to indicate similarity or overlap between performance criteria appearing in different NOS units, whether or not the wording is the same. On the other hand, if NOS structures could give URIs to the performance criteria, they could be reused, for example to suggest that evidence within one NOS unit would also provide useful evidence within a different NOS unit.

Performance criteria within NOS units need to be valid across a sector. Thus they must not embody methods, etc., that are fine for one typical employer but wrong for another. They must also be practically assessable. These are reasons for avoiding evaluative adverbs, like the Guide’s example “promptly”, which may be evaluated differently in different contexts. If there are going to be contextual differences, they need to be more clearly signalled by referring e.g. to written guidance that forms part of the knowledge required.

Knowledge and understanding are clearly different from performance criteria. Items of knowledge are set out like performance criteria, but separately in their own section within a NOS unit. As hinted just above, including explicit knowledge means that a generalised performance criterion can often work, in places where there would otherwise be no common approach to assessment, provided the context-dependent knowledge is factored out.

In principle, knowledge can be assessed, but the methods of assessment differ from those of performance criteria. Action verbs such as “state”, “recall”, “explain”, “choose” (on the basis of knowledge) might be introduced, but perhaps are not absolutely essential, in that a knowledge item may be assessed on the basis of various behaviours. Knowledge is then treated (by eCOTOOL and others) as another kind of ability item, alongside performance criteria. The different kinds of ability item may be distinguished — for example following the EQF, as knowledge, skills, and competence — but there are several possible categorisations.

The NOS Guide gives the following technical data as mandatory:

  1. the name of the standards-setting organisation
  2. the version number
  3. the date of approval of the current version
  4. the planned date of future review
  5. the validity of the NOS: “current”; “under revision”; “legacy”
  6. the status of the NOS: “original”; “imported”; “tailored”
  7. where the status is imported or tailored, the name of the originating organisation and the Unique Reference Number of the original NOS.

These could very easily be incorporated into a metadata schema. For imported and tailored NOS units, a way of referring to the original could be specified, so that web-based tools could immediately jump to the original for comparison. The NOS Guide goes on to give more optional parts, each of which could be included in a metadata schema as optional.
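As a sketch of how easily this mandatory technical data could become a metadata schema, here is one possible shape in Python. The class name, field names, and the example organisation are my own assumptions; the controlled values and the imported/tailored constraint are taken from the Guide’s list above.

```python
from dataclasses import dataclass
from typing import Optional

VALIDITY = {"current", "under revision", "legacy"}
STATUS = {"original", "imported", "tailored"}

@dataclass
class NosTechnicalData:
    """Mandatory NOS technical data; field names are illustrative only."""
    organisation: str    # name of the standards-setting organisation
    version: str         # version number
    approved: str        # date of approval of the current version
    review_due: str      # planned date of future review
    validity: str        # one of VALIDITY
    status: str          # one of STATUS
    original_org: Optional[str] = None  # required unless status is "original"
    original_urn: Optional[str] = None  # Unique Reference Number of original

    def __post_init__(self):
        if self.validity not in VALIDITY:
            raise ValueError(f"invalid validity: {self.validity}")
        if self.status not in STATUS:
            raise ValueError(f"invalid status: {self.status}")
        if self.status != "original" and not (self.original_org and self.original_urn):
            raise ValueError("imported/tailored NOS must reference the original")

nos = NosTechnicalData(organisation="Example SSC", version="1.0",
                       approved="2011-03-01", review_due="2014-03-01",
                       validity="current", status="original")
```

The last constraint is exactly point 7 above: a tool holding this metadata could jump straight to the original unit for comparison.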

Issues emerging from the NOS Guide

One of the things that is stressed in the NOS Guide (e.g. page 32) is that the Functional Analysis should result in components (main functions, at least) that are both necessary and sufficient. That’s quite a demand — is it realistic, or could it be characterised as reductionist?


The issue of optionality has been covered in the previous post in this series. Clearly, if NOS structures are to be necessary and sufficient, logically there can be no optionality. It seems that, practically, the NOS approach avoids optionality in two complementary ways. Some options are personal ways of doing things, at levels more finely grained than NOS units. Explicitly, NOS units should be written to be inclusive of the diversity of practice: they should not prescribe particular behaviours that represent only some people’s ways of doing things. Other options involve broader granularity than the NOS unit. The NOS Guide implies this in the discussion of tailoring. It may be that one body wants to create a NOS unit that is similar to an existing one. But if the “demand” of the new version NOS unit is not the same as the original, it is a new NOS unit, not a tailored version of the original one.

The NOS Guide does not offer any way of formally documenting the relationship between variant ways of achieving the same aim, or function (other than, perhaps, simple reference). This may lead to some inefficiencies down the line, when people recognise that achieving one NOS unit is really good evidence for reaching the standard of a related NOS unit, but there is no general and automatic way of documenting that or taking it into account. We should, I suggest, be aiming at an overall structure, and strategy, that documents as much relationship as we can reliably represent. This suggests allowing for optionality in an overall scheme, but leaving it out for NOSs.

Levels and assessability

The other big issue is levels. The very idea of level is somehow anathema to the NOS view. A person either has achieved a NOS, and is competent in the area, or has not yet achieved that NOS. There is no provision for grades of achievement. Compare this with the whole of the academic world, where people almost always give marks and grades, comparing and ranking people’s performance. The vocational world does have levels — think of the EQF levels, that are intended for the vocational as well as the academic world — but often in the vocational world a higher level is seen as the addition of other separate skills or occupational competences, not as improving levels of the same ones.

A related idea came to me while writing this post. NOSs rightly and properly emphasise the need to be assessable — to have an effective standard, you must be able to tell if someone has reached the standard or not — though the assessment method doesn’t have to be specified in advance. But there are many vaguer competence-related concepts. Take “communication skills” as a common example. It is impossible to assess whether someone has communication skills in general, without giving a specification of just what skills are meant. Every wakeful person has some ability to communicate! But we frequently see cases where that kind of unassessably vague concept is used as a heading around which to gather evidence. It does make sense to ask a person about evidence for their “communication skills”, or to describe them, and then perhaps to assess whether these are adequate for a particular job or role.

But then, thinking about it, there is a correspondence here. A concept that is too vague to assess is just the kind of concept for which one might define (assessable) levels. And if a concept has various levels, it follows that whether a person has the (unlevelled) concept cannot be assessed in the binary way of “competent” and “not yet competent”. This explains why the NOS approach does not have levels, as levels would imply a concept that cannot be assessed in the required binary way. Rather than call unlevelled concepts “vague”, we could just call them something like “not properly assessable”, implying the need to add extra detail before the concept becomes assessable. That extra detail could be a whole level scheme, or simply a specification of a single-level standard (i.e. one that is simply reached or not yet reached).

In conclusion, I cannot see a problem with specifying a representation for skill and competence structures that includes non-assessable concepts, along with levels as one way of detailing them. The “profile” for NOS use can still explicitly exclude them, if that is the preferred way forward.

Update 2011-08-22 and later

After talking further with Geoff Carroll I’ve clarified above that NOSs are to do specifically with occupational competence rather than, e.g. learning competence. And having been pushed into this particular can of worms, I’d better say more about assessability to get a clear run up to levels.

Optional parts of competence

(15th in my logic of competence series)

Discussion suggests that it is important to lay out the argument in favour of optionality in competence structures. I touched on this in post 7 of this series, talking about component parts, and styles or variants. But here I want to challenge the “purist” view that competence structures should always be definite, and never optional.

Leaving aside the inevitable uncertainty at the edges of ability, what a certain person can do is at least reasonably definite at any one time. If you say that a person can do “A or B or C”, it is probably because you have not found out which of them they can actually do. A personal claim explicitly composed of options would seem rather strange. People do, however, claim broader abilities that could be fulfilled in diverse narrower ways, without always specifying which way they do it.

Requirements, on the other hand, involve optionality much more centrally. Because people vary enormously in the detail of their abilities, it is not hard to fall into the trap of over-specifying job requirements, to the point where no candidate fits the exact requirements (unless you have, unfairly, carefully tailored the requirements to fit one particular person who you want to take on). So, in practice, many detailed job requirements include reasonable alternatives. If there are three different reasonable ways to build a brick wall, I am unlikely to care too much which a builder uses. What I care about is whether the end product is of good quality. There are not only different techniques in different trades and crafts, but also different approaches in professional life, to management, or even competences like surgery.

When we call something an ability or competence, sometimes we ignore the fact that different people have their own different styles of doing it. If you look in sufficiently fine detail, it could be that most abilities differ between different people. Certainly, the nature of complex tasks and problems means that, if people are not channelled strongly into using a particular approach, each person’s exploration, and the different past experiences they bring, tends to lead to idiosyncrasies in the approach that they develop. Perhaps because people are particularly complex, managing people is one area that tends to be done in different ways, whether in the workplace or the school classroom. There are very many other examples of complex tasks done in (sometimes subtly) different ways by different people.

Another aspect of optionality is apparent in recruitment practice. Commonly, some abilities are classified as “essential”, others are “desirable”. The desirable ones are regarded as optional in terms of the judgement of whether someone is a good candidate for a particular role. Is that so different from “competence”? One of the basic foundations I proposed for the logic of competence is the formulation of job requirements.

Now, you may still argue that does not mean that the definitions of particular abilities necessarily involve optionality. And it is true that it is possible to avoid the appearance of optionality, by never naming an ability together with its optional constituents. But perhaps the question here should not be one of whether such purity is possible, but instead what is the approach that most naturally represents how people think, while not straying from other important insights. Purity is often reached only at the cost of practicality and understandability.

What does seem clear is that if we allow optionality in structural relationships for competence definitions, we can represent with the same structure personal claims without options, requirements that have options, and teaching, learning, assessment and qualification structures, which very often have options. If, on the other hand, we were to choose a purist approach that disallows optionality for abilities as pure concepts, we would still have to introduce an extra optionality mechanism for requirements, for teaching structures, for learning structures, for assessment structures, and for qualification structures, all of which are closely related to competence. Having the possibility of optionality does not mean that optionality is required, so an approach that allows optionality will also cover non-optional personal abilities and claims. So why have two mechanisms when one will serve perfectly well?

To have optionality in competence structures, we have to have two kinds of “part of” relationship, or if you prefer, two kinds of both broader and narrower. Here I’ll call them “necessary part of” and “optional part of”, because to me, “necessary” sounds more generally applicable and natural than “mandatory” or “compulsory”.

If you have a wider ability, and if in some authority’s definition that wider ability has necessary narrower parts, then (according to that authority) you must have all the necessary parts. It doesn’t always work the other way round. If you have all the necessary parts, there may be more optional parts that are needed; or indeed there may be some aspects of the broader ability that are not easy to represent as distinct narrower parts at all.
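The one-way implication can be sketched in a few lines of Python. The structure representation and names here are assumptions of mine, purely for illustration: tools could check for missing necessary parts, but an empty result deliberately proves nothing about the wider ability.

```python
# Hypothetical structure: for each broader ability id, its two kinds of
# narrower parts ("necessary part of" and "optional part of").
structure = {
    "wider": {"necessary": {"p1", "p2"}, "optional": {"o1", "o2"}},
}

def missing_necessary(ability_id, claimed, structure):
    """Return the necessary narrower parts of `ability_id` absent from
    `claimed`. An empty result does NOT imply the person has the wider
    ability: optional parts, or aspects not represented as distinct
    narrower parts at all, may still be needed."""
    parts = structure.get(ability_id, {})
    return parts.get("necessary", set()) - set(claimed)
```

So a tool can flag `missing_necessary("wider", {"p1"}, structure)` as a definite gap, while staying agnostic about the reverse direction.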

Obviously, the implications between a wider ability and its optional parts are less definite. The way that optional parts contribute towards the whole broader ability is something that may best be described in the description of the broader ability. OK, you may be tempted to write down some kind of formula, but do you really need to write a machine-readable formula, when it may not be so clear in reality what the optional parts are? My guess is, no. So stating the relationship in a long description would seem to me to work just fine.

Real examples of this kind of optionality are easy to find in course and qualification structures, but I can’t find good simple examples in real-life competence structures. In place of this, to give a flavour of an example, try the pizza making example I wrote about in post 13.

I might claim I can make pizza. That is a reasonable claim, but there are many options about what exactly it might entail. Someone might want to employ a pizza maker, and it will be entirely up to them what exactly they require — they may or may not accept a number of different options.

In the fictional BSHAPM pizza making framework, there is optionality in the sense that there are three different ways to prepare dough, two different ways to form the dough into the base, and three different ways of baking the pizza. Defining each of these as options still leaves it open how they are used. For example, it may be that the BSHAPM award a certificate for demonstrating just one of each set of options, but a diploma for demonstrating all of the options in each case. I could claim some or all of the options. An employer could well require some or all of the options.
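The certificate/diploma distinction can be sketched directly from the option sets. The BSHAPM framework is fictional, and all identifiers below are invented; the point is that marking parts as options leaves the rules for using them to each certificate, employer or claimant.

```python
# Fictional BSHAPM option sets (all identifiers invented for illustration).
OPTION_SETS = {
    "prepare dough": {"dough-hand", "dough-machine", "dough-premix"},
    "form base":     {"base-rolled", "base-stretched"},
    "bake pizza":    {"bake-wood", "bake-deck", "bake-conveyor"},
}

def certificate(demonstrated):
    """Award rule: at least one option from each set demonstrated."""
    return all(opts & demonstrated for opts in OPTION_SETS.values())

def diploma(demonstrated):
    """Award rule: every option in every set demonstrated."""
    return all(opts <= demonstrated for opts in OPTION_SETS.values())
```

An employer could apply a different predicate over the same option sets — say, requiring `bake-wood` specifically — without the framework itself changing.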

In the end, marking parts as options in a framework does not limit how they can be used. You could still superimpose necessity and optionality on top of a framework that was constructed simply in terms of broader and narrower. What it does do is allow tools that use a framework to calculate when something deemed necessary has been missed out. But marking a part as necessary doesn’t mean that a training course, or an assessment, necessarily has to cover that part. The training or assessment can be deliberately incomplete. Where optionality is really useful is in specifying job requirements, course and assessment coverage, and the abilities that a qualification signifies in general, leaving it to individual records to specify which options have been covered by a particular person, and leaving it to individual people to claim particular sets of detailed abilities.

Perhaps the argument concludes there: it is useful (I have argued) for people constructing competence structures to be able to indicate which parts are logically necessary and which parts are logically optional. Other tools and applications can build on that.

Does that make sense? Over to you, readers…

Meanwhile, next, I’ll write something about the logic of National Occupational Standards, because they are a very important source of existing structured competence-related definitions.

Different ways to represent the same logic

(14th in my logic of competence series)

Earlier this week I was at a meeting where we were talking about interoperability for abilities, and there was much discussion about the niceties of representation. Human readability is significant — whether the representation reflects what is in people’s minds. The same logic can be represented in radically different ways that are still logically equivalent (and so interoperable); there remains the question of what is identified by identifiers.

A relatively well-known example of variation of readability between different representations involves RDF. RDF/XML has a tendency to make people run a mile, as it can be difficult to comprehend what is represented. Triples formats (e.g. Turtle) at least have a great simplicity to them, because you can see clearly the mapping between the triples and the RDF “graph” (of blobs and arrows) that represents your little corner of the Semantic Web. (I am one of those who only started to appreciate RDF after getting over RDF/XML.) The problem with triples formats is that the knowledge structure is finely fragmented, so you don’t get a clear overview from the triples of what is being expressed: you still need a diagram that represents that overall structure. This is not a surprise — it is generally very hard to serialise a network structure in a comprehensible way. Only particular forms lend themselves to serialisation: e.g. strict tree structures.

In the case of the logic of competence, as I discussed in post 12 in the series, we want to represent both individual competence concepts (or abilities) and structures or frameworks that include several of them.

Published competence frameworks generally use plain text as a medium — they are not primarily graphical or diagrammatic — and have therefore in a sense already been serialised by the publishers. They come across overall rather like a tree structure, though there are very often cross-references and/or repetitions that betray the fact that the information is in reality more complex than a simple tree. But the structure is close enough to a tree to tempt people to want to represent it as such. This is nicely illustrated in Alan Paull’s comment on post 12. Alan’s ability definitions are nested within each other: a depth-first traversal serialisation of the tree if you like.

As Alan and I have agreed in conversation, it is possible to convert a tree-like competence structure to and from other forms. I’ll now give three other forms, and explain how the conversion can be done, and following that I’ll discuss the implications. I’ll call the forms: Atom-like; disjoint; and triples.

First, it can be transformed to a format similar to Atom, where each separate thing (for Atom: entry; here: ability) is given in a flat list of things, each thing including the links between it and other things. (Atom is the format we adopted for Leap2A.) To do this, you take each ability from the tree structure and put it into a flat list, replacing the relation to the nested ability with one using only its identifier, and adding a reverse link from the narrower ability to the broader ability it was removed from. It is also possible to reverse this procedure — start by finding the broadest ability definition (that is, the one which has no broader links) and then replace narrower links by the whole narrower ability definition, removing the narrower ability’s link to the broader ability. If a narrower ability has already been put in place, leave the reference in place, to avoid duplication.
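The tree-to-Atom-like direction can be sketched in a few lines. The dictionary keys (`id`, `title`, `narrower`, `broader`) are my own invented representation, not Leap2A or any published format; the sketch just shows the mechanical nature of the conversion.

```python
def flatten(node, broader_id=None, out=None):
    """Convert a nested (tree-like) ability structure into an Atom-like
    flat list, where each ability carries id-only links to its
    immediate neighbours in both directions."""
    if out is None:
        out = []
    flat = {"id": node["id"], "title": node["title"],
            "narrower": [child["id"] for child in node.get("narrower", [])],
            "broader": [broader_id] if broader_id else []}
    out.append(flat)
    for child in node.get("narrower", []):
        flatten(child, node["id"], out)
    return out

# A tiny tree-like structure (content invented for illustration):
tree = {"id": "A", "title": "Make pizza",
        "narrower": [{"id": "B", "title": "Prepare dough"},
                     {"id": "C", "title": "Bake pizza"}]}
```

Running `flatten(tree)` yields one flat entry per ability, each with reciprocal id-only links, as described above.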

Second, it can be transformed into a disjoint structure with all the relationships separated out. It’s perhaps easiest to imagine this starting from the Atom-like format, as in the Atom-like format each ability has already been separated out, and there are fewer steps to reach the disjoint form. For each link within each ability, convert it to a separate relationship whose subject is the ability where it is defined, and whose object is the ability referenced. Separate the relationships, leaving the ability definitions with no relationship information included within their structure. An extra step of de-duplication can then occur, because probably the Atom-like format had two representations of each relationship: A narrower B and also the equivalent B broader A. Only one of each pair like this is needed to represent the structure fully.

As in the previous case, it is straightforward to reverse this transformation. For each ability, find the relationships which involve that ability identifier. If the relationship has the ability identifier in the subject position, include a link to the object ability within the ability. If the ability identifier is in the object position, include a link with the reciprocal relationship to the other ability.
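A sketch of the Atom-like-to-disjoint step, including the de-duplication of reciprocal pairs, might look like this (again, the dictionary keys and sample content are my own assumptions):

```python
def to_disjoint(flat):
    """From the Atom-like form, strip the links out of each ability and
    collect the relationships separately, de-duplicating reciprocal
    pairs (A narrower B / B broader A) by normalising to 'narrower'."""
    abilities = [{"id": a["id"], "title": a["title"]} for a in flat]
    rels = set()
    for a in flat:
        for n in a.get("narrower", []):
            rels.add((a["id"], "narrower", n))
        for b in a.get("broader", []):
            rels.add((b, "narrower", a["id"]))  # normalise the reciprocal
    return abilities, rels

# Atom-like sample: the one relationship appears twice, once per direction.
flat = [
    {"id": "A", "title": "Make pizza", "narrower": ["B"], "broader": []},
    {"id": "B", "title": "Bake pizza", "narrower": [], "broader": ["A"]},
]
```

Because both directions normalise to the same tuple, the set keeps only one of each pair, as the text describes.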

Third, it can be transformed by being broken right down into RDF triples. As before, it is easiest to start with the nearest other form — in this case the disjoint one. Take each disjoint ability definition (without relationships). This should convert to a set of triples each with the ability identifier as subject, and probably a literal object. The separate relationships are already in a triple-like format, so they can be converted very easily. To reverse this transformation, examine each triple in turn. If subject and object are ability identifiers, turn the triple into a relationship. Then, for each ability, find all triples that have the identifier of that ability as the subject, and have a literal object, and build a single ability structure out of that set.
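The disjoint-to-triples step is the most mechanical of the three. A sketch, using plain tuples rather than a real RDF library, and my same invented keys:

```python
def to_triples(abilities, rels):
    """Break the disjoint form right down into (subject, predicate,
    object) triples. Literal properties (title etc.) hang off the
    ability identifier; the separated relationships are already
    triple-shaped, so they pass straight through."""
    triples = set(rels)
    for a in abilities:
        for key, value in a.items():
            if key != "id":
                triples.add((a["id"], key, value))
    return triples
```

The reverse direction is as described above: triples whose subject and object are both ability identifiers become relationships, and the remainder are grouped by subject into ability structures.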

Now we’ve seen that these different formats are interconvertible, so which one you use does not impede the communication of a complete ability or competence framework. Where they do differ, however, is in what identifiers are seen to identify, and that does have implications, at least for human use.

Identifiers in RDF triples don’t really identify anything by themselves. An RDF resource is simply a node, with a URI as an identifier. RDF relationships have been called predicates or properties, which is nicely ambivalent about how tight the relationship is. RDF doesn’t tell you which relations relate to things that should be considered as part of the essence of the identified “resource” — or what is inside the “skin” of the resource, if you like. The only thing you can say, when grouping RDF triples together, is that literal properties don’t make any sense by themselves, so they can be seen as attached to, or hang off, a “resource”. In the discussion above, we have assumed that the abilities are the only kind of resources we are dealing with, and that will guide the conversion from the “triples” form to the “disjoint” form.

In the disjoint form, literal properties are grouped with the abilities they are properties of. These properties are likely to include the very well-known ones of title and description at least. The fact that relations are listed separately implies that the relationships are less essential to the nature of an ability than its title and description. In the Atom-like form, an identifier looks like it refers to an ability together with all of its immediate relationships. But in the tree-like form, the identifier of a broader ability seems to refer to the complete structure branching down from it.

Which of these is the most useful or flexible way to identify abilities? That is a real question, and I believe it was the question implicitly underlying much of the discussion at the meeting I participated in earlier this week.

One way of tackling the issue of what is the most useful way of doing identifiers is to look at when you would want to change the identifier. There’s not much one can say about this for RDF triples. For the disjoint form, an identifier would want to change typically when the title or description changes. For Atom-like form, the identifier might reasonably change if any of the direct relationships changed. For broader tree-like structures, the implication is that the identifier should change if any of the structure changes.

When an ability identifier changes is significant. Effective connection between what is taught, learned, assessed, required, claimed or evidenced is only assured if the same identifier is used. If different ones are used for essentially the same ability, extra provision needs to be made to ensure, e.g., that evidence for the ability under one ID can be used to fulfil requirements under a different ID. That provision might be in terms of declaring that two ability IDs actually are equivalent. So, generally speaking, it is reasonable to have ability identifiers changing only when necessary — when what the ability means in practice has actually changed.

So now we can ask: which approach to structuring ability or competence definitions delivers this outcome of needing changed identifiers no more (and no less) than necessary?

The first sub-question I’d like to address is: should changing structure always require changing identifier? My answer is clearly, no, not always, and this is the reasoning. Yes, of course you should change the identifier if the content has changed. But structure change does not strictly imply content change. After thinking a long time about this, I think the clearest example is with intermediate layers of structure. And, happily, this is illustrated in real life with several UK National Occupational Standards. OK, so imagine we have a three-layer competence / ability structure.

Top ability A has two sub-abilities, B and C. B is further divided into P, Q and R, while C is further divided into X, Y and Z. (In real life, there would usually be more.)

The body that defines the structure decides that the justification for the intermediate layer is rather flimsy, and removes it, leaving the structure that ability A has direct sub-abilities P, Q, R, X, Y and Z. The title and long definition of A are unchanged. Is A the same ability? I would answer, unequivocally, yes it is, because all evidence for the former A is also equally evidence for the latter A.

Or apply this to the BSHAPM pizza making ability example. A stands for the ability to make pizza the BSHAPM way. B could be baking pizza. P, Q and R could be the three approaches to pizza baking. The BSHAPM could decide that, for simplicity, they wanted to eliminate the node of baking pizza as a separate ability, and instead represent the three approaches to baking pizza as direct sub-abilities of pizza making.
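The claim that the former A and the latter A have identical evidence can be made concrete. In this sketch (representation and names my own), the finest-granularity abilities reachable from A are the same whether or not the intermediate layer is present:

```python
def leaves(structure, root):
    """Finest-granularity abilities reachable from `root` in a
    {broader: [narrower, ...]} structure (representation invented
    here for illustration)."""
    subs = structure.get(root, [])
    if not subs:
        return {root}
    return set().union(*(leaves(structure, s) for s in subs))

# Three-layer structure, and the same structure with the intermediate
# layer (B and C) removed:
three_layer = {"A": ["B", "C"], "B": ["P", "Q", "R"], "C": ["X", "Y", "Z"]}
flat_layer  = {"A": ["P", "Q", "R", "X", "Y", "Z"]}
```

Since `leaves(three_layer, "A")` and `leaves(flat_layer, "A")` are the same set, anything evidencing the former A evidences the latter A, which is the argument for keeping A’s identifier unchanged.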

Now if you cling to the view that changes in structure must result in changes of identifier, this means that you will need to declare, and process, a whole extra kind of relationship: that the former A is equivalent to the latter A. This strikes me as unnecessary and quite possibly confusing. Possible: yes; ideal: no. The same example also goes against the Atom-style idea of ability identity. The immediate relationships of ability A change in this scenario, without the ability itself changing at all.

Thus, if we still want to deliver the outcome of changing the identifier only as much as necessary, not more, we are driven to the next type of structural representation, the “disjoint” one. But this comes with a caution. If we are not including the structure as an essential part of the ability or competence definition, we need to be sure that we aren’t cutting corners, and omitting to give a full description of the ability that we can use as a proper definition. Sometimes this may happen in cases where the structure is defined at the same time as the contained abilities. We may simply say that ability A is defined as the sum of abilities B, C and D. Then we risk not noticing that the substance, the content, of an ability has changed, when we change it to being composed of B, C, D and E.

So, there is a requirement, in using this “disjoint” approach, that we properly define the ability, in such a way that if an extra component is added, we feel we need to change the definition, and thus the identifier with it. I would say that amounts to no more than good practice. At the very least, we should have a long description that states that ability A consists of B, C and D (or B, C, D and E). Or we may choose to make explicit, in text that is not formally structured, the fact that ability A is actually made up of the things that are grouped together under the headings, B, C, etc. Usually, ability A will actually have more to it than simply the sum of its parts: one would expect at least that ability A would include the ability to recognise when abilities B, C, etc. are appropriate, and apply them accordingly. So, again, failing to write a full definition or long description is laziness and bad practice.

This reflects back on what I said earlier about a structure doubling as a concept in its own right, or in other words a framework doubling as an ability definition (a passage I have now edited, so as not to leave too much hanging around that I no longer believe). Perhaps that needs qualifying and clarifying now. The way I would put it now is that in authoring a competence structure, I am usually implicitly defining a competence concept, but good practice demands that I define that concept explicitly in its own right. It is then true to say that the structure “gives structure to” the concept, in the sense that it details a certain set of narrower parts that the broader concept “contains”. But that is certainly not the only way of structuring the concept. My example based on real NOS cases is only the tip of the iceberg — it is very easy indeed to make up endless examples where the same broad ability is structured in different ways.

It is also not true that a structure necessarily defines a clear single concept. In many cases (such as my BSHAPM pizza making ability) it may, but in very broad cases it may not. We cannot have that as a requirement for a representation. Thus, contrary to what I wrote at one point previously, it is plausible to have a structure or framework title and definition that is independent of an ability title or definition. It’s just that you can’t use one as the other, and it’s more usual, in less broad cases, to have the structure and the ability concept closely related, perhaps even sharing the title. The structure should not, however, have a long description anything like that of an ability concept.

Thus, the structure “gives structure to” the concept, and the concept “is structured by” the structure.

Perhaps it is worth remembering that a major envisaged use of these structures, in their structured electronic (rather than less explicitly formatted document) form, is to give learners a set of discrete concepts to which evidence can apply, which can be self-assessed or assessed by others, and which can be claimed or required. Some kind of “container” element at least is necessary, in which an ability can be contained or not. The container seems to be exactly the explicit framework structure. In the three-layer example above, the definition, identifier and title of ability A can remain the same, while the framework structure can change from containing A, B, C, P, Q, R, X, Y and Z in the first case to containing A, P, Q, R, X, Y and Z in the latter. Applied to pizza making, the frameworks would have to be given better titles than “structure for baking pizzas (layers)” and “structure for baking pizzas (flat)”!
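This three-layer scenario can be sketched in data. The following Python fragment is a minimal illustration only, using invented example.org URIs as placeholders for real identifiers: the framework’s set of contained abilities changes between versions, while the identifier (and hence identity) of ability A stays fixed.

```python
# Hypothetical URIs for illustration only; no real identifier scheme is implied.
def uri(name):
    return "http://example.org/ability/" + name

ABILITY_A = uri("A")

# Two successive versions of the framework structure. Only the structure's
# contents change; ability A keeps its definition, title and identifier.
framework_v1 = {"uri": "http://example.org/framework/v1",
                "contains": {uri(n) for n in "ABCPQRXYZ"}}
framework_v2 = {"uri": "http://example.org/framework/v2",
                "contains": {uri(n) for n in "APQRXYZ"}}

# The structural change removed B and C, but A itself is untouched.
removed = framework_v1["contains"] - framework_v2["contains"]
```

The point the sketch makes is simply that the container (the framework) carries its own identifier, so it can change freely without forcing a new identifier onto the unchanged ability.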

I’d like to conclude by pointing out the trade-off involved in taking the different paths.

  • if you proliferate identifiers due to changing identifiers each time the structure changes, you’ll need extra mechanisms to pin together different “versions” of ability definitions that differ only in their structure, or the extent of their structure, not in their substance or content;
  • if you want to keep the same ability identifier when the structure changes but not the content, you’ll need to take care to make long descriptions explicit, and to separate the identifiers for structure and ability, pinning them closely together where appropriate.

With the latter option, I’m claiming that the ability identifier will naturally change in just those cases where one would want it to change. The cost is getting one’s head round the difference between the ability and the structure. I firmly believe it is worth the effort for system designers to do this, so that the software can handle things properly behind the scenes, while not needing to trouble the end users with thinking about these matters. I’m also suggesting that the latter option requirements embody good practice, where the former ones do not.

This gives us one step forward on the structure diagram from the last two posts, 12 and 13.

Next I’ll firm up on why allowing optionality in competence structures is a good idea, before going on to say more about how to represent level attributions and level definitions.

Structural relations for competence concepts

(13th in my logic of competence series)

My recent thoughts on how to represent the interplay between competence definitions and structures seem to be stimulating but probably not convincing enough. This post tries to clarify the issues more thoroughly, at the same time as introducing more proposals about the structural relationships themselves.

Clear explanation seems always to benefit from a good example. Let me introduce my example first of all. It is based on a real ability — the ability to make pizzas — but in the interests of being easy to follow, it is artificially simplified. It is unrealistic to expect anyone to define a structure for something that is so straightforward. But I hope it illustrates the points fairly clearly.

I’m imagining a British Society of Home and Amateur Pizza Makers (BSHAPM), dedicated to promoting pizza making competence at home, through various channels. (The Associazione Verace Pizza Napoletana does apparently exist, however, and organises training in pizza making, which does have some kind of syllabus, but it is not fully defined on their web site.) BSHAPM decides to create and publish a framework for pizza making competence, liberally licensed, so that it can be referred to by schools, magazines, and cookery and recipe sites. A few BSHAPM members own traditional wood-fired pizza ovens which they occasionally share with other members. There are also some commercial pizza outlets that have an arrangement with the BSHAPM.

The BSHAPM framework is about their view of competence in making pizzas. In line with an “active verb” approach to defining skills, it is simply entitled “Make pizza”. The outline of the BSHAPM framework is here:

  • prepare pizza dough
    • with fresh yeast
    • with dried yeast
    • with non-yeast raising agents
  • form dough into a pizza base
    • by hand in the air
    • with a rolling pin on a work surface
  • prepare a pizza base sauce from available ingredients
  • select, prepare and arrange toppings according to eater’s needs and choices
  • prepare complete pizza for baking
  • bake pizza
    • in kitchen oven
    • in a traditional wood-fired oven
    • in a commercial pizza oven

The framework goes into more detail than that, which will not be needed here. It also specifies several knowledge items, both for the overall making pizza framework, and for the subsidiary abilities.

Clearly the ability detailed in the BSHAPM “make pizza” framework is a different ability to several other abilities that could also be called “make pizzas” — for instance, the idea that making pizzas involves going to a shop, buying ready-made pizzas, and putting them in the oven for an amount of time specified on the packaging.

As well as the BSHAPM framework, we could also imagine related structures or frameworks, for:

  • all food preparation
  • general baking
  • making bread
  • preparing dough for bread etc.

So, let’s start on what I hope can be common ground, clarifying the previous post, and referring to the pizza example as appropriate.

Point 1: Frameworks can often be seen as ability definitions. The BSHAPM concept of what it takes to “make pizza” represents an ability that people could potentially want to claim, provide evidence for, or (as employers) require from potential employees. It could be given a long description that explains all the aspects of making pizza, including the various subsidiary abilities. At the same time, it defines a way of analysing the ability to make pizza in terms of those subsidiary abilities. In this case, these are different aspects or presentations of the same concept.

Point 2: Each component concept definition may be used by itself. While it is unlikely that someone would want to evidence those subsidiary abilities, it is perfectly reasonable to suppose that they either could form part of a course curriculum, or could be items to check off in a computer-based system for someone to develop their own pizza-making skills. They are abilities in their own right. On the other hand, it is highly plausible that some other curriculum, or some other learning tracking system, might want not to represent the subsidiary abilities as separate items, particularly in cases where the overall competence claimed were at a higher level. In this case (though not generally) the subsidiary abilities are reasonably related to steps in the pizza making process, and we could imagine a pizza production line with the process at each stage focusing on just one subsidiary ability.

Point 3: The structure information can in principle be separated from the concept definitions. Each ability definition, including the main one of “make pizza” and each subsidiary one, can be defined in its own right and quoted separately. The “rest” of the framework in this case is simply documenting what is seen as part of what: the fact that preparing pizza dough is part of making pizza; baking the pizza is part of making pizza, etc., etc.

Point 4: The structure information by itself is not a competence concept, and does not look like a framework. One cannot claim, produce evidence for, or require the structural links between concepts, but only the items referred to by that structure information. It is stretching a point, probably to breaking point, to call the structural information by itself a framework.

Point 5: Including a structured ability in a framework requires a choice between including only the self-standing concept and including all the subsidiary definitions. To illustrate this, if one wanted to include “make pizza” in a more general framework for baking skills, there is the question of the level of granularity that is desired. If, for example, the subsidiary skills are regarded as too trivial to evidence, train, assess, etc., perhaps because they would be part of a higher-level course, then “make pizza” by itself would suffice. It is clearly the same concept though, because it would be understood in the same way as it is with its subsidiary abilities separately defined. But if all the subsidiary concepts are wanted, it is in effect including a structure within a structure.

These initial points may be somewhat obscured by the fact that some frameworks are very broad — too broad to be mastered by any one person, or perhaps too broad to have any meaningful definition as a self-standing competence-related concept. Take the European Qualifications Framework (EQF), for example, which has been mentioned in previous posts (4; 10). We don’t think of the EQF as being a single concept. But that is fine, because the EQF doesn’t attempt to define abilities in themselves, just level characteristics of those abilities.

There are other large frameworks that might be seen as more borderline. Medicine and the health-related professions provide many examples of frameworks and structures. The UK General Medical Council (GMC) publishes Good Medical Practice (GMP), a very broad framework covering the breadth of being a medical practitioner. It could represent the structure of the GMC’s definition of what it is to “practise medicine well”, though that idea may take some getting used to. The question of how to include GMP in a broader framework will never practically arise, because it is already quite large enough to fill any professional life completely. (Ars longa, vita brevis…)

It is for the narrower ranges of ability, skill or competence that the cases for points 1 and 5 are clearest. This is why I have chosen a very narrow example, of making pizza. For this, we can reflect on two questions about representation, and the interplay between frameworks and self-standing concept definitions.

  • Question A: What would be a good representation of a structure to be included within a wider structure?
  • Question B: What difference is there between that and just the main self-standing concept being included?

So let’s try to elaborate and choose a method for Point 3 — separating self-standing concept definitions from structural information. Representing the self-standing concepts is relatively clear: they need separate identifiers so that they can be referred to separately, and reused in different structures. The question to answer first, before addressing A and B, is how to represent the structure information.

  1. Option one is to strip out all the relations, and bundle them together separately from all the concept definitions. “Make pizza” at the broadest, and the other narrower abilities including “bake pizza”, would all be separate concepts; the “structure for” making pizza would be separately identified. The “make pizza” framework would then be the ensemble of concept definitions and structure.
  2. Option two is to give an identifier to the framework, where the framework consists of the concepts plus the structure information, and not give an identifier to the structure information by itself.

Let’s look again at this structural information with an eye on whether or not it could need an identifier. The structural information within the imagined BSHAPM framework for making pizza would contain the relations between the various ability concepts. There would be necessary part and optional part relations. A necessary part of making pizza the BSHAPM way is to make the dough, but how the dough is made has three options. Another necessary part is to form the pizza base, and that has two options. And so on.

So, perhaps now we are ready to compare the answers to the questions A and B asked above. To include one self-standing concept in another framework requires that the main concept is represented with its own identifier, because both an identifier for the framework, and an identifier for the structural information, would imply the inclusion of subsidiary abilities, and those are not wanted. To include the framework as a whole, on the other hand, there is a marked difference between options one and two. In option one, both the identifier for the main concept and the identifier for the structural information need to be associated with the broader concept, to indicate that the whole structure, not just the one self-standing concept, is included in the broader framework. Even if we still have the helpful duality of concept and structure, the picture looks something like this (representing option 1):


If we had to represent the concept and structure entirely separately, the implementation would surely look still worse.

Moving forward, option two looks a lot neater. If the framework URI is clearly linked (through one structural relation) to the main concept definition, there is no need for extra optional URIs in the relations. It’s worth saying at this point that, just as with so many computational concepts, it is possible to do it in many ways, but it is the one that makes most intuitive sense, and is easiest to work with, that is likely to be the one chosen. So here is my preferred solution (representing option 2):


To propose a little more detail here: the types of relationship could cover which concepts are defined within the framework, and specifying the main concept, as well as the general broader and narrower relationships, in two variants — necessaryPart and optionalPart. (I’ve now added more detailed justification for the value of this distinction in post 15 in this series.)
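As a sketch of what option 2’s relation types might look like in practice, here is a set of plain triples in Python. The property names (mainConcept, definesConcept, necessaryPart, optionalPart) follow the proposal above, but the exact spellings and the example.org URIs are my illustrative assumptions, not any published vocabulary.

```python
EX = "http://example.org/"  # hypothetical namespace for illustration

triples = [
    # Option 2: the framework's URI is linked to its main concept...
    (EX + "framework/make-pizza",  "mainConcept",    EX + "concept/make-pizza"),
    # ...and to every concept defined within the framework.
    (EX + "framework/make-pizza",  "definesConcept", EX + "concept/prepare-dough"),
    (EX + "framework/make-pizza",  "definesConcept", EX + "concept/bake-pizza"),
    # Narrower-part relations, in the two proposed variants.
    (EX + "concept/make-pizza",    "necessaryPart",  EX + "concept/prepare-dough"),
    (EX + "concept/make-pizza",    "necessaryPart",  EX + "concept/bake-pizza"),
    (EX + "concept/prepare-dough", "optionalPart",   EX + "concept/dough-fresh-yeast"),
    (EX + "concept/prepare-dough", "optionalPart",   EX + "concept/dough-dried-yeast"),
    (EX + "concept/bake-pizza",    "optionalPart",   EX + "concept/bake-wood-fired"),
]

def objects(subject, relation):
    """All objects related to the given subject by the given relation."""
    return {o for s, p, o in triples if s == subject and p == relation}
```

Note how the framework URI and the main concept URI remain distinct but closely pinned together, which is exactly the duality argued for above.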

[The following paragraph was altered after reconsideration, 2011-05-26]

One of the practical considerations for implementation is: what has to happen when something that is likely to change, changes? What can and cannot change independently? It seems clear (to me at least) that if a framework changes substantially, so that it no longer expresses the same ability, the main concept expressed by that framework must also change. Evidence for a “make pizza” concept whose structure does not include preparing dough doesn’t provide full evidence for the BSHAPM concept. It is a different concept. On the other hand, if the long description of the concept remains the same, it is possible to have a different structure expressing the same concept. One obvious way in which this is possible is that one structure for BSHAPM pizza making could include just the abilities listed above, while a different structure for exactly the same ability concept could include a further layer of detail, for example spelling out the subsidiary abilities needed to make a pizza base in the air without a rolling pin. (It looks tricky: I’ve seen it done once but never tried it!) Nothing has actually changed; the structure is simply more detailed, with more of the subsidiary abilities made explicit.

These arrangements still support the main idea, valuable for reuse, that the concept definitions can remain the same even if they are combined differently in a different framework.

There are two other features that, for me, reinforce the desirability of option 2 over option 1. They are, first, that various metadata can be helpfully shared, and second, that a range of subsidiary competence concepts can be included in a framework. Explanation follows here.

[The following paragraph was altered after reconsideration, 2011-06-08]

First, I am now saying that you can’t change a framework structure substantially without changing the nature of the main competence concept that stands for competence in all the framework’s abilities taken together. The structure or framework would probably be called something like “the … framework”, where the title of the main concept goes in place of the dots. The two titles are not truly independent, but they need differentiation, because of the different usage (and indeed meaning) of the competence structure and the competence concept.

Second, if we have an identified framework including a main concept definition as in option 2, there seems no reason why it should not, in the same way, include all the other subsidiary definitions that are defined within the framework. This seems to me to capture very well the common-sense idea that the framework is the collection of things defined within it, plus their relationships. Concepts imported from elsewhere would be clearly visible. In contrast, if the structural information alone is highlighted as in option 1, there is no obvious way of telling, without an extra mechanism, which of the URIs used in the structure are native to this framework, and which are imported foreign ones.

There are probably more reasons for favouring option 2 over option 1 that I have not thought of for now — if, on the other hand, you can think of any arguments pointing the other way, please comment below.

If I had more time to write this, it would probably be more coherent and persuasive. Which reminds me, I’m open to offers of collaboration for turning these ideas into more tightly argued and defended cases for publication somewhere.

But there is more to finish off — I would like to cover the rest of the relationships that I see as important in the practical as well as logical representation of competence.

However, after writing the original version of this post, I have had some very useful discussions with others involved in this area, reflections on which are given in the next post.

Representing the interplay between competence definitions and structures

(12th in my logic of competence series, heavily edited 2011-06-17)

One of the keys to a fuller understanding of the logic of competence is the interplay between, on the one hand, the individual definition of an ability or competence concept, and on the other hand, a framework or structure containing several related definitions. Implementing a logically sound representation of competence concepts depends on this fuller understanding, and this was one of the requirements presented in the previous post.

A framework by its very nature “includes” (in some sense to be better understood later) several concept definitions. Equally, when one analyses any competence concept, it is necessarily in terms of other concepts at least related to competence. There would seem to be little difference in principle between these more and less extensive structures.

When we consider a small structure it is easier to see that small structures usually double as concepts in their own right. To illustrate this, let’s consider UK National Occupational Standards (NOSs) again. Back in post number 3 we met some LANTRA standards, where an example of a unit, within the general topic of “Production Horticulture”, is the one with the name “Set out and establish crops”. In this unit, there is a general description — which is a description or definition of the competence concept, not a definition of its components — and then lists of “what you must be able to do” and “what you must know and understand”. The ability items are probably not the kinds of things you would want to claim separately (there are too many of them), but nevertheless they could easily be used in checklists both for learning and for assessment. My guess is that a claim of competence would be much more likely to reference the unit title in this case.

From this it does appear that a NOS unit simultaneously defines a competence concept definition, and also gives structure to that concept.

It is when you consider the use of these competence-related structures that this point reveals its real significance. Perhaps the most important use of these structures by individuals is in their learning, education and training, and in the assessment of what they have learned. In learning a skill or competence, or learning what is required to fulfil an occupational role, the learner has many reasons to have some kind of checklist with which to monitor and measure progress. What exactly appears on that checklist, and how granular it is, makes a lot of difference to a learner. There can easily be too much or too little. Too few items on the list would mean that each item covers a lot of ground, and it may not be clear how learners should assess their own ability in that area. To take the LANTRA example above, it is not clear to a learner what “Set out and establish crops” involves, and learners may have different ideas. The evidence a learner brings may not satisfy an employer. At the other extreme, I really wouldn’t want a system to expect me to produce separate evidence for whether I can start a tractor engine, find the gears, and steer the machine. That would be onerous and practically useless.

Structures for practical use in ICT tools need, therefore, to be clear about what is included as a separate competence-related definition within the structure, and what is not included, or included only as part of the description of a wider definition.

The “Set out and establish crops” LANTRA unit does have a clear structure, and the smallest parts of that structure are the individual ability and knowledge items — what someone is expected to be “able to do”, and to “know and understand”. And let us suppose that we formalise that unit in that way, so that an ICT learning or assessment support tool allowed learners to check off or provide evidence for the separate items — e.g. that they could “ensure the growing medium is in a suitable condition for planting” and that they knew and understood the “methods of preparing growing media for planting relevant to the enterprise”.

Then, suppose we wanted to include this unit in another structure or framework. Would we want, perhaps, just one “box” to be ticked only for the unit title, “Set out and establish crops”; or would we want two boxes corresponding to the elements of the unit, “Set out crops in growing medium” and “Establish crops in growing medium”; or would we rather have all the ability and knowledge items as separate boxes? None of these options are inherently implausible.
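The three inclusion choices can be sketched as code. This Python fragment abridges the LANTRA unit to a couple of representative items, and the function name and dictionary keys are my own illustrative inventions, not anything defined by LANTRA:

```python
# An abridged, illustrative rendering of the LANTRA unit (not the full standard).
unit = {
    "title": "Set out and establish crops",
    "elements": ["Set out crops in growing medium",
                 "Establish crops in growing medium"],
    "items": ["ensure the growing medium is in a suitable condition for planting",
              "methods of preparing growing media for planting relevant "
              "to the enterprise"],
}

def tick_boxes(unit, granularity):
    """Return the check-box labels for one of the three inclusion choices."""
    if granularity == "unit":
        return [unit["title"]]          # one box for the unit title only
    if granularity == "element":
        return list(unit["elements"])   # a box per element of the unit
    if granularity == "item":
        return list(unit["items"])      # a box per ability or knowledge item
    raise ValueError("unknown granularity: " + granularity)
```

Each call yields a different, equally plausible checklist, which is exactly why the including framework has to make the choice explicit.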

To put this in a different way: when we want to include one competence-related structure within another one, how do we say whether we just want it as a single item, or whether we are including all the structure that comes with it? The very fact that this is a meaningful question relies on the dual nature of a definition, as either a “stand-alone” concept, or structure.

The solution I propose here is that we have two identifiers: one for the concept definition itself, and one for the structure definition that includes the concept. I understand these as closely related. But the description of the structure would perhaps more naturally talk about what it is used for, and wouldn’t be the kind of thing that you can claim, while the description of the concept would indeed describe something that one could claim or require. The structure is structured, whereas the definition of the concept by itself is not, and the structure relies on definitions of subsidiary concepts, which are the parts of the whole original concept.

I illustrated a change of wording in the eighth post, raising, but not answering the question of whether the concept remained unchanged across the changed wording of the definition.

Let’s look at this from another angle. If you author a framework or structure, in doing that you are at the same time authoring at least one competence-related concept definition, and there may be one main concept for that structure or framework. You will often be authoring other subsidiary definitions, which are parts of your structure. You may also include, in your structure or framework, definitions authored by others, or at another time. Indeed, it is possible that all the other components come from elsewhere, in which case you will be authoring only the minimal one concept definition.

One more question that I hope will clarify things still further: what is the title of a structure? The LANTRA examples illustrate that a structure may have no title separate from the title of the main concept definition contained in it. But really, and properly, the title of the structure should be something like “the structure for <this competence concept>”.

Contrast this with the subsidiary concept definitions given in the structure. Their titles and descriptions clearly must be different. They may be defined at the same time as the structure, or they may have been defined previously, and be being reused by the structure.

Exactly how all this is represented is a matter for the “binding” technology adopted. Representing in terms of an XML schema will look quite different from RDF triples, or XHTML with RDFa. I’ll deal with those matters in a later post. But, however implemented, I do think we have the beginnings of a list of information model features that are necessary (or at least highly desirable) for representing this interplay between competence definitions and structures. (I will here assume that identifiers are URIs.)

  1. The structure (or framework) has its own URI.
  2. The structure often has a “main” concept that represents the structure taken as a separate concept.
  3. The structure cannot be represented without referring to the concept definitions that are structured by it.
  4. Each concept definition, including the main concept and subsidiary concepts, has its own URI.
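The four features can be sketched as a minimal information model. This Python fragment is only one possible shape for such a model; the class and field names are my illustrative assumptions, as are the example.org URIs.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ConceptDefinition:
    uri: str            # feature 4: each concept definition has its own URI
    title: str
    description: str = ""

@dataclass
class Structure:
    uri: str                            # feature 1: the structure has its own URI
    concepts: List[ConceptDefinition]   # feature 3: it refers to concept definitions
    main: Optional[ConceptDefinition] = None   # feature 2: often a "main" concept
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

make_pizza = ConceptDefinition("http://example.org/concept/make-pizza", "Make pizza")
framework = Structure(
    uri="http://example.org/framework/make-pizza",
    concepts=[make_pizza,
              ConceptDefinition("http://example.org/concept/bake-pizza",
                                "Bake pizza")],
    main=make_pizza,
)
```

The design choice worth noticing is that the structure’s URI and the main concept’s URI are deliberately different, so either can be referred to (and reused) without dragging in the other.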

I’ve tried to represent this in a small diagram. See what you think…
(note that the colours bear no relation to colours in my other concept maps)


Of course, as well as the URIs, titles and descriptions, there is much more in structures or frameworks to represent, particularly about the relations between the concepts. So it is to the practical implementation of this that I turn next.

Requirements for implementing the logic of competence

(11th in my logic of competence series)

Having discussed, defined, and mapped the principal features of the concepts of ability and competence, we are left with the challenge of working towards “the practical implementation of such competence structures” (ninth post) by looking at the “detailed structure and relationships of ability concepts and structures that contain several of them” (tenth post) and working towards a particular formalisation that represents those concepts adequately for the uses that are envisaged.

At this point, I’ll look back over the posts so far to collect what look like the principal requirements for implementing representations of competence in an interoperable way.

The first post in the series noted that the basis of “what is required” is logically the claim of, or the need for, an ability or competence. Thus an implementation should represent the analysis of “what is required” in terms of abilities. On reaching the sixth post, it was clear that the description of what is required can be formalised to an arbitrary degree, and analysed to an arbitrary granularity, so the formal structures used will need to be flexible rather than rigid.

The second post in the series briefly details the issue of transferability or commonality between different roles. Any formalisation should NOT try to answer questions of transferability, but rather provide a good basis for posing and answering those questions within their own domains.

The third post introduces the idea of abstractions in competence or ability definitions, and “common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other”. A common language is a language that is reused in different contexts. Particularly when concepts are used in different contexts, it is vital to identify them clearly, so that there is a minimum of ambiguity. This is not the place to argue that the obvious choice of unambiguous identifier is the URI, but that is what I assume. A URI needs to be given to any ability or competence concept or framework that may plausibly need to be referred to across different contexts or applications. This obviously includes both the case from the second post of transferring between different occupational contexts, and the case from the third and later posts of what is being learned in education or training contexts being used in occupational ones.

The third post also started to look at some of the large body of UK National Occupational Standards (NOSs). One common-sense requirement is that any common representation needs to relate to existing relevant materials. Doing this sets up the possibility of broad and fast adoption (politics and other factors being favourable, and with a fair wind), whereas failure to do this sets up the barrier of having to revise existing materials before adoption. Each NOS is clearly a hierarchically structured document, so a common representation must at least deal with simple hierarchical structures.

The fourth post on levels suggests that a simple hierarchy will often not be sufficient. Both claims and requirements need to be able to include levels, and the representation of levels must allow automatic inferences about higher and lower levels.
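For instance, a minimal sketch of such an inference, assuming levels form a simple total order (the level labels here are placeholders of my own, not any real framework’s levels):

```python
# Placeholder level scale, ordered from lowest to highest.
LEVELS = ["level 1", "level 2", "level 3", "level 4", "level 5"]

def satisfies(claimed_level, required_level):
    """A claim at one level automatically satisfies a requirement at that
    level or any lower level, given a total ordering of levels."""
    return LEVELS.index(claimed_level) >= LEVELS.index(required_level)
```

Real level frameworks may be only partially ordered, in which case the comparison would need to follow explicit higher-than relations rather than a single list, but the inference pattern is the same.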

The fifth post proposes the requirement for a formal representation to cover the kinds of conditions cited in personal claims and job specifications, that go beyond and detail abstract definitions.

The sixth post starts to suggest some technology ideas for the formal structures, starting with SKOS.

The seventh post points out that decomposition is not the only way of analysing competence concepts. We also need the idea of style, variant, or approach to doing “what is required”. (Though this post did not finally resolve how variants, optionality and levels relate to each other.)

The eighth and ninth posts recognise the value in being able to represent equivalencies and comparisons, across different structures or frameworks as well as within them, and propose using the SKOS Mapping Properties for this purpose.

Listing these requirements in brief, we seem to have something like this:

  1. represent competence concepts suitably for reuse
  2. represent analysis of competence in terms of abilities
  3. deal with levels helpfully
  4. cover claims and occupational requirements
  5. use SKOS as a basis
  6. represent styles or approaches as well as decomposition
  7. represent relations across different frameworks
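As a rough illustration of how requirements 1, 2 and 5 might combine, here is a sketch of a NOS-like hierarchy expressed as SKOS-style triples, in plain Python tuples rather than a real RDF toolkit. The concept URIs and the sub-element names are hypothetical.

```python
# Hypothetical URIs; a real publication would use the owning body's namespace.
EX = "http://example.org/nos/"
SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    (EX + "CU5",   SKOS + "prefLabel", "Maintain and develop personal performance"),
    (EX + "CU5.1", SKOS + "broader",   EX + "CU5"),  # decomposition as a hierarchy
    (EX + "CU5.2", SKOS + "broader",   EX + "CU5"),
]

def parts_of(concept: str) -> list:
    """Follow the skos:broader links in reverse to list a concept's parts."""
    return [s for (s, p, o) in triples
            if p == SKOS + "broader" and o == concept]

parts_of(EX + "CU5")  # the two sub-elements
```

Nothing here is specific to any one framework, which is the point of using SKOS as the basis: the same three requirements are met by the same small vocabulary everywhere.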

Putting all my proposals for meeting these requirements here would make this post uncomfortably long, so instead I’ll break it down into more bite-sized chunks. (If I change my mind on how to structure the following posts, I’ll change it here as well, and in any case I’ll link from here to following posts when written.)

First, I’ll deal with how we can formally represent individual competence concepts and frameworks so that the structures contain existing materials, can work well together, and can be fully reused.

Next, I’ll put forward my developed ideas on how to represent the structural relationships between competence concepts, and tag on dealing with categories.

Later, I’ll deal properly with the tricky area of levels, for which up to now I have not come across any really convincing solutions.

I’ll do these all with the help of diagrams, representing not the conceptual connections of the previous post, but information modelling connections. This will come together in a big diagram.

I also want to compare and contrast with diagrams representing other past attempts to represent these things, but I haven’t yet decided whether to try to cover that bit by bit while first putting forward the ideas, or to do a big post that covers several alternatives.

After that, for real implementation, we’ll need to discuss the “binding” question — that is, the different ways of representing this emerging information model, particularly looking at XML, Atom, RDF triples, and XHTML+RDFa.

And I hope also to say a word about my great collaborators, the projects we have done or are doing together, and how this work relates to those projects.

At that point, I hope to be able to conclude the series, having outlined a fair solution to the practical representation of the logic of competence!

Now, on to the question of representing how definitions and structures relate to each other.

Competence concepts mapped

(10th in my logic of competence series)

In this series of posts I’ve used many terms as a part of my attempts to communicate on these topics. Now I offer definitions for or notes about both the concepts I’ve used in the blog posts so far, and related ones drawn from a range of other work, and I link to posts where the ideas behind these concepts are discussed or used prominently. Then, towards the end of this post (placed there solely for readability) there is a map of how the concepts I’ve used relate to each other.

There are two main sources for borrowed definitions: first, the European Qualifications Framework (EQF); and second, the European Standard currently in the process of being published, EN 15981, “European Learner Mobility Achievement Information”, and its published precursor, CEN Workshop Agreement CWA 16133. While I had nothing to do with the creation of the EQF, I am a joint author of CWA 16133 and EN 15981.

Definitions and notes

term: definition and notes

ability (1): something that a person is able to do
(Abilities cover both skills and competences, and are normally expressible in the form of a clause starting with an active verb. EQF uses the word “ability” in both definitions. Many learning outcomes are also abilities.)

assessing body: organisation that assesses or evaluates the actions or products of learners that indicate their knowledge, skill, competence, or any expected learning outcome [CWA 16133]

assessment process: process of applying an assessment specification to a specific learner at a specific time or over a specific time interval [CWA 16133]

assessment result (5): recorded result of an assessment process [EN 15981]

assessment result pattern: People most often look for patterns in assessment results, like “over 70%” or “rated at least as adequate”, rather than for specific results themselves: not many people are interested in whether someone has scored exactly 75%. This concept represents the idea of what people are looking for in terms of assessment results.

assessment specification: description of methods used to evaluate learners’ achievement of expected learning outcomes [CWA 16133]
(This covers all the documentation, or the implicit understanding, that defines an assessment process.)

awarding body: organisation that awards credit or qualifications [EN 15981]

common contextual term (3): In any domain, or any context, there are concepts (at various levels of abstraction) that are shared by the people in that domain, and that serve as a vocabulary. It is important that the terms used within a domain for the related frameworks, standards, ability definitions, criteria and conditions are consistent in their meaning. This box indicates the need for these concepts to be common, and that terms should not be defined differently for different purposes within a domain.

criterion or condition of performance or assessment (5): (see below)

educational level: one of a set of terms, properly defined within a framework or scheme, applied to an entity in order to group it together with other entities relevant to the same stage of education [EN 15981]

effect, product, material evidence: material results of a person’s activity
(If something material endures, it can be used as evidence. If there is nothing enduring, the original evidence needs to be observed by witnesses, after which the witness statements substitute for the evidence.)

employer: agent employing an individual

employer activity: actions of the employer

framework or occupational standard (3): description of an occupational or industry area, conceivably including or related to job profiles, occupational standards, occupational levels or grades, competence requirements, contexts, tools, techniques or equipment within the industry

generic work role: what is signified by an appropriate simple phrase appearing in a job advertisement, job specification, or occupational standard

industry sector (4): system of employers, employees and jobs working in related areas that share some of: common concepts and terminology; contexts; a framework or standards; or job requirements

job description or requirement (1): expression used to describe what abilities are required to perform a particular job or undertake a particular role

knowledge / understanding: outcome of the assimilation of information through learning [EQF]
(Knowledge is the body of facts, principles, theories and practices that is related to a field of work or study. In the context of the European Qualifications Framework, knowledge is described as theoretical and/or factual.)

level (4): educational level (q.v.) or occupational level (q.v.)

material and social reality: This means all of the common objective world, whether described scientifically, or according to social convention, or in any other way.

occupational level (4): one of a set of terms, properly defined within an occupational framework, associated with criteria that distinguish different stages of development within an area of competence
(This is often related to responsibility and autonomy, as with the EQF concept of competence. There may be some correlation or similarity between the criteria distinguishing the same level in different competence areas.)

person as agent: This represents the active, conscious, rational aspect of the individual.

personal activity: set or sequence of actions by a person, intended or taken as a whole
(An activity may be focused on the performance of a task, or may be identified by location, time, or context. Activities may require abilities.)

personal claim (1): statement that an individual is able to do specified things

practiced skill: ability to apply knowledge and use know-how to complete tasks and solve problems [EQF]
(In the context of the European Qualifications Framework, skills are described as cognitive (involving the use of logical, intuitive and creative thinking) or practical (involving manual dexterity and the use of methods, materials, tools and instruments).)

qualification: status awarded to or conferred on a learner
(Many formal learning opportunities are designed to prepare learners for the assessment that may lead to an awarding body awarding them a qualification.) [latest draft of MLO: prEN 15982]

record of experience or practice (3): (This refers to any record or reflection about things done, but particularly in this context about tasks undertaken.)

task: specification for learner activity, including any constraints, performance criteria or completion criteria
(Performance of a task may be assessed or evaluated. Specified tasks are usually part of job descriptions.)

Criteria and conditions

One particular area that is harder than most to understand is represented by the box called "criterion or condition of performance or assessment" — and this is evidently fairly central to the map below, being the most connected box, and directly connected to the concepts which I originally proposed as logically basic: personal claims may be about meeting these conditions or criteria; job descriptions or requirements may have them included.

Assessment and performance criteria and conditions as general terms are fairly easy to understand in themselves. For assessment, they specify either the conditions under which the assessment takes place, or the criteria by which the assessment is measured. For performance, conditions in effect specify the task that is to be undertaken, while criteria specify what counts as successful performance.

What is less easy to see is the dividing line between these and the ability concepts and definitions themselves, and perhaps this is due to the same fact that we have reckoned with earlier — that how much is abstracted in an ability concept or definition is essentially arbitrary. One can easily read, or imagine, definitions of ability that include conditions and performance criteria; but some do not.

For the purposes of the concept map below, perhaps the best way of understanding this concept is to think of it as containing all the conditions or criteria that are not specified by the ability concept or definition itself; recognising that the boundary line is arbitrary.

To make common sense and to be usable, conditions and criteria have to be grounded in material or social reality — they have to be based on things that are commonly taken to be observable, rather than being based on theoretical constructs.

Concept map

The following diagram maps out several of the ways in which the concepts above can be understood as relating to one another. Note that the language is deliberately neutral: for instance, the verbs are all in the present tense. However, many of these relationships are in fact tentative or possible rather than definite, and each may hold between one or many instances.

Map of related competence concepts

The diagram is a concept map constructed with CmapTools, and includes various other concepts that I haven’t discussed explicitly, but on which I have suggested definitions or notes above. I reckoned that these other concepts might help explain how it all fits together. As always with these large diagrams, a few words of caution are in order.

  • This is of course only a small selection of what could be represented.
  • It is from a particular point of view, and cannot be perfect.
  • Such a map is best looked at a little at a time. Focus on one thing of interest, and follow through the connections from that.

I hope that the definitions and the concept map are of interest and of use. What the map does not clarify sufficiently is the detailed structure and relationships of ability concepts and structures that contain several of them. This will follow later, but before that, I will review the requirements I have collected for implementation.

Other cross-structure relationships

(9th in my logic of competence series)

My previous post covered how to represent common competence features in different structures, typically where the structures share context. But what about when two structures come from quite different starting points? Equivalences are harder to identify, but it may still be useful to document other relationships.

My earlier post on the basic structures, taken separately, contrasted the UK LANTRA NOSs with the QAA‘s Subject Benchmark Statement in the same area. The ways in which these are put together are quite different, and the language used is far from the same.

But there may be a good case for relating these two. What would happen if someone who has a qualification based on NOSs wanted to give evidence that they have attained Subject Benchmarks? Or, more likely, what if someone who has a vocational qualification in, say, agriculture wants to select modules from a degree course in agriculture, where the intended learning outcomes of the university’s degree course refer to the appropriate Subject Benchmark Statement? Even if there are no equivalences to note (as discussed in the previous post), we may see other useful relationships, such as when something in one structure is clearly part of something in another structure, or when two things are not equivalent but are meaningfully related. Let’s see what we can find for the (not atypical) examples we have been looking at.

Starting hopefully on familiar ground, let’s look at the generic skills related to the LANTRA unit CU5 that I’ve mentioned before. Element CU5.1, or unit CU5 in the 2010 Veterinary NOSs, is called “Maintain and develop personal performance”, and this seems related to the Benchmark’s “Self-management and professional development skills”. They appear not to be equivalent, so we aren’t justified in creating a skos:exactMatch or skos:closeMatch relationship between those two structures, but we could perhaps use skos:relatedMatch (another SKOS Mapping Property) to indicate that there is a meaningful relationship, even if not precisely specified. This might then be a helpful pointer to people about where to start looking for similar skill definitions, when comparing the two structures.

The Benchmark seems to be generally wider than the NOS unit, and perhaps this would be expected, given that graduate level skills in agriculture should cover something that vocational skills do not. Here, “moral and ethical issues” and “professional codes of conduct” are not covered in the NOSs.

Perhaps the closest correspondence can be seen with the Benchmark’s “targets for personal, career and academic development”, prefaced at “threshold” level by “identify…”, “typical” level by “identify and work towards…” and “excellent” level by “identify and work towards ambitious…”. In the NOS, the individual must be able to: “agree personal performance targets with the appropriate person”; “agree your development needs and methods of meeting these needs with the appropriate person”; “develop your personal performance according to your agreed targets, development needs and organisational requirements”; and “review personal performance with the appropriate person at suitable intervals”. They must also know and understand (among other things) “how to determine and agree development needs and personal targets”.
Personally, I’m not sure whether anything deserves a skos:closeMatch property — probably what we would need to do would be to get the relevant people together to discuss the kinds of behaviour covered, and see if they actually agree or not on whether there was any practical equivalence worthy of a skos:closeMatch.
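The skos:relatedMatch proposal above can be sketched as a single triple. Since no official URIs exist for either structure, the identifiers below are hypothetical, and plain Python tuples stand in for a real RDF store.

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"

# Hypothetical identifiers; no official URIs have yet been assigned
# to either the LANTRA NOS element or the Benchmark skill.
LANTRA_CU5_1 = "http://example.org/lantra/CU5.1"
BENCHMARK_SELF_MGMT = "http://example.org/qaa/agriculture/self-management"

mappings = [(LANTRA_CU5_1, SKOS + "relatedMatch", BENCHMARK_SELF_MGMT)]

def related(concept, triples):
    """Concepts linked to `concept` by skos:relatedMatch, checked in
    both directions, since skos:relatedMatch is a symmetric property."""
    out = []
    for s, p, o in triples:
        if p == SKOS + "relatedMatch":
            if s == concept:
                out.append(o)
            elif o == concept:
                out.append(s)
    return out
```

A comparison tool could then surface `related(...)` results as the "helpful pointer" suggested above, flagging where to start looking without claiming equivalence.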

There is also a definite relationship between the Benchmark’s “Interpersonal and teamwork skills” and the NOS’s “Establish and maintain working relationships with others”. Again, it is difficult to identify any very clear relationships between the component parts of these, but despite this lack of correspondence at fine granularity, it seems to me that the five ability points from the NOS are more than covered by the five points appearing at the “typical” level of the Benchmark. There are two other SKOS Mapping Properties that might help us here: skos:broadMatch and skos:narrowMatch. These correspond to skos:broader and skos:narrower, but applied across different structures, rather than within one structure. Thus we could potentially represent that LANTRA CU5A (2010) has a “skos:broadMatch” in the Benchmark’s Interpersonal and teamwork skills, “typical” level. Conversely, that “typical” Benchmark component has a “skos:narrowMatch” in LANTRA’s CU5A.
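Because skos:broadMatch and skos:narrowMatch are inverses of each other, only one direction ever needs to be asserted; the other can be derived. A minimal sketch, again with hypothetical URIs for the two components discussed above:

```python
SKOS = "http://www.w3.org/2004/02/skos/core#"

def inverse_mappings(triples):
    """skos:broadMatch and skos:narrowMatch are inverse properties, so
    each asserted mapping licenses the reverse one; derive that set."""
    inverse = {SKOS + "broadMatch": SKOS + "narrowMatch",
               SKOS + "narrowMatch": SKOS + "broadMatch"}
    return [(o, inverse[p], s) for (s, p, o) in triples if p in inverse]

# Hypothetical URIs for the NOS unit and the Benchmark component.
asserted = [("http://example.org/lantra/CU5A",
             SKOS + "broadMatch",
             "http://example.org/qaa/agriculture/teamwork-typical")]

inverse_mappings(asserted)
# one derived triple: the Benchmark component narrowMatch LANTRA CU5A
```

This is exactly the broader/narrower logic of a single SKOS scheme, just applied across two independent structures.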

On the subject-specific end, again there are plenty of areas where you can see some connection, but it is hard to see very clear, distinct relationships. As you might expect, there is a tendency for the NOSs to deal with specific skills, while the Benchmark deals in more general knowledge and understanding. The horticultural PH16 NOS unit is titled “Respond to legislation affecting consumer rights”, while the Benchmark has various “subject-specific knowledge and understanding” to do with “social, economic, legal and technological principles underlying the business management of farm or horticultural enterprises”. Probably, people meeting this part of the Benchmark standard at a good enough level will have skills that include that unit of the NOS, so we could in theory note a skos:broadMatch relationship between the NOS unit and that part of the Benchmark. But we could only do that (for any area) if we had URI identifiers available to mark the relevant sections unambiguously, and at present there are few if any competence structures where URIs have been officially assigned to the parts of the structure.

It seems unlikely that an agriculture graduate would be wanting accreditation of a LANTRA NOS unit, but if someone did, supporting systems could potentially make use of these relationships represented as SKOS Mapping Properties. More likely, someone who has covered the LANTRA NOS would be able to save a lot of time in putting together a shortened agriculture degree programme if all the skos:broadMatch relationships were documented, as it would be relatively easy to design a tool that allows efficient comparison of the relevant documentation, as a support to deciding whether a particular module at degree level needs to be taken, or not. This seems likely to be a similar process to Accreditation of Prior Learning (APL) in which the university accredits previous attainment in terms of their degree programme. It could also be adapted to APEL (E = “Experiential”) if the individual brought along a portfolio of evidence for attaining relevant NOSs. These processes are important in the likely future world where tailoring of degree courses becomes more common.
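The APL-support tool imagined above could be sketched as a simple coverage check over documented broadMatch pairs. Everything here is hypothetical illustration: the function name, the abbreviated identifiers, and the data.

```python
def modules_possibly_covered(broad_matches, attained_units, module_outcomes):
    """broad_matches: (NOS unit, degree outcome) pairs, each meaning the
    unit has a skos:broadMatch in the outcome. Return the modules whose
    every outcome has at least one attained unit mapped beneath it --
    candidates for an APL/APEL discussion, not an automatic exemption.
    """
    covered = {outcome for unit, outcome in broad_matches
               if unit in attained_units}
    return [module for module, outcomes in module_outcomes.items()
            if outcomes and all(o in covered for o in outcomes)]

# Hypothetical data, with abbreviated identifiers standing in for URIs.
matches = [("lantra:CU5A", "qaa:teamwork"), ("lantra:PH16", "qaa:legal")]
attained = {"lantra:CU5A", "lantra:PH16"}
modules = {"AG101": ["qaa:teamwork"],
           "AG102": ["qaa:legal", "qaa:economics"]}

modules_possibly_covered(matches, attained, modules)  # ["AG101"]
```

The output is deliberately a shortlist for human judgement, matching the post's point that the final APL decision rests with the university, not the tool.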

It looks like I have finished the coverage of the essential logical features of competence structures that I believe could usefully be incorporated in an interoperability specification. To repeat a point I have inserted in the introduction to this series, I would be delighted to discuss any of these posts one-to-one with interested people. It remains to bring all these points together in a way that is easier to follow, through the judicious use of diagrams, to discuss other emergent issues, and to talk about how we could work towards the practical implementation of such competence structures. The first diagram offered is a concept map, together with definitions.