The logic of National Occupational Standards

(16th in my logic of competence series)

I’ve mentioned NOSs (UK National Occupational Standards) many times in earlier posts in this series (3, 5, 6, 8, 9, 12, 14), but last week I was fortunate to visit a real SSC, Lantra — to talk to some very friendly and helpful people there and elsewhere, and to reflect further on the logic of NOSs.

One thing that became clear is that NOSs have specific uses, not exactly the same as some of the other competence-related concepts I’ve been writing about. Following this up, on the UKCES website I soon found the very helpful “Guide to Developing National Occupational Standards” (pdf) by Geoff Carroll and Trevor Boutall, written quite recently: March 2010. For brevity, I’ll refer to this as “the NOS Guide”.

The NOS Guide

I won’t review the whole NOS Guide, beyond saying that it is an invaluable guide to current thinking and practice around NOSs. But I will pick out a few things that are relevant: to my discussion of the logic of competence; to how to represent the particular features of NOS structures; and towards how we represent the kinds of competence-related structures that are not part of the NOS world.

The NOS Guide distinguishes occupational competence and skill. Its definitions aren’t watertight, but generally they are in keeping with the idea that a skill is something that is independent of its context, not necessarily in itself valuable, whereas an occupational competence in a “work function” involves applying skills (and knowledge). Occupational competence is “what it means to be competent in a work role” (page 7), and this seems close enough to my formulation “the ability to do what is required”, and to the corresponding EQF definitions. But this doesn’t help greatly in drawing a clear line between the two. What is considered a work function might depend not only on the particularities of the job itself, but also on the detail in which it has been analysed for defining a particular job role. In the end, while the distinction makes some sense, the dividing line still looks fairly arbitrary, which justifies my support for not making a distinction in representation. This seems confirmed also by the fact that, later, when the NOS Guide discusses Functional Analysis (more of which below), the competence/skill distinction is barely mentioned.

The NOS Guide advocates a common language for representing skill or occupational competence at any granularity, ideally involving one brief sentence, containing:

  1. at least one action verb;
  2. at least one object for the verb;
  3. optionally, an indication of context or conditions.

Some people (including M. David Merrill, and following him, Lester Gilbert) advocate detailed vocabularies for the component parts of this sentence. While one may doubt the practicality of ever compiling complete general vocabularies, perhaps we ought to allow at least for the possibility of representing verbs, objects and conditions distinctly, for any particular domain, represented in a domain ontology. If it were possible, this would help with:

  • ensuring consistency and comprehensibility;
  • search and cross-referencing;
  • revision.

But it makes sense not to make these structures mandatory, as most likely there are too many edge cases.
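To make the three-part sentence structure concrete, here is a minimal sketch of how an ability statement with distinct verbs, objects and optional conditions might be represented. The class and example values are my own illustrative choices, not drawn from any NOS specification, and the optional structure reflects the point above that these parts should not be mandatory in all cases.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AbilityStatement:
    """One short-sentence ability item: action verb(s), object(s), and
    optional context. All names and values are illustrative only."""
    verbs: List[str]               # at least one action verb, e.g. "maintain"
    objects: List[str]             # at least one object, e.g. "tools and equipment"
    context: Optional[str] = None  # optional conditions, e.g. "in line with workplace procedures"

    def __post_init__(self):
        if not self.verbs or not self.objects:
            raise ValueError("an ability statement needs at least one verb and one object")

    def as_sentence(self) -> str:
        """Render the parts back into the one brief sentence the Guide advocates."""
        core = f"{' and '.join(self.verbs)} {' and '.join(self.objects)}"
        return f"{core} {self.context}" if self.context else core
```

Keeping the parts separate, rather than storing only the finished sentence, is what would make the consistency checks, search and revision mentioned above feasible.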

The whole of Section 2 of the NOS Guide is devoted to what the authors refer to as “Functional Analysis”. This involves identifying a “Key Purpose”, the “Main Functions” that need to happen to achieve the Key Purpose, and subordinate to those, the possible NOSs that set out what needs to happen to achieve each main function. (What is referred to in the NOS Guide as “a NOS” has also previously been called a “Unit”, and for clarity I’ll refer to them as “NOS units”.) Each NOS unit in turn contains performance criteria, and necessary supporting “knowledge and understanding”. However, these layers are not rigid. Sometimes, a wide-reaching purpose may be analysed by more than one layer of functions, and sometimes a NOS unit is divided into elements.

It makes sense not to attempt to make absolute distinctions between the different layers. (See also my post #14.) For the purposes of representation, this implies that each competence concept definition is represented in the same way, whichever layer it might be seen as belonging to; layers are related through “broader” and “narrower” relationships between the competence concepts, but different bodies may distinguish different layers. In eCOTOOL particularly, I’ve come to call competence concept definitions, in any layer, “ability items” for short, and I’ll use this terminology from here.
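As a sketch of that idea, ability items in any layer can share a single representation and be linked only by “broader”/“narrower” relations (in the style of SKOS), rather than being typed by a fixed layer. The identifiers below are hypothetical.

```python
# Hypothetical identifiers: each ability item points to its broader item,
# whatever layer either is seen as belonging to.
broader = {
    "nos:unit-17": "nos:main-function-3",      # NOS unit -> main function
    "nos:main-function-3": "nos:key-purpose",  # main function -> key purpose
}

def ancestors(item: str, broader_map: dict) -> list:
    """Walk the broader chain from an ability item up to the key purpose."""
    chain = []
    while item in broader_map:
        item = broader_map[item]
        chain.append(item)
    return chain
```

Because the layering lives only in the relations, a body that wants an extra layer of functions, or elements within a unit, just adds more links without changing the representation.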

One particularly interesting section of the NOS Guide is its Section 2.9, where attention turns to the identification of NOS units themselves, as the component parts of the Main Functions. In view of the authority of this document, it is highly worthwhile studying what the Guide says about the nature of NOS units. Section 2.9 directly tackles the question of what size a NOS should be. Four relevant points are made, of which I’ll distinguish just two.

First, there is what we could call the criterion of individual activity. The Guide says: “NOS apply to the work of individuals. Each NOS should be written in such a way that it can be performed by an individual staff member.” I look at this both ways for complementary views. When two aspects of a role may reasonably and justifiably be performed separately by separate individuals, there should be separate NOS units. Conversely, when two aspects of a role are practically always performed by the same person, they naturally belong within the same NOS unit.

Second, I’ve put together manageability and distinctness. The Guide says that, if too large, the “size of the resulting NOS … could result in a document that is quite large and probably not well received by the employers or staff members who will be using them”, and also that it matters “whether or not things are seen as distinct activities which involve different skills and knowledge sets.” These seem to me both to be to do with fitting the size of the NOS unit to human expectations and requirements. In the end, however, the size of NOS units is a matter of good practice, not formal constraint.

Section 3 of the NOS Guide deals with using existing NOS units, and given the good sense of reuse, it seems right to discuss this before detailing how to create your own. The relationship between the standards one is creating and existing NOS units could well be represented formally. Existing NOS units may be

  • “imported” as is, with the permission of the originating body
  • “tailored”, that is modified slightly to suit the new context, but without any substantive change in what is covered (again, with permission)
  • used as the basis of a new NOS unit.

In the first two cases, the unit title remains the same; but in the third case, where the content changes, the unit title should change as well. Interestingly, there seems to be no formal way of stating that a new NOS unit is based on an existing one, but changed too much to be counted as “tailored”.

Section 4, on creating your own NOSs, is useful particularly from the point of view of formalising NOS structures. The “mandatory NOS components” are set out as:

  1. Unique Reference Number
  2. Title
  3. Overview
  4. Performance Criteria
  5. Knowledge and Understanding
  6. Technical Data

and I’ll briefly go over each of these here.

It would be so easy, in principle, to recast a Unique Reference Number as a URI! However, the UKCES has not yet mandated this, and no SSC seems to have taken it up either. (I’m hoping to persuade some.) If a URI were also given to the broader items (e.g. key purposes and main functions) then the road would be open to a “linked data” approach to representing the relationships between structural components.

Title is standard Dublin Core, while Overview maps reasonably to dcterms:description.
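Putting those two points together, here is a hedged sketch of what a NOS unit might look like with its Unique Reference Number recast as a URI and its Title and Overview mapped to the Dublin Core terms just mentioned. The URI scheme, reference numbers and field values are all invented for illustration.

```python
# Hypothetical sketch only: URI scheme and values are invented; the
# dcterms keys are the real Dublin Core terms noted above.
BASE = "http://example.org/nos/"

def uri_for(reference_number: str) -> str:
    """Turn a Unique Reference Number into a (hypothetical) URI."""
    return BASE + reference_number

nos_unit = {
    "@id": uri_for("ABC123"),
    "dcterms:title": "Maintain tools and equipment",
    "dcterms:description": "This standard covers the routine care of tools...",
    "broader": uri_for("MF3"),   # link up to the broader main function
}
```

Once every structural component has a URI like this, the broader/narrower relationships become ordinary linked-data statements that web tools can follow.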

Performance criteria may be seen as the finest granularity ability items represented in a NOS, and are strictly parts of NOS units. They have the same short sentence structure as both NOS units and broader functions and purposes. In principle, each performance criterion could also have its own URI. A performance criterion could then be treated like other ability items, and further analysed, explained or described elsewhere. An issue for NOSs is that performance criteria are not identified separately, and therefore there is no way within a NOS structure to indicate similarity or overlap between performance criteria appearing in different NOS units, whether or not the wording is the same. On the other hand, if NOS structures could give URIs to the performance criteria, they could be reused, for example to suggest that evidence within one NOS unit would also provide useful evidence within a different NOS unit.
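A small sketch of that evidence-reuse idea, with hypothetical identifiers: if performance criteria had their own URIs, one criterion could appear in several NOS units, and evidence against it would count towards each unit that includes it.

```python
# Hypothetical identifiers throughout: "pc:" for performance criteria,
# "nos:" for NOS units. One criterion may be shared by several units.
criteria_in_unit = {
    "nos:unit-A": ["pc:check-equipment", "pc:record-results"],
    "nos:unit-B": ["pc:check-equipment", "pc:report-faults"],
}

def units_supported_by(evidenced: set, criteria_in_unit: dict) -> list:
    """Return the NOS units for which any of the evidenced criteria are relevant."""
    return sorted(unit for unit, pcs in criteria_in_unit.items()
                  if any(pc in evidenced for pc in pcs))
```

Without shared identifiers, this cross-referencing has to be done by eye, comparing wording; with them, it becomes a simple lookup.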

Performance criteria within NOS units need to be valid across a sector. Thus they must not embody methods, etc., that are fine for one typical employer but wrong for another. They must also be practically assessable. These are reasons for avoiding evaluative adverbs, like the Guide’s example “promptly”, which may be evaluated differently in different contexts. If there are going to be contextual differences, they need to be more clearly signalled by referring e.g. to written guidance that forms part of the knowledge required.

Knowledge and understanding are clearly different from performance criteria. Items of knowledge are set out like performance criteria, but separately in their own section within a NOS unit. As hinted just above, factoring context-dependent knowledge out into explicit knowledge items often allows a generalised performance criterion to work in places where there would otherwise be no common approach to assessment.

In principle, knowledge can be assessed, but the methods of assessment differ from those of performance criteria. Action verbs such as “state”, “recall”, “explain”, “choose” (on the basis of knowledge) might be introduced, but perhaps are not absolutely essential, in that a knowledge item may be assessed on the basis of various behaviours. Knowledge is then treated (by eCOTOOL and others) as another kind of ability item, alongside performance criteria. The different kinds of ability item may be distinguished — for example following the EQF, as knowledge, skills, and competence — but there are several possible categorisations.

The NOS Guide gives the following technical data as mandatory:

  1. the name of the standards-setting organisation
  2. the version number
  3. the date of approval of the current version
  4. the planned date of future review
  5. the validity of the NOS: “current”; “under revision”; “legacy”
  6. the status of the NOS: “original”; “imported”; “tailored”
  7. where the status is imported or tailored, the name of the originating organisation and the Unique Reference Number of the original NOS.

These could very easily be incorporated into a metadata schema. For imported and tailored NOS units, a way of referring to the original could be specified, so that web-based tools could immediately jump to the original for comparison. The NOS Guide goes on to give more optional parts, each of which could be included in a metadata schema as optional.
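As a sketch of such a schema: the controlled values below come straight from the Guide’s list, but the class and field names are my own illustrative choices, not a published schema. The conditional rule for imported and tailored units (point 7) is what makes the jump-to-original comparison possible.

```python
from dataclasses import dataclass
from typing import Optional

# Controlled vocabularies taken from the NOS Guide's mandatory technical data.
VALIDITY = {"current", "under revision", "legacy"}
STATUS = {"original", "imported", "tailored"}

@dataclass
class NosTechnicalData:
    organisation: str                   # name of the standards-setting organisation
    version: str                        # version number
    approved: str                       # date of approval of the current version
    review_due: str                     # planned date of future review
    validity: str                       # one of VALIDITY
    status: str                         # one of STATUS
    original_org: Optional[str] = None  # required when imported or tailored
    original_urn: Optional[str] = None  # Unique Reference Number of the original

    def __post_init__(self):
        if self.validity not in VALIDITY:
            raise ValueError(f"validity must be one of {sorted(VALIDITY)}")
        if self.status not in STATUS:
            raise ValueError(f"status must be one of {sorted(STATUS)}")
        if self.status in {"imported", "tailored"} and not (
                self.original_org and self.original_urn):
            raise ValueError("imported/tailored NOS must cite the originating "
                             "organisation and the original Unique Reference Number")
```

The optional parts the Guide lists later would simply be further optional fields in the same structure.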

Issues emerging from the NOS Guide

One of the things that is stressed in the NOS Guide (e.g. page 32) is that the Functional Analysis should result in components (main functions, at least) that are both necessary and sufficient. That’s quite a demand — is it realistic, or could it be characterised as reductionist?

Optionality

The issue of optionality has been covered in the previous post in this series. Clearly, if NOS structures are to be necessary and sufficient, logically there can be no optionality. It seems that, practically, the NOS approach avoids optionality in two complementary ways. Some options are personal ways of doing things, at levels more finely grained than NOS units. Explicitly, NOS units should be written to be inclusive of the diversity of practice: they should not prescribe particular behaviours that represent only some people’s ways of doing things. Other options involve broader granularity than the NOS unit. The NOS Guide implies this in the discussion of tailoring. It may be that one body wants to create a NOS unit that is similar to an existing one. But if the “demand” of the new version NOS unit is not the same as the original, it is a new NOS unit, not a tailored version of the original one.

The NOS Guide does not offer any way of formally documenting the relationship between variant ways of achieving the same aim, or function (other than, perhaps, simple reference). This may lead to some inefficiencies down the line, when people recognise that achieving one NOS unit is really good evidence for reaching the standard of a related NOS unit, but there is no general and automatic way of documenting that or taking it into account. We should, I suggest, be aiming at an overall structure, and strategy, that documents as many relationships as we can reliably represent. This suggests allowing for optionality in an overall scheme, but leaving it out for NOSs.

Levels and assessability

The other big issue is levels. The very idea of level is somehow anathema to the NOS view. A person either has achieved a NOS, and is competent in the area, or has not yet achieved that NOS. There is no provision for grades of achievement. Compare this with the whole of the academic world, where people almost always give marks and grades, comparing and ranking people’s performance. The vocational world does have levels — think of the EQF levels, which are intended for the vocational as well as the academic world — but often in the vocational world a higher level is seen as the addition of other separate skills or occupational competences, not as improving levels of the same ones.

A related idea came to me while writing this post. NOSs rightly and properly emphasise the need to be assessable — to have an effective standard, you must be able to tell if someone has reached the standard or not — though the assessment method doesn’t have to be specified in advance. But there are many vaguer competence-related concepts. Take “communication skills” as a common example. It is impossible to assess whether someone has communication skills in general, without giving a specification of just what skills are meant. Every wakeful person has some ability to communicate! But we frequently see cases where that kind of unassessably vague concept is used as a heading around which to gather evidence. It does make sense to ask a person about evidence for their “communication skills”, or to describe them, and then perhaps to assess whether these are adequate for a particular job or role.

But then, thinking about it, there is a correspondence here. A concept that is too vague to assess is just the kind of concept for which one might define (assessable) levels. And if a concept has various levels, it follows that whether a person has the (unlevelled) concept cannot be assessed in the binary way of “competent” and “not yet competent”. This explains why the NOS approach does not have levels, as levels would imply a concept that cannot be assessed in the required binary way. Rather than call unlevelled concepts “vague”, we could just call them something like “not properly assessable”, implying the need to add extra detail before the concept becomes assessable. That extra detail could be a whole level scheme, or simply a specification of a single-level standard (i.e. one that is simply reached or not yet reached).

In conclusion, I cannot see a problem with specifying a representation for skill and competence structures that includes non-assessable concepts, along with levels as one way of detailing them. The “profile” for NOS use can still explicitly exclude them, if that is the preferred way forward.

Update 2011-08-22 and later

After talking further with Geoff Carroll I’ve clarified above that NOSs are to do specifically with occupational competence rather than, e.g., learning competence. And having been pushed into this particular can of worms, I’d better say more about assessability to get a clear run-up to levels.