The logic of competence assessability

(17th in my logic of competence series)

The discussion of NOS in the previous post clearly raised the question of assessability. In fact, assessment has been on the agenda right from the start of this series: claims and requirements concern whether someone is “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.

“Assessability” isn’t a common word. I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn’t necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.

The contrast between outcomes and behaviours / procedures

One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.

How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written in a related standard. Moreover, I think this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “means” and “ends” that crops up as a philosophical topic. To illustrate it, I’ll give two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.

You could write outcome standards for a pop songwriter in terms of song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is clearer is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that offer e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators, e.g. in the Guardian, are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”

The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)

Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don’t seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)

However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.

What about educational parallels, given that most school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.

Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics; a toy sketch follows the list below.

  1. Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts, which are not standardised. What may be specified are the incidental constraints, or what to avoid.
  2. Assessment on procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to have little variability in component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, captains of ships.
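
To make this contrast concrete, here is a toy sketch in Python. Everything in it (the function names, the data, the threshold) is invented for illustration, not drawn from any real assessment scheme; it simply shows that outcome-focused assessment inspects only the result, while procedural assessment checks conformance step by step.

```python
# Toy sketch only: two contrasting styles of assessment.

def assess_outcome(result: float, target: float) -> bool:
    """Outcome-focused: only the end result counts; the means may vary."""
    return result >= target

def assess_procedure(steps_taken: list[str], prescribed: list[str]) -> bool:
    """Procedure-focused: binary conformance to a prescribed sequence."""
    return steps_taken == prescribed

# A songwriter would be assessed on the outcome, however it was achieved...
print(assess_outcome(result=120_000, target=100_000))  # True

# ...while a pilot would be assessed on following the procedure exactly.
checklist = ["pre-flight checks", "obtain clearance", "take-off roll"]
print(assess_procedure(checklist, checklist))  # True
```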

Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone’s ends may be another’s means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.

Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn’t. It’s all too clear when a passenger aeroplane crashes.
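
As a toy illustration of how a quantitative outcome invites people to define their own levels, consider the sketch below; the thresholds and labels are invented, not taken from any real grading scheme.

```python
# Toy sketch: grading a quantitative outcome into invented levels.
def songwriter_level(songs_sold: int) -> str:
    if songs_sold >= 1_000_000:
        return "very successful"
    if songs_sold >= 100_000:
        return "fairly successful"
    if songs_sold >= 10_000:
        return "marginally successful"
    return "not yet successful"

print(songwriter_level(250_000))  # fairly successful
```

Someone else could draw the boundaries elsewhere, which is exactly the point: the quantity is assessable, but the levels are a matter of convention.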

But here’s a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved”, I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kinds of competence?

The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to represent. But maybe it’s no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. On that reading, it would make little sense for a NOS to be subject to different levels, as that would imply that the level of competence for a particular role is unknown — and in that case it wouldn’t be a standard.

Assessing less assessable concepts

Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning), like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, or has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact), or in terms of a set of abilities that you consider make up the particular style of management you are interested in.

These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and about the kinds of skills and competences we have practised, which might be useful in work or other situations. It looks to me as though they can be made more assessable in any of a few ways.

  1. Detailing assessable component parts of the concept, in the manner of NOSs.
  2. Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
  3. Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
  4. Using a generic level framework to supply assessable criteria to add to the concept.

Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don’t have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.
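
To illustrate how these options might fit together in a single representation, here is a minimal, hypothetical sketch in Python. All the class and field names are my own invention, not drawn from any NOS or other competence standard; the point is only that the four options are not mutually exclusive.

```python
# Hypothetical data model (names invented, not from any standard)
# for the four ways of adding assessability to a vague concept.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Level:
    name: str            # e.g. "basic", "proficient"
    criteria: list[str]  # assessable detail or criteria for this level

@dataclass
class CompetenceConcept:
    title: str  # e.g. "communication skills"
    # Option 1: assessable component parts, in the manner of NOSs.
    components: list["CompetenceConcept"] = field(default_factory=list)
    # Option 2: levels, each defined with more assessable detail.
    levels: list[Level] = field(default_factory=list)
    # Option 3: variants, each assessable or broken down further.
    variants: list["CompetenceConcept"] = field(default_factory=list)
    # Option 4: a reference to a generic level framework.
    generic_framework: Optional[str] = None
```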

[added 2011-09-04] Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.

Assessability and values in everyday life

Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with them and use them, but what about other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But what counts as a mistake depends on one’s values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably; others learn more personal and professional integrity. If someone’s values agree with mine, I am more likely to find them pleasant to deal with.

There’s a long discussion here, which I won’t go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client’s values are, and to go along with those to the extent that they are compatible with one’s own values. Where they aren’t, a skill in turning away work needs to be exercised in order to maintain personal integrity.

Conclusions

It’s all clearly a complex topic, more complex indeed than I had reckoned back last November. But I’d like to summarise what I take forward from this consideration of assessability.

  1. Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
  2. Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
  3. Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to a prescribed pattern, but may not imply outcomes (though constraints can sometimes be formulated as avoidance outcomes).
  4. In real life we want to allow realistic competence structures with any of these features.

In the next post, I’ll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will also leave us clearer about how to represent the less precise, non-assessable concepts.

2 thoughts on “The logic of competence assessability”

  1. “it is clear that there are better and worse pop songwriters, and better and worse pilots.”

    The latter yes, the former may be an aesthetic judgement.

    I think your selection of poetry and mathematics also exhibits this problem. You can make a case for the assessment of both poetry and mathematics in some technical way. However, ‘good poetry’ and even ‘good mathematics’ (which is a creative, not a procedural subject at high academic levels) may be more a matter of personal preference, or, if you like, judgement using experience and a range of probably undefinable human qualities.

    We have this in the game design area all the time. I may design a game that hits all the appropriate buttons for a ‘good game’, in other words, it would meet all the competence criteria if we had them, but it may be a rubbish game in the real world.

    If you’re dealing with a domain in which “nobody has a bloody clue”, then competence in the classic sense may be irrelevant.

    I think that in many job roles there are ‘emergent properties’ that are at the heart of the role and therefore cannot be teased out by a basically reductionist approach. Airline pilot may be one of these. While you need to be able to perform your procedural role in day-to-day circumstances (‘means’), the really vital aspect is not to kill your passengers in a crisis (‘ends’). This means that much of ‘airline piloting’ is playing a part in a system (involving teams, not just individual competences) to reduce the likelihood of killing the passengers. A pilot is therefore assessed on professional judgement, not just how well he or she performs the procedural role. For example, I doubt that the US pilot who landed on the Hudson River was strictly following a set procedure at the time.

    I think what I’m saying here is that many job roles are not easily susceptible to traditional competence statements.

  2. Thanks, Alan!

    Yes, with pop songs, I should have said “more or less successful” and specified that one can measure that in terms of sales. Certainly what anyone considers a “good” song is a matter of taste, though it would be strange to have a definition of a good pop songwriter whose songs were not popular, on some measure!

    You say “personal preference, or, if you like, judgement…” but there is an important distinction which should not be lost. Some judgements have more inter-rater reliability, others less. Where there is good inter-rater reliability, it is reasonable to set standards — indeed, many NVQs rely heavily on this.

    Similarly your next comment. One can have a competence where no one is sure exactly how it is carried out, but the outcomes are clear. I think it is quite reasonable to talk of competence in those cases, and implicitly allow for variation between individuals in how the outcome is achieved. Of course, if there is no inter-rater reliability in judgement of the outcomes, then I agree it would be futile to try to agree a competence standard.

    The Hudson River example was in my mind, too! That’s why I wrote “Where the core is procedural conformance, handling unexpected problems may also feature”. The Hudson River pilot was, it seems to me, showing exceptional competence at handling an unexpected emergency.

    We agree that there are limits to the applicability of competence statements, but we could be a little clearer about the boundaries.