(3rd in my logic of competence series)
I have suggested that the natural way of identifying competence concepts relates to the likely correlation of “the ability to do what is required” across the different tasks and situations in which similar competence may be called for. Having identified an area of competence in this way, how could it best be analysed and structured?
First, we should make the case that analysis is indeed needed. Without analysis of competence concepts, we would have to assume that going through any relevant education, training or apprenticeship, leading to recognition or a relevant qualification, gives people everything they need for competence in the whole area. If this were true, distinguishing between, say, candidates for a job would be done not on the basis of an analysis of their competence, but on the basis of personal attributes, reputation or recommendation. While this is indeed how people tend to handle getting a tradesperson to do a private job, it seems unlikely to be appropriate for specialist employees. Thus, for example, many IT employers do not want just “a programmer”, but one who has experience or competence in particular languages and application areas.
On the other hand, it would not be much use to recruit only people who had experience of exactly the tasks or roles required. For a new role, there will naturally be no one with that exact prior experience. And equally obviously, people need to develop professionally, gaining new skills. So we need ways of measuring and comparing ability that go beyond time served on the job, which is in any case not a reliable indicator of competence: people learn from experience at different rates, and learn different things, even from the same experience. This all points to the need to analyse competence, but how?
We should start by recognising that there are at present no universally accepted rules for how to analyse competence concepts, or for what their constituent parts should look like. Instead of imagining some ideal a priori analytical scheme, it is useful to start by looking at examples of how competence has been analysed in practical situations. First, back to horticulture…
The relevant source materials I have to hand happen to be the UK National Occupational Standards (NOSs) produced by LANTRA (the UK’s Sector Skills Council for land-based and environmental industries). The “Production Horticulture” NOS has 16 “units” specific to production horticulture, such as “Set out and establish crops”, “Harvest and prepare intensive crops”, and “Identify and classify plants accurately using their botanical names”. Alongside these specialist units, there are 21 other units either borrowed from, or shared with, other NOSs, such as “Monitor and maintain health and safety”, “Receive, transmit and store information within the workplace”, and “Provide leadership for your team”. At this “unit” level, the analysis of what it takes to be good at production horticulture seems understandable and commonsensical. Most areas of expertise can be broken down in this way, to the kind of level where one sees individual roles, jobs or tasks that could in principle be allocated to different people. And there is often a logic to the analysis: to get crops, you have to prepare the ground, then plant, look after, and harvest the crops. That much is obvious to anyone; more detailed, less obvious analysis could be given by someone with relevant experience.
Even at this level of NOS units, there is some abstraction going on. LANTRA evidently chose not to create separate units or standards for growing carrots, cabbages and strawberries. Going back to the ideas on competence correlation, we infer that there is much in common between competence at growing carrots and competence at growing strawberries, even if there are also some differences. This may be where “knowledge” comes into play, and why occupational standards seem universally to list knowledge as well as skills. If someone is competent at growing carrots, then perhaps their knowledge of what differs between growing carrots and growing strawberries goes much of the way towards competence in growing strawberries. But how far? That is less clear.
Abstraction seems to be even more extensive at lower levels. To take an arbitrary example, the first, fairly ordinary unit in “Production Horticulture” is “Clear and prepare sites for planting crops”, subdivided into two elements, PH1.1 “Clear sites ready for planting crops” and PH1.2 “Prepare sites and make resources available for planting crops”. PH1.2 lists 6 things that people should be able to do, and 9 things that they should know. The second item in the list of things people need to be able to do is “place equipment and materials in the correct location ready for use”, which self-evidently requires knowledge of what the correct location is. The fifth item is to “keep accurate, legible and complete records”. This is supported by an explicit knowledge requirement, documented as “the records which are required and the purpose of such records”.
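To make the shape of this analysis concrete, here is a minimal sketch, in Python, of how such a unit might be represented as structured data. It is purely illustrative: the class and field names are my own, not part of any NOS specification, and the item lists are abbreviated to the PH1 items quoted above.

```python
from dataclasses import dataclass, field


@dataclass
class Element:
    """One element of a NOS unit: things to be able to do, and to know."""
    code: str
    title: str
    skills: list[str] = field(default_factory=list)      # "should be able to do"
    knowledge: list[str] = field(default_factory=list)   # "should know"


@dataclass
class Unit:
    """A NOS unit, subdivided into elements."""
    code: str
    title: str
    elements: list[Element] = field(default_factory=list)


# The PH1 example from the text, abbreviated.
ph1 = Unit(
    code="PH1",
    title="Clear and prepare sites for planting crops",
    elements=[
        Element("PH1.1", "Clear sites ready for planting crops"),
        Element(
            "PH1.2",
            "Prepare sites and make resources available for planting crops",
            skills=[
                "place equipment and materials in the correct location ready for use",
                "keep accurate, legible and complete records",
            ],
            knowledge=[
                "the records which are required and the purpose of such records",
            ],
        ),
    ],
)
```

Even this toy structure makes the point that follows: the skill and knowledge items are just opaque strings, with nothing marking “keep accurate, legible and complete records” as the same abstraction wherever it occurs.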
This is quite a substantial abstraction, as these examples could make equal sense in a very wide range of occupational standards. In each case, the exact nature of these abilities needs to be filled out with the relevant details from the particular area of application. But no formal structure is given for these abstractions, here or, as far as I know, in any occupational standard, and this leads to problems.
For example, there is no way of telling, from the standard documentation, the extent to which proving the ability to keep accurate records in one domain is evidence of the ability to keep accurate records in another domain; and indeed no way is provided to document views about the relationship between various record-keeping skills. When describing broad competences, this may be somewhat less of a problem, because when two skills or competences are analysed explicitly, one can at least compare their documented parts to arrive at some sense of the degree of similarity, and the degree to which competence in one might predict competence in the other. But at the narrowest, finest-grained level documented (in the case of NOSs, the analysis of a unit or element into items of skill and items of knowledge) it means that, though we can see the abstractions, it is not obvious how to use them; in particular, it is not clear how to represent them in information systems in a way that would allow them to be automatically compared, or otherwise managed.
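As a crude illustration of what comparing documented parts might look like, here is a sketch in the same illustrative vein. The two item sets are invented for the purpose, and since NOS items are free text, simple string equality stands in for the semantic matching that would really be required; that gap is precisely what a formal structure would need to fill.

```python
def similarity(items_a: set[str], items_b: set[str]) -> float:
    """Jaccard overlap between two sets of documented skill/knowledge items.

    With only informal free-text items, exact string matching is all
    that is available; a formal structure would let this be replaced by
    comparison of shared, explicitly identified abstractions.
    """
    if not items_a and not items_b:
        return 0.0
    return len(items_a & items_b) / len(items_a | items_b)


# Invented, abbreviated item sets for two hypothetical competences.
carrots = {
    "prepare sites for planting",
    "keep accurate, legible and complete records",
    "harvest root crops",
}
strawberries = {
    "prepare sites for planting",
    "keep accurate, legible and complete records",
    "harvest soft fruit",
}

print(similarity(carrots, strawberries))  # 0.5: two of four distinct items shared
```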
Much has been written, speculatively, about how competence descriptions and structures might effectively be used with information systems, for example as the common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other. But to make this effective in practice, we need to get to grips properly with these questions of abstraction, structure and representation, moving forward from the common-sense but informal abstractions and loose structures presently in use to a more formally structured, though still flexible and intuitive, approach.
The next two blog entries will explore two possible aspects of formalisation: first, level; and then other features often left out of competence definitions, including context and conditions.