Developing a new approach to competence representation

InLOC is a European project organised to come up with a good way of communicating structures or frameworks of competence, learning outcomes etc. We’ve now produced our interim reports for consultation: the Information Model and the Guidelines. We welcome feedback from everyone, to ensure this becomes genuinely useful and not just another academic exercise.

I’ve not written any blog posts for a few weeks because so much of my energy has been going into InLOC, and for good reason. It has been a really exciting time working with the team to develop a better approach to representing these things. Many of us have been pushing in this direction for years, without ever quite getting there. Several projects have come close, including, last year, InteropAbility (JISC page; project wiki) and eCOTOOL (project web site; my Competence Model page) — I’ve blogged about these before, and we have built on ideas from both of them, as well as from several other sources: you may be surprised at the range and variety of “stakeholders” in this area that we have assembled within InLOC. Doing the thinking for the Logic of Competence series was of course useful background, but it did not quite get there either.

What I want to announce now is that we are looking for the widest possible feedback as further input to the project. It’s all too easy for people like us, familiar with interoperability specifications, simply to cook up a new one. It is far more of a challenge, as well as hugely more worthwhile and satisfying, to create something genuinely useful, which people will actually use. We have been looking at other groups’ work for several months now, and discussing the rich, varied, and sometimes confusing ideas going around the community. Now that we have made our own initial synthesis, and handed in the “interim” draft agreements, it is an excellent time to carry forward the wide and deep consultation process. We want to discuss with people whether our InLOC format will work for them; whether they can adopt, use or recommend it (or whatever their role is with respect to specifications); or what improvements need to be made so that they are most likely to take it on for real.

By the end of November we are planning to have completed this intense consultation, and we hope to end up with genuinely useful results.

There are several features of this model which may be innovative (or seem so until someone points out somewhere they have been done before!). A rough illustrative sketch of how they might fit together follows the list.

  1. Relationships aren’t just direct as in RDF — there is a separate class to contain the relationship information. This allows extra information, including a number, vital for defining levels.
  2. We distinguish the normal simple properties, with literal objects, which are treated as integral parts of whatever it is (including: identifier, title, description, dates, etc.), from what could be called “compound properties”. Compound properties, which have more than one part to their range, are a little like relationships, and we give them a special property class, allowing labels, and a number (like in relationships).
  3. We have arranged for the logical structure, including the relationships and compound properties, to be largely independent of the representation structure. This allows several variant approaches to structuring, including tree structures, flat structures, or Atom-like structures.
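
To make these three features a little more concrete, here is a rough, purely illustrative sketch in Python of how they might hang together. The class and property names are my own shorthand for this post, not the InLOC information model itself, which may well differ in detail.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LOCDefinition:
    """A learning outcome or competence concept, with only simple literal properties."""
    identifier: str
    title: str
    description: str = ""

@dataclass
class Relationship:
    """Feature 1: a reified relationship, able to carry extra information
    such as a label and a number (vital for levels), unlike a bare triple."""
    subject_id: str
    relation_type: str            # e.g. "hasPart" or "hasLevel" (illustrative values)
    object_id: str
    label: Optional[str] = None
    number: Optional[int] = None  # e.g. the rank of a level

@dataclass
class CompoundProperty:
    """Feature 2: a property whose value has more than one part, again
    allowing a label and a number, unlike a simple literal property."""
    subject_id: str
    property_name: str
    parts: dict = field(default_factory=dict)
    label: Optional[str] = None
    number: Optional[int] = None

@dataclass
class LOCStructure:
    """Feature 3: a framework holding definitions, relationships and compound
    properties together, so the logical structure stays independent of whether
    it is serialised as a tree, a flat list, or an Atom-like feed."""
    identifier: str
    title: str
    definitions: List[LOCDefinition] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)
    compound_properties: List[CompoundProperty] = field(default_factory=list)
```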

The outcome is something slightly reminiscent both of Atom itself and of Topic Maps. Neither is much like RDF, which uses the simplest possible building blocks, but as a result needs harder-to-grasp constructs like blank nodes. Being hard to grasp leads people to try different ways of doing things, possibly losing interoperability on the way. Both Atom and Topic Maps, in contrast, add a little more general-purpose structure, which makes quite a lot of intuitive sense in both cases, and both have been used widely, apparently with little troublesome divergence.

Are we therefore, in InLOC, trying to feel our way towards a general-purpose way of representing substantial hierarchical structures of independently existing units, in a way that makes more intuitive sense than elementary approaches to representing hierarchies? General taxonomies simply try to represent the relationships between concepts, whereas in InLOC we are dealing with a field where, for many years, people have recognised that the structure is an important entity in its own right — so much so that it has seemed hard to treat the components of existing structures (or “frameworks”) as independent and reusable.

So, see what you think, and please tell me, or one of the team, what you honestly think. And let’s discuss it. The relevant links are also available straight from the InLOC wiki home page. And if you are responsible for creating or maintaining structures of intended learning outcomes, skills, competences, competencies, etc., then you are more than welcome to try out our new approach, which we hope combines ease of understanding with the power to express just what you want to express in your “framework”. We hope you will be persuaded to use it “for real”, perhaps once we have made the improvements that you need.

We envisage a future when many ICT tools can use the same structures of learning outcomes and competences, saving effort, opening up interoperability, and greatly increasing the possibilities for services to build on top of each other. But you probably don’t need reminding of the value of those goals. We’re just trying to help along the way.

The logic of tourism as an analogy for competence

(20th in my logic of competence series.)

Modelling competence is too far removed from common experience to be intuitive. So I’ve been thinking of what analogy might help. How about the analogy of tourism? This may help particularly with understanding the duality between competence frameworks (like tourist itineraries) and competence concept definitions (like tourist destinations).

The analogy is helped by the fact that last week I was in Lisbon for the first time, at work (the CEN WS-LT and TC 353), but also more relevantly as a tourist. (If you don’t know Lisbon, think of equivalent examples from a place to visit that you know better.) I’ll start with the aspects of the analogy that seem to be most straightforward, and go on to more subtle features.

First things first, then: a tourist itinerary includes a list of destinations. This can be formalised as a guided tour, or left informal as a “things you should see” list given by a friend who has been there. A destination can be in any number of itineraries, or none. An itinerary has to include some destinations, but in principle there is no upper limit: it could be a very detailed itinerary that takes a year to acquaint a newcomer properly with the ins and outs of the city. Different itineraries for the same place may have more, or fewer, destinations within that place. They may or may not agree on the destinations included. If there were destinations included by a large majority of guides, another guide could select these as the “essential” Lisbon or wherever. In this case, perhaps that would include visiting the Belem tower; the Castle of St George; Sintra; experiencing Fado; sampling the local food, particularly fish dishes; and a ride on one of the funicular trams that climb the steep hills. Or maybe not, in each case. There again, you could debate whether Sintra should be included in a guide to Lisbon, or just mentioned as a day trip.

A small itinerary could be made for a single destination, if desired. Some guides may just point you to a museum or destination as a whole; others may give detailed suggestions for what you should see within that destination. A cursory guide might say that you should visit Sintra; a detailed one might say that you really must visit the Castle of the Moors in Sintra, as well as other particular places in Sintra. A very detailed guide might direct you to particular things to see in the Castle of the Moors itself.

It should be clear from the above discussion that a place to visit should not be confused with an itinerary for that place. Any real place has an unlimited number of possible itineraries for it. An itinerary for a city may include a museum; an itinerary for a museum may include a painting; there may sometimes even be guides to a painting that direct the viewer to particular features of that painting. The guide to the painting is not the painting; the guide to the museum is not the museum; the guide to the city is not the city.

There might also be guides that do not propose particular itineraries, but list many places you might go, and you select yourself. In these cases, some kind of categorisation might be used to help you select the places of interest to you. What period of history do they come from? Are they busy or quiet? What do they cost? How long do they take to visit? Or a guide with itineraries may also categorise attractions, and make them explicitly optional. Optionality might be particularly helpful in guided tours, so that people can leave out things of less interest.

If a set of guides covered several whole places, not just one, it might make comparisons across the different places. If you liked the Cathar castles in the South of France, you might like the Castle of the Moors in Sintra. Those who like stately homes, on the other hand, might be given other suggestions.

A guide to a destination may also contain more than an itinerary of included destinations within it. A guidebook may give historical or cultural background information, which goes beyond the description of the destinations. Guides may also propose a visit sequence, which is not inherent in the destinations.

The features I have described above are reasonably replicated in discussion of competence. A guide or itinerary corresponds to a competence framework; a destination corresponds to a competence concept. This is largely intended to throw further light on what I discussed in number 12 in this series, Representing the interplay between competence definitions and structures.

Differences

One difference is that tourist destinations have independent existence in the physical world, whereas competence concepts do not. It may therefore be easier to understand what is being referred to in a guide book, from a short description, than in a competence framework. Both guide book and competence framework may rely on context. When a guide book says “the entrance”, you know it means the entrance to the location you are reading about, or may be visiting.

Physical embodiment brings clarity and constraints. Smaller places may be located within larger places, and this is relatively clear. But it is less clear whether lesser competence concepts are part of greater competence concepts. What one can say (and this carries through from the tourism analogy) is that concepts are included in frameworks (or not), and that any concept may be detailed by (any number of) frameworks.

Competence frameworks and concepts are more dependent on the words used in description, and because a description necessarily chooses particular words, it is easy to confuse the concept with the framework if they use the same words. It is easy to use the words of a descriptive framework to describe a concept. It is not so common, though perfectly possible, to use the description of an itinerary as a description of a place. It is because of this greater dependence on words (compared with tourist guides) that it may be more necessary to clarify the context of a competence concept definition, in order to understand what it actually means.

Where the analogy with competence breaks down more seriously is that high stakes decisions rarely depend on exactly where someone has visited. But at a stretch of the imagination, they could: recruitment for a relief tour guide could depend on having visited all of a given set of destinations, and being able to answer questions about them. What high stakes promotes is the sense that a particular structure (as defined or adopted by the body controlling the high-stakes decisions) defines a particular competence concept. Despite that, I assert that the competence structure and the separate competence concept remain strictly separate kinds of thing.

Understanding the logic of competence through this analogy

The features of competence models that are illustrated here are these.

  • Competence frameworks or structures may include relevant competence concepts, as well as other material. (See № 12.)
  • Competence concept definitions may be detailed by a framework structure for that competence concept. Nevertheless the structure does not fully define the concept. (See № 12 and № 13.)
  • Competence frameworks may include optional competences (as well as necessary or mandatory ones). (See № 15 and № 7.)
  • Both frameworks and concepts may be categorised. (See also № 5.)
  • Frameworks may contain sub-frameworks (just as itineraries may contain sub-itineraries).
  • But frameworks don’t contain concepts in the same way: they just include them (or not).
  • A framework may be simply an unstructured list of defined concepts.

I hope that helps people to understand more of the logic of competence, and I hope it also helps InLOC colleagues come to consensus on the related matters.

More and less specificity in competence definitions

(19th in my logic of competence series.)

Descriptions of personal ability can serve either as claims, like “This is what I am good at …”, or as answers to questions like “What are you good at?” or “can you … ?” In conversations — whether informally, or formally as in a job interview — the claims, questions, and answers may be more or less specific. That is a necessary and natural feature of communication. It is the implications of this that I want to explore here, as they bear on my current work, in particular including the InLOC project.

This is a new theme in my logic of competence series. Since the previous post in that series, I have had to focus on completing the eCOTOOL competence model and managing the initial phases of InLOC, which left little time for following up earlier thinking. But there were ideas clearly evident in my last post in this series (representing level relationships), and now is the time for follow-up and development. The terms introduced there can be linked to this new idea of specificity. Simply: binarily assessable concepts are defined specifically enough for a yes/no judgement about a person’s ability; rankably assessable concepts have an intermediate degree of specificity, and are complemented by level definitions; while unorderly assessable concepts are less specifically defined, requiring more specificity to be properly assessable. (See that previous post for explanation of those terms.) The least specific competence-related concepts are not properly assessable at all, but serve as tags or headings.
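
As a compact way of keeping these distinctions in mind, here is a tiny illustrative sketch in Python. The names and the example classifications are mine, chosen only to restate the definitions above; they are not part of any specification.

```python
from enum import Enum

class Assessability(Enum):
    BINARY = "binarily assessable"      # specific enough for a yes/no judgement
    RANKABLE = "rankably assessable"    # intermediate specificity, complemented by level definitions
    UNORDERLY = "unorderly assessable"  # less specific, needing more detail to be properly assessable

# Illustrative examples only (my own, not taken from any framework):
examples = {
    "can calibrate pesticide application equipment": Assessability.BINARY,
    "spoken interaction in French": Assessability.RANKABLE,
    "understanding of the characteristics of earth materials": Assessability.UNORDERLY,
}
```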

As well as giving weight and depth to this idea of specificity in competence definitions, in this post I want to explore the connection between competence definitions and answering questions. I think this will help to explain the ideas, because it is relatively straightforward to understand that questions and answers can be more or less specific.

Since the previous post in the series, my terminology has shifted slightly. The goals of InLOC — Integrating Learning Outcomes and Competences — have made it plain that we need to deal equally with learning outcomes and with competence or ability concepts. So I include “learning outcomes” more liberally, always meaning intended learning outcomes.

Job interviews

Imagine you are interviewing someone for a job. To make it more interesting, let’s make it an informal one: perhaps a mutual business contact has introduced you to a promising person at a business event. Add a little pressure by imagining that you have just a few minutes to make up your mind whether you want to ask this person to go through a longer, formal process. How would you structure the interview, and what questions would you ask?

As I envisage the process, one would probably start off with quite general, less specific questions, and then go into more detail where appropriate, where it mattered. So, for instance, one might ask “are you a programmer?”, and if the answer was yes, go into more detail about languages, development environments, length of experience, type of experience, etc. etc. The useful detail in this case would depend entirely on the circumstances of the job. For a graduate to be recruited into a large company, what matters might be aptitude, as it would be likely that full training would be supplied (which you could perhaps see as a kind of technical “enculturation”). On the other hand, for a specialist to join a short-term high-stakes project, even small details might matter a lot, as learning time would probably be minimal.

In reality, most job interviews start, not from a blank sheet, but from the basis of a job advert, and an application form, or CV and covering letter. A job advert may specify requirements; an application form may contain specific questions for which answers are expected; but in the absence of an application form, a CV and covering letter need to try to answer, concisely, some of the key questions that would be asked first in an informal, unprepared job interview. This naturally explains the universal advice that CVs should be designed specifically for each job application. What you say about yourself unprompted not only reveals that information itself, but also says much about what you expect the other person to reckon as significant or interesting.

So, in the job interview, we notice the natural importance of varying specificity in descriptions and questions about abilities and experience.

Recruitment

This then carries over to the wider recruitment process. Potential employers often formulate a list of what is required of prospective employees, in terms of which abilities and experience are essential or desirable, but the detail and specificity of each item will naturally vary. The evidence for a less specific requirement may be assessed at interview with some quick general questions, but a more exacting requirement may want harder evidence such as a qualification, certificate or testimonial from an expert witness.

For example, in a regulated world such as that of pesticides, which I wrote about recently, an employer might well want a prospective employee to have obtained a relevant certificate or qualification, so that they can legally do their job. Even when a certificate is not a legal requirement, some are widely asked for. A prospective sales employee with a driving licence or an office employee with an ECDL might be preferred over one without, and it would be perfectly reasonable for an employer to insist that non-native speakers had obtained a given certified level of proficiency in the principal workplace language. In each case, because the certificate is awarded only to people who have passed a carefully controlled test, the test result serves to answer many quite specific questions about the holder’s abilities, as well as the potential legal fact of their being allowed to perform certain actions in regulated occupations.

Vocational qualifications often detail quite specifically what holders are able to do. This is clearly the intention of the Europass Certificate Supplement (ECS), and has been in the UK, through the system of National Vocational Qualifications, relying on National Occupational Standards. So we could expect that employers with specific learning outcome or competence requirements may specify that candidates should have particular vocational qualifications; but what about less specific requirements? My guess is that those employers who have little regard for vocational qualifications are just those whose requirements are less specific. Time was when many employers looked only for a “good degree”, which in the UK often meant a “2:1”, an upper second class. This was supposed to answer generic questions, as typically the specific subject of the degree was not specified. Now there is a growing emphasis on the detail of the degree transcript or Europass Diploma Supplement (EDS), from which a prospective employer can read at least assessment results, if not yet explicit details of learning outcomes or competences. There is also an increasing trend towards making explicit the intended learning outcomes of courses at all levels, so the course information might be more informative than the transcript or EDS.

Interestingly, the CVs of many technical workers contain highly unspecific lists of programming languages that the individual implicitly claims, stating nothing about the detailed abilities and experience. These lists answer only the most general questions, and serve effectively only to open a conversation about what the person’s actual experience and achievements have been in those programming languages. At least for human languages there is the increasingly used CEFR; there does not appear to be any such widely recognised framework for programming languages. Perhaps, in the case of programming languages, it would be clumsy and ineffective to give answers to more detailed questions, because the individual does not know what those detailed questions would be.

Specificity in frameworks

Frameworks seem to gravitate towards specificity. Given that some people want to know the answers to specific questions, this is quite reasonable; but where does that leave the expression of the less specific requirements? For examples of curriculum frameworks, there is probably nowhere better than the American Achievement Standards Network (ASN). Here, as in many other places, learning outcomes are defined only in one or two levels. The ASN transcribes documents faithfully, then among many other things marks the “indexing status” of the various components. For an arbitrary example, see Earth and Space Science, which is a topic heading and not “indexable”. The heading below it just states what the topic is about, and is also not “indexable”. It is below this that the content becomes “indexable”, starting with some less specific statements about what should be achieved by the end of fourth grade, broken down into the smallest components such as Identify characteristics of soils, minerals, rocks, water, and the atmosphere. It looks like it is just the “indexable” resources that are intended to represent intended learning outcome definitions.

At fourth grade, this is clearly nothing to do with employment, but even so, identifying characteristics of soils etc. is something that students may or may not be able to do, and this is part of the less specifically defined (but still “indexable”) “understanding of the characteristics of earth materials”. It strikes me that the item about identifying characteristics would fit reasonably (in my scheme of the previous post) as a “rankably assessable” concept, and its parent item about understanding might be classified (in my scheme) as unorderly assessable.

How to represent varying specificity

Having pointed out some of the practical examples of varying specificity in definitions of learning outcome or competence, the important issue for work such as InLOC is to provide some way of representing, not only different levels of specificity, but also how they relate to one another.

An approach through considering questions and answers

Any concept that is related to learning outcomes or competence can provide the basis for questions of an individual. Some of these questions have yes/no answers; some invite answers on a scale; some invite a longer, less straightforward reply, or a short reply that invites further questions. A stated concept can be both the answer to a question, and the ground for further questions. So, to go back to some of the above examples, a CV might somewhere state “French” or “Java”. These might be answers to the questions “what languages have you studied?” or “what languages do you use?” They also invite further questions, such as “how well do you know …?”, or “how much have you used …, and in what contexts?”, or “how good are you at …?” – which, if there is an appropriate scale, could be reformulated as “what level is your ability in …?”

Questions could be found corresponding to the ASN examples as well. “Identify characteristics of soils, minerals, rocks, water, and the atmosphere” has the same format that allows “can you …?” or “I can …”. The less specific statement — “By the end of fourth grade, students will develop an understanding of the characteristics of earth materials,” — looks like it corresponds with questions more like “what do you understand about earth materials?”.

As well as “summative” questions, there are related questions that are used in other ways than assessment. “How confident are you of your ability in …?” and “is your ability in … adequate in your current situation?” both come to mind (stimulated by considerations in LUSID).

What I am suggesting here is that we can adapt some of the natural properties of questions and answers to fit definitions of competence and ability. So what properties do I have in mind? Here is a provisional and tentative list.

  • Questions can be classified as inviting one of four kinds of answer:
    1. yes or no;
    2. a value on a (predefined) scale;
    3. examples;
    4. an explanation that is more complex than a simple value.
  • These types of answer probably need little explanation – many examples can readily be imagined.
  • The same form of answer can relate to more than one question, but usually the answer will mean different things. To be fully and clearly understood, an answer should relate to just one question. Using the above example, “French” as the answer to “what languages have you studied?” means something substantially different from “French” as the answer to “what languages are you fluent in?”
  • A more specific question may imply answers to less specific questions. For example, “what programming languages have you used in software development?” implies answers such as “software development” to the question “what competences do you have in ICT?” Many such implied questions and answers can be formulated. What matters in a particular framework is the other answers in that particular framework that can be inferred.
  • An answer to a less specific question may invite further more specific questions.
    1. Conversely to the example just above, if the question “what competences do you have in ICT?” includes the answer “software development”, a good follow-up question might be “what programming languages have you used in software development?” Similar patterns could be seen for any technical specialty. Often, answers like this may be taken from a known list of options. There are only so many languages, both human and computer.
    2. Where an answer is a rankable concept, questions about the level of that ability are invited. For instance, the question “what foreign languages can you speak?”, answered with “French” and “Italian”, invites questions such as “what is your European Language Passport level of ability in spoken interaction in French?”
    3. Where an answer has been analysed into its component parts, questions about each component part make sense. For example, if the answer to “are you able to clear sites for tree planting?”, following the LANTRA Treework NOS (2009) was “yes”, that invites the narrower implied questions set out in that NOS, like “can you select appropriate clearance methods …?” or “do you understand the potential impacts of your work on the environment …?”
    4. Unless the question is fully specific, admitting only the answers yes and no (and often even then), it is nearly always possible to ask further questions, and give further answers. But everyone’s interest in detail stops sooner or later. The place to stop asking more specific questions is when the answer does not significantly affect the outcome you are looking for. And that varies between different interested parties.
  • Questions may be equivalent to other questions in other frameworks. This will come out from the answers given. If the answers given by the same person in the same context are always the same for two questions, they are effectively equivalent. It is genuinely helpful to know this, as it means that one can save time not repeating questions.
  • Answers to some questions may imply answers to other questions in different frameworks, without being equivalent. The answers may contain, or be contained by, their counterparts. This is another way of linking together questions from different frameworks, and saving asking unnecessary extra questions.
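
To draw these properties together, here is a rough, hypothetical sketch in Python of how questions, kinds of answer, and links between questions might be represented. Everything here (names, fields, the example) is my own illustration of the ideas above, not a proposal for the InLOC format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class AnswerKind(Enum):
    YES_NO = 1        # yes or no
    SCALE = 2         # a value on a predefined scale
    EXAMPLES = 3      # one or more examples
    EXPLANATION = 4   # a reply more complex than a simple value

@dataclass
class Question:
    text: str
    kind: AnswerKind
    scale: Optional[List[str]] = None                         # the levels, if kind is SCALE
    invites: List["Question"] = field(default_factory=list)   # more specific follow-up questions
    implies: List["Question"] = field(default_factory=list)   # less specific questions whose answers
                                                              # can be inferred from an answer here

# Illustrative use: a less specific question and a more specific one, linked both ways.
general = Question("what competences do you have in ICT?", AnswerKind.EXAMPLES)
specific = Question("what programming languages have you used in software development?",
                    AnswerKind.EXAMPLES)
general.invites.append(specific)   # the answer "software development" invites the specific question
specific.implies.append(general)   # any answer to the specific question implies "software development"
```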

That covers a view of how to represent varying specificity in questions and answers, but not yet frameworks as they are at present.

Back to frameworks as they are at present

At present, it is not common practice to set out frameworks of competence or ability in terms of questions and answers, but only in terms of the concepts themselves. But, to me, it helps understanding enormously to imagine the frameworks as frameworks of questions, and the learning outcome or competence concepts as potential answers. In practice, all you see in the frameworks is the answers to the implied questions.

Perhaps this has come about through a natural process of doing away with unnecessary detail. The overall question in occupational competence frameworks is, “are you competent to do this job?”, so it can go unstated, with the title of the job standing in for the question. The rest of the questions in the framework are just the detailed questions about the component parts of that competence (see Carroll and Boutall’s ideas of Functional Analysis in their Guide to Developing National Occupational Standards). The formulation with action verbs helps greatly in this approach. To take NOS examples from way back in the 3rd post in this series, the units themselves and the individual performance criteria share a similar structure. Less specifically, “set out and establish crops” relates both to the question “are you able to set out and establish crops?” and the competence claim “I am able to set out and establish crops”. More specifically, “place equipment and materials in the correct location ready for use” can be prefixed with “are you able to …” for a question, or “I am able to …” as a claim. Where all the questions take a form that invites answers yes or no, one really does not need to represent the questions at all.
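
Because the statements are formulated with action verbs, the corresponding question and claim can be generated mechanically, as this small illustrative snippet shows (the wording is taken from the examples above; the functions are just my own shorthand).

```python
def as_question(statement: str) -> str:
    """Turn an action-verb competence statement into a yes/no question."""
    return f"Are you able to {statement}?"

def as_claim(statement: str) -> str:
    """Turn the same statement into a first-person claim."""
    return f"I am able to {statement}."

print(as_question("set out and establish crops"))
# Are you able to set out and establish crops?
print(as_claim("place equipment and materials in the correct location ready for use"))
# I am able to place equipment and materials in the correct location ready for use.
```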

With a less uniform structure, one would need mentally to remove all the questions to get a recognisable framework; or conversely, to understand a framework in terms of questions, one needs to add in those implied questions. This is not as easy, and perhaps that is why I have been drawn to elaborating all those structuring relationships between concepts.

We are left in a place that is very close to where we were before in the previous post. At simplest, we have the individual learning outcome or competence definitions (which are the answers) and the frameworks, which show how the answers connect up, without explicitly mentioning the questions themselves. The relations between the concepts can be factored out, and presented either together in the framework, or separately together with the concepts that are related by those relations.

If the relationships are simply “broader” and “narrower”, things are pretty straightforward. But if we admit less specific concepts and questions, because the questions are not explicitly represented, the structure needs a more elaborate set of relationships. In particular, we have to make particular provision for rankable concepts and levels. I’ll leave detailing the structures we are left with for later.

Before that, I’d like to help towards a better grasp of the ideas through the analogy with tourism.

Competence and regulation

Today I had a most helpful phone call with a kind lady from the Health and Safety Executive (HSE), and it has illuminated the area of the competence world, related to regulation, that I was very unclear about, so I thought I would try to share my increased understanding.

The EU often comes up with directives intended for the good of European citizens in general. In this case, as an example, we are looking at Directive 2009/128/EC of 2009-10-21 “establishing a framework for Community action to achieve the sustainable use of pesticides”. It is good that this one looks uncontroversial in principle – we don’t want people to use pesticides in an unregulated way, potentially polluting common air, water or ground (potentially without our even being aware of it), so I guess most people would support the principle of regulation here.

If you work your way down to Article 5 of this directive, you see:

Article 5
Training
1. Member States shall ensure that all professional users, distributors and advisors have access to appropriate training by bodies designated by the competent authorities. This shall consist of both initial and additional training to acquire and update knowledge as appropriate.

The training shall be designed to ensure that such users, distributors and advisors acquire sufficient knowledge regarding the subjects listed in Annex I, taking account of their different roles and responsibilities.

2. By 14 December 2013, Member States shall establish certification systems and designate the competent authorities responsible for their implementation. These certificates shall, as a minimum, provide evidence of sufficient knowledge of the subjects listed in Annex I acquired by professional users, distributors and advisors either by undergoing training or by other means.

(I will say nothing at all about what “competent” means as in “competent authority”. Maybe it is quite different.)

It goes on. So what is this Annex I? That is really significant for the purposes of knowledge, skill and competence. It is perhaps worth repeating it in full, just to get the full flavour of one example of the language and the way these things are set out.

Training subjects referred to in Article 5

  1. All relevant legislation regarding pesticides and their use.
  2. The existence and risks of illegal (counterfeit) plant protection products, and the methods to identify such products.
  3. The hazards and risks associated with pesticides, and how to identify and control them, in particular:
    1. risks to humans (operators, residents, bystanders, people entering treated areas and those handling or eating treated items) and how factors such as smoking exacerbate these risks;
    2. symptoms of pesticide poisoning and first aid measures;
    3. risks to non-target plants, beneficial insects, wildlife, biodiversity and the environment in general.
  4. Notions on integrated pest management strategies and techniques, integrated crop management strategies and techniques, organic farming principles, biological pest control methods, information on the general principles and crop or sector-specific guidelines for integrated pest management.
  5. Initiation to comparative assessment at user level to help professional users make the most appropriate choices on pesticides with the least side effects on human health, non-target organisms and the environment among all authorised products for a given pest problem, in a given situation.
  6. Measures to minimise risks to humans, non-target organisms and the environment: safe working practices for storing, handling and mixing pesticides, and disposing of empty packaging, other contaminated materials and surplus pesticides (including tank mixes), whether in concentrate or dilute form; recommended way to control operator exposure (personal protection equipment).
  7. Risk-based approaches which take into account the local water extraction variables such as climate, soil and crop types, and relief.
  8. Procedures for preparing pesticide application equipment for work, including its calibration, and for its operation with minimum risks to the user, other humans, non-target animal and plant species, biodiversity and the environment, including water resources.
  9. Use of pesticide application equipment and its maintenance, and specific spraying techniques (e.g. low-volume spraying and low-drift nozzles), as well as the objectives of the technical check of sprayers in use and ways to improve spray quality. Specific risks linked to use of handheld pesticide application equipment or knapsack sprayers and the relevant risk management measures.
  10. Emergency action to protect human health, the environment including water resources in case of accidental spillage and contamination and extreme weather events that would result in pesticide leaching risks.
  11. Special care in protection areas established under Articles 6 and 7 of Directive 2000/60/EC.
  12. Health monitoring and access facilities to report on any incidents or suspected incidents.
  13. Record keeping of any use of pesticides, in accordance with the relevant legislation.

What we have here is a kind of syllabus, but in some ways quite a vague syllabus. It does not make it expressly clear what people have to be able to do as a result of training with this syllabus, as is now good practice for many learning outcomes, particularly vocational ones. So it falls to the Member States to interpret that, with the result that different Member States may do things differently, the resultant competences may not be the same, and there may well be considerable differences across Europe in how safe people actually are from the dangers the regulation was brought in to counter.

So the European directive works its way down through the system to national governments, and out comes something like The Plant Protection Products (Basic Conditions) Regulations 1997. In this case, though the area is similar, this UK legislation was obviously created long before the above European directive. Here we read:

1. It shall be the duty of all employers to ensure that persons in their employment who may be required during the course of their employment to use prescribed plant protection products are provided with such instruction, training and guidance as is necessary to enable those persons to comply with any requirements provided in and under these Regulations and the Plant Protection Products Regulations.

and later

3. No person in the course of a business or employment shall use a prescribed plant protection product, or give an instruction to others on the use of a prescribed plant protection product, unless that person—
(a) has received adequate instruction, training and guidance in the safe, efficient and humane use of prescribed plant protection products, and
(b) is competent for the duties which that person is called upon to perform.

and yet later

7.—(1) No person in the course of a commercial service shall use a prescribed plant protection product approved for agricultural use unless that person—
(a) has obtained a certificate of competence recognised by the Ministers; or
(b) uses that plant protection product under the direct and personal supervision of a person who holds such a certificate; or […]

The UK Regulations do not themselves define in detail what counts as “adequate instruction, training and guidance”, nor indeed “competent” and “competence”. This is where the HSE comes in, by approving as “adequate” the proposals of awarding bodies aimed at the certification of this training etc.

Do we get the general idea here? I hope so. But wait a minute …

One cannot help remarking on the differences between the language of the EU directive and the language recommended, say, in the Europass Certificate Supplement, where it is clear that each skill or competence item should start with an action verb. Is it therefore a case of lack of effective communication between DGs? It looks likely, but I have no evidence.

Nor do I have an opinion about the merits of leaving definitions open (perhaps deliberately so) to give room for courts to establish case law and precedent.

But it would seem to me a good idea, when formulating this kind of regulation, at the same time to put together a well-structured framework of knowledge, skill and competence to define the required abilities of the people concerned. Not defining them clearly just means that the cost of defining them is multiplied through being borne by every Member State, resulting not only in divergence but in considerable administrative work that one could say was unnecessary. Multiply this across all the relevant European regulations. OK, admitted, I have little knowledge of the workings of such bureaucracy. Maybe there is a reason, but at present I am an unsatisfied citizen.

And this is one area where InLOC outputs could potentially play a role. It would be principally at a European level, though national governments could do something similar for any national regulations. Some central European body could define the required knowledge and ability, for each European regulation across all areas of public life, according to clear and sensible standard approaches that relate directly to learning outcomes, competence, training and assessment. The requirements could be published in InLOC format in all relevant languages. (That’s what InLOC is set up to facilitate). How to train and assess would still be up to training, assessment and awarding bodies, and there would still have to be structures and practices (probably with considerable national variation) within which this is controlled, and operating licences managed. But at least several stages would be removed from the process, which could be much quicker. The Commission could be seen to be more in touch with the grass roots. Procedures would look more transparent and fairer. Maybe, even, European regulations would be held in higher repute. That would be a nice outcome.

Badges for singers

We had a badges session last week at the CETIS conference (here are some summary slides and my slides on some requirements). I’d like to reflect on that, not directly (as Phil has done) but instead by looking forward on how a badge system for leisure activity might be put together.

In the discussion part of our conference session, we looked at two areas of potential application of badges. First, for formative assessment in high-stakes fields (such as medicine); second, for communities of practice such as the ones CETIS facilitates, still in the general area of work. What we didn’t look at was badges for leisure or recreation. The Mozilla Open Badges working paper makes no distinction between badges for skills that are explicitly about work and for skills that are not obviously about work, so looking at leisure applications complements the conference discussion nicely, while providing an example to think through many of the main issues with badges.

The worked example that follows builds on my personal knowledge of one hobby area, but is meant to be illustrative of many. Please think of your own favourite leisure activities.

Motivation

On the day I returned from the conference, which was the very day of the badges session, it happened to be a rehearsal evening for the small choir I currently sing with. So what could be more natural for me to think about than a badge system for singing? The sense of need for such a system has continued to grow on me. Many people can sing well enough to participate in a choir. While learners of classical instruments have “grade” examinations and certificates indicating the stages of mastery of an instrument, there is no commonly used equivalent for choral singing. Singing is quite a diverse activity, with many genres as well as many levels of ability. The process of established groups and singers getting together tends to be slow and subject to chance, but worse, forming a new group is quite difficult, unless there is some existing larger body (school, college, etc.) that all the singers belong to.

Badges for singers might possibly help in two different ways. First, badges can mark attainment. A system of attainment badges could help singers find groups and other singers of the right standard for them to enjoy singing with. It may be worthy, but not terribly exciting, to sing with a group at a lower level, and one can feel out of one’s depth or embarrassed singing with people a lot more accomplished. So, when a group looks for a new member, it could specify levels of any particular skills that were expected, as well as the type of music sung. This wouldn’t necessarily remove the need for an audition, but it would help the right kind of singer to consider the choir. Compared with current approaches, including singers hearing a choir performing and then asking about vacancies, or learning of openings through friends, a badge system could well speed things up. But perhaps the greatest benefit would be to singers trying to start new groups or choirs, where there is no existing group to hear about or to go and listen to. Here, a badge system could make all the difference between it being practical to get a new group together, and failing.

Second, the badges could provide a structured set of goals that would help motivate singers to broaden and improve their abilities. This idea of motivating steps in a pathway is a strong theme in the Open Badges documentation. There must be singers at several levels who would enjoy and gain satisfaction from moving on, up a level maybe. In conjunction with groups setting out their badge requirements, badges in the various aspects of choral singing would at very least provide a framework within which people could more clearly see what they needed to gain experience of and practice, in order to join the kind of group they really want.

By the way, I recognise that not all singing groups are called “choirs”. Barbershop groups tend to be “choruses”, while very small groups are more often called “ensembles”; but for simplicity here I use the term “choir” to refer to any singing group.

Teachers / coaches

Structured goals lead on to the next area. If there were a clear set of badged achievements to aim for, then the agenda for coaches, tutors, et al. would be more transparent. This might not produce a great increase in demand for paid tuition (and therefore “economic” activity) but might well be helpful for amateur coaching. Whichever way, a clear set of smaller, more limited goals on tried and tested pathways would provide a time-honoured approach to achieving greater goals, with whatever amount of help from others is needed.

Badge content

I’ve never been in charge of a choir for more than a few songs, but I do have enough experience to have a reasonable guess at what choirmasters and other singers want from people who want to join them. First, there are the core singing skills, and these might be broken down for example like this:

  • vocal range and volume (most easily classified as soprano / alto / tenor / bass)
  • clarity and diction
  • voice quality and expressiveness (easy to hear in others, but hard to measure)
  • ability to sing printed music at first sight (“sight-singing”)
  • attentiveness to and blend with other singers
  • ability to sing a part by oneself
  • speed at learning one’s part if necessary
  • responsiveness to direction during rehearsal and performance
  • specialist skills

It wouldn’t be too difficult to design a set of badges that expressed something like these abilities, but this is not the time to do that job, as such a structure needs to reflect a reasonable consensus involving key stakeholders.
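
Purely as an illustration of what such a set might look like (not a proposal, and certainly not the consensus just mentioned), here is a hypothetical sketch of badge definitions for a few of the skills listed above, each with a small number of ordered levels. The level wording is invented for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BadgeDefinition:
    skill: str          # the ability the badge attests
    levels: List[str]   # ordered levels, lowest first (illustrative wording only)

singing_badges = [
    BadgeDefinition("sight-singing",
                    ["follows own part with support",
                     "holds own part within a section",
                     "sings an unfamiliar part accurately at first sight"]),
    BadgeDefinition("ability to sing a part by oneself",
                    ["with the section", "one to a part in a small ensemble", "solo line"]),
    BadgeDefinition("responsiveness to direction",
                    ["in rehearsal", "in rehearsal and performance"]),
]
```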

Then there are other personal attributes, not directly related to singing, that are desirable in choir members, e.g.:

  • reliability of attendance at rehearsals and particularly performances
  • helpfulness to other choir members
  • diligence in preparation

Badges for these could look a little childish, but as a badge system for singing would be entirely voluntary, perhaps there would be no harm in defining them, for the benefit of junior choirs at least.

Does this cover everything? It may or may not cover most of what can be improved — those things that can be better or not so good — but there is one other area that is vital for the mutual satisfaction of choir and singer. Singers have tastes in music; choirs have repertoires or styles they focus on. To get an effective matching system, style and taste would have to be represented.

Assessing and awarding badges

So who would assess and award these badges? The examinations for musical instrument playing (indeed including solo singing) are managed by bodies such as the ABRSM. These exams have a very long history, and are often recognised, e.g. for entry to musical education institutions. But choral singers usually want to enjoy themselves, not to gain qualifications so they can be professional musicians. They are unlikely to want to pay for exams for what is just a hobby. That leaves three obvious options: choirmasters, fellow singers, and oneself.

In any case, the ABRSM and similar bodies already have their own certificates and records. A badge system for them would probably be just a new presentation of what currently exists. The really interesting challenge is to consider how badges can work effectively without an official regulating body.

On deeper consideration, there really isn’t much to choose between choirmasters and fellow singers as the people who award choral singing badges. There is nothing to stop any singer being a choirmaster, anyway. There is not much incentive for people to misrepresent their choral singing skills: as noted before, it’s not much fun being in a choir of the wrong standard, nor singing music one doesn’t like much. So, effectively, a badge system would have the job of making personal and choir standards clear.

There is an analogy here with language skills, which are closely related in any case. The Europass Language Passport is designed to be self-assessed, with people judging their own ability against a set of criteria that were originally defined by the Council of Europe. The levels — A1 to C2 — all have reasonably clear descriptors, and one sees people describing their language skills using these level labels increasingly often.

This is all very well if people can do this self-assessment accurately. The difficulty is that some of the vital aspects of choral singing are quite hard to assess by oneself. Listening critically to one’s own voice is not particularly easy when singing in a group. It might be easier if recording were more common, but again, most people are unfamiliar with the sound of their own voice, and may be uncomfortable listening to it.

On the other hand, we don’t want people, in an attempt to be “kind”, to award each other badges over-generously. We could hope that dividing up the skills into enough separate badges would mean that there would be some badges for everyone, and no one need be embarrassed by being “below average” in some ways. Everyone in a choir can have a choir membership badge, which says something about their acceptance and performance within the choir as a whole. Then perhaps all other choir members can vote anonymously about the levels which others have reached. Some algorithm could be agreed for when to award badges based on peer votes.
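
As a sketch of the kind of rule that might be agreed, here is one entirely hypothetical peer-vote threshold, written in Python: a level badge is awarded when enough fellow choir members, voting anonymously, place the singer at that level or above. The thresholds are placeholders, not a recommendation.

```python
def award_level(votes: list[int], proposed_level: int,
                min_votes: int = 5, min_fraction: float = 0.6) -> bool:
    """Decide whether to award a badge for proposed_level.

    votes: anonymous peer judgements of the singer's level (integers).
    Rule: at least min_votes must have been cast, and at least min_fraction
    of them must place the singer at proposed_level or higher.
    """
    if len(votes) < min_votes:
        return False
    supporting = sum(1 for v in votes if v >= proposed_level)
    return supporting / len(votes) >= min_fraction

# Example: 7 anonymous votes, 5 of them at level 3 or above, so the level 3 badge is awarded.
print(award_level([3, 4, 3, 2, 3, 5, 2], proposed_level=3))  # True
```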

The next obvious thing would be to give badges to the choir as a whole. Choirs have reputations, and saying that one has sung in a particular choir may mean something. This could be done in several ways, all involving some external input. Individual singers (and listeners) could compare the qualities of different choirs in similar genres. Choral competitions are another obvious source of expert judgement.

Setting up a badge system

The more detailed questions come to a head in the setting up of an actual badge system. The problem would be not only the ICT architecture (of the kind for which Mozilla Open Badges is a working prototype) but also the organisational arrangements for creating the systems around badges for singers. Now, perhaps, we can see more clearly that the ICT side is relatively easy. This is something that we are very familiar with in CETIS. The technology is hardly ever the limiting factor — it is the human systems.

So here are some questions or issues (among possibly many more) that would need to be solved, not necessarily in this order.

  • Who would take on responsibility for this project as a whole? Setting up a badge system is naturally a project that needs to be managed.
  • Who hosts the information?
  • How is the decision made about what each badge will be, and how it is designed?
  • How would singers and choirs be motivated to sign up in the first place?
  • If a rule is set for how badges are to be awarded, how is this enforced, or at least checked?
  • Is Mozilla Open Badges sufficient technical infrastructure, and if not, who decides what is?
  • Could this system be set up on top of other existing systems? (Which, or what kind?)

Please comment with more issues that need to be solved. I’ll add them if they fit!

Business case

And how does such a system run, financially? The beneficiaries would primarily be choirs and singers, and perhaps indirectly people who enjoy listening to live choral music. Finding people or organisations whose financial interests this would serve seems difficult. So it would probably be useful for the system to run with minimal resources.

One option might be to offer this as a service to one or more membership organisations that collect fees from members, or alternatively, as an added service that has synergy with an existing paid-for service. However, the obvious approach of limiting the service to paid members would work against its viability in terms of numbers. In this case, the service would in effect be advertising promoting the organisation. Following the advertising theme, it might be seen as reasonable for users who do not already pay membership to receive adverts from sellers of music or organisers of musical events, which could provide an adequate income stream. The nice thing is that the kind of information that makes sense for individuals to enter, to improve the effectiveness of the system, could well be used to target adverts more effectively.

Would this be enough to make a business case? I hope so, as I would like to use this system!

Reflection

I hope that this example illustrates some of the many practical and possibly thorny issues that need to be dealt with before a real working badge system can be implemented, and these issues are not primarily technical. What would be really useful would be to have a working technical infrastructure available so that at least some of the technical issues are dealt with in advance. As I wrote in comments on a previous post, I’m not convinced that Mozilla Open Badges does the job properly, but at least it is a signpost in the right direction.

ICT Skills

Several of us in CETIS have been to the CEN Workshop Learning Technologies (WS-LT), but as far as I know none of us has yet been to the closely related Workshop on ICT Skills. Their main claim to fame is the European e-Competence Framework (e-CF), a simpler alternative to SFIA (which was developed by the BCS and partners). It was interesting on several counts, and raises some questions we could all give an opinion on.

The meeting was on 2011-12-12 at the CEN meeting rooms in Brussels. I was there on two counts: first as a CETIS and BSI member of CEN WS-LT and TC 353, and second as the team leader of InLOC, which has the e-CF mentioned in its terms of reference. There was a good attendance of 35 people, just a few of whom I had met before. Some members are ICT employers, but more are either self-employed or from various organisations with an interest in ICT skills, and in particular, CEPIS (not to be confused with CETIS!) of which the BCS is a member. A surprising number of Workshop members are Irish, including the chair, Dudley Dolan.

The WS-LT and TC353 think a closer relationship with the WS ICT Skills would be of mutual benefit, and I personally agree. ICT skills are a vital component of just about any HE skills programme, essential as they are for the great majority of graduate jobs. As well as the e-CF, which is to do with competences used in ICT professions, the WS ICT Skills have recently started a project to agree a framework of key skills for ICT users. So for the WS-LT there is an easy starting point for which we can offer to apply various generic approaches to modelling and interoperability. The strengths of the two workshops are complementary: the WS-LT is strong in the breadth of generalities about metadata, theory, interoperability; the WS ICT Skills is strong in depth, about practice in the field of ICT.

The meeting revealed that the two workshops share several concerns. Both need to manage their CWAs (CEN Workshop Agreements), withdrawing outdated ones; both are concerned about the length and occasional opaqueness of the procedure to fund standardisation expert team work. Both are concerned with the availability and findability of their CWAs. André Richier is interested in both Workshops, though more involved in the WS ICT Skills. Both are concerned, in their own different ways, with the move through education and into employment. Both are concerned with creating CWAs and ENs (European “Norm” Standards), though the WS-LT is further ahead on this front, having prompted the formation of CEN TC353 a few years ago, to deal with the EN business. The WS ICT Skills doesn’t have a TC, and it is discussing whether to attempt ENs without a TC, or to start their own TC, or to make use of the existing TC353.

On the other hand, the WS ICT Skills seems to be ahead in terms of membership involvement. They charge money for voting membership, and draw in big business interest, as well as small. Would the WS-LT (counterintuitively perhaps) draw in a larger membership if it charged fees?

I was lucky to have a chance (in a very full agenda) to introduce the WS-LT and the InLOC project. I mentioned some of the points above, and pointed out how relevant InLOC is to ICT skills, with many links including shared experts. While understanding is still being built up between the two workshops, it was worth stressing that nothing in InLOC is sector-specific; that we will not be developing any learning outcome or competence content; and that, far from being in any way competitive, we are perfectly set up for collaboration with the WS ICT Skills and the e-CF.

Work on e-CF version 3 is expected to be approved very soon, and there is a great opportunity to try to ensure that the InLOC structures are suited to representing the e-CF, and that any useful insights from InLOC are worked into the e-CF. The e-CF work is ably led by Jutta Breyer, who runs her own consultancy. Another project of great interest to InLOC is their work on “end user” ICT skills (the e-CF deals with professional competences), led by Neil Farren of the ECDL Foundation. The term “end user” caused some comment and will probably not feature in the final outputs of this project! Their project is a mere month or so ahead of InLOC in time. In particular, they envisage developing some kind of “framework shell”, and to me it is vital that this coordinates well with the InLOC outputs, relating to them as a specialisation to a generalisation.

Another interesting piece of work is looking at ICT job profiles. The question of how a job profile relates to competence definitions is something that needs clarifying and documenting within the InLOC guidelines, and again, the closer we can coordinate this, the better for both of us.

Finally, should there be an EN for the e-CF? It is a tricky question. Sector Skills Councils in the UK find it hard enough to write National Occupational Standards for the UK – would it be possible to reach agreement across Europe? What would it mean for SFIA? If SFIA saw it as a threat, it would be likely to weigh in strongly against such a move. Instead, would it be possible to persuade SFIA to accept a suitably adapted e-CF as a kind of SFIA “Lite”? Some of us believe that would help, rather than conflict with, SFIA itself. Or could there be an EN, not rigidly standardising the descriptions of “e-Competences”, but rather giving an indication of how such frameworks should be expressed, with guidelines on ICT skills and competences in particular?

Here, above all, there is room for detailed discussion between the Workshops, and between InLOC and the ongoing ICT Skills Workshop teams, to achieve something that is really credible, coherent and useful to interested stakeholders.

Badges – another take

Badges can be seen as recognisable tokens of status or achievement. But tokens don’t work in a vacuum: they depend on other things to make them work. Perhaps looking at these may help us understand how they might be used, both for portfolios and elsewhere.

Rowin wrote a useful post a few weeks ago, and the topic has retained a buzz. Taking this forward, I’d like to discuss specifically the aspects of badges — and indeed any other certificate — relevant both to portfolio tools and to competence definitions. Because the focus here is on badges, I’ll use the term “badge” occasionally to include what is normally thought of as a certificate.

A badge, by being worn, expresses a claim to something. Some real badges may express the proposition that the wearer is a member of some organisation or club. Anyone can wear an “old school tie”, but how does one judge the truth of the claim to belong to a particular alumni group? Much upset can be caused by the misleading wearing of medals, in the same way as with badges.

Badges could often do with a clarification of what is being claimed. (That would be a “better than reality” feature.) Is my wearing a medal a statement that I have been awarded it, or is it just in honour of the dead relative who earned it? Did I earn this badge on my own, was I helped towards it, or am I just wearing it because it looks “cool”? An electronic badge, e.g. on a profile or e-portfolio, can easily link to an explicit claim page including a statement of who was awarded this badge, and when, beyond information about what the badge is awarded for. These days, a physical badge could have a QR code so that people can scan it and be taken to the same claim page.

If the claim is, for example, simply to “be” a particular way, or to adhere to some opinion, or perhaps to support some team (in each case where the natural evidence is just what the wearer says), then probably no more is needed. But most badges, at least those worn with pride, represent something more than that the wearer self-certifies something. Usually, they represent something like a status awarded by some other authority than the wearer, and to be worth wearing, they show something that the wearer has, but might not have had, which is of some significance to the intended observers.

If a badge represents a valued status, then clearly badges may be worn misleadingly. To counter that, there will need to be some system of verification, through which an observer can check on the validity of the implied claim to that status. Fortunately, this is much easier to arrange with an electronic badge than a physical one. Physical badges really need some kind of regulatory social system around them, often largely informal, that deters people from wearing misleading badges. If there is no such social system, we are less in the territory of badges, and more of certificates, where the issues are relatively well known.

When do you wear physical badges? When I do, it is usually a conference, visitor or staff badge. Smart badges can be “swiped” in some way, and that could, for instance, lead to a web page on the authority’s web site with a photo of the person. That would be a pretty good quick check that would be difficult to fake effectively. “Swiping” can these days be magnetic, RFID, or QR code.

My suggestion for electronic badges is that the token badge links directly to a claim page. The claim page ideally holds the relevant information in a form that is both machine processable and human readable. But, as a portfolio is typically under the control of the individual, portfolio pages by themselves cannot easily provide any official confirmation. The way to do this within a user-controlled portfolio would be with some kind of electronic signature. But probably much more effective in the long term is for the portfolio claim page to refer to other information held by the awarding authority. This page can either be public or restricted, and could hold varying amounts of information about the person as well as the badge claim.

Here are some first ideas of information that could relate to a badge (or indeed any certificate):

  • what is claimed (competence, membership, permission, values, etc.);
  • identity of the person claiming;
  • what authority is responsible for validating the claim and awarding;
  • when and on what grounds the award was made;
  • how and when any assessment process was done;
  • assurance that the qualifying performance was not by someone else.

But that’s only a quick attempt. A much slower attempt would be helpful.
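As a thought experiment, here is a minimal sketch of how the claim-page information listed above might be held in machine-processable form. All field names and URIs are invented for illustration; nothing here follows any existing badge specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BadgeClaim:
    """Illustrative record of what a badge claim page might assert."""
    claim_definition_uri: str                  # what is claimed (competence, membership, etc.)
    claimant_uri: str                          # identity of the person claiming
    awarding_authority_uri: str                # who validated the claim and made the award
    awarded_on: str                            # when the award was made (ISO 8601 date)
    grounds: str                               # on what grounds the award was made
    assessment_method: Optional[str] = None    # how any assessment was done
    assessment_date: Optional[str] = None      # when any assessment was done
    identity_assurance: Optional[str] = None   # assurance the performance was the claimant's own

claim = BadgeClaim(
    claim_definition_uri="http://example.org/competences/choral-sightreading",
    claimant_uri="http://example.org/people/jane",
    awarding_authority_uri="http://example.org/choirs/anytown-choral-society",
    awarded_on="2012-06-01",
    grounds="audition before the musical director",
)
```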

It’s important to be able to separate out these components. The “what is claimed” part is very closely related to learning outcome and competence definitions, the subject of the InLOC work. All the assessment and validation information is separable, and the information models (along with any interoperability specifications) should be created separately.

Competence and values can be defined independently of any organisation — they attach just to an individual. This is different from membership, permission, and the like, that are essentially tied to systems and organisations, and not as such transferable.

Representing level relationships

(18th in my logic of competence series)

Having prepared the ground, I’m now going to address in more detail how levels of competence can best be represented, and the implications for the rest of representing competence structures. Levels can be represented similarly to other competence concept definitions, but they need different relationships.

I’ve written about how giving levels to competence reflects common usage, at least for competence concepts that are not entirely assessable, and that the labels commonly used for levels are not unique identifiers; about how defining levels of assessment fits into a competence structure; and lately about how defining levels is one approach to raising the assessability of competence concepts.

Later: shortly after first writing this, I put together the ideas on levels more coherently in a paper and a presentation for the COME-HR conference, Brussels.

Some new terms

Now, to take further this idea of raising assessability of concepts, it would be useful to define some new terms to do with assessability. It would be really good to know if anyone else has thought along these lines, and how their thoughts compare.

First, may we define a binarily assessable concept, or “binary” for short, as a concept typically formulated as something that a person either has or does not have, and where there is substantial agreement between assessors over whether any particular person actually has or does not have it. My understanding is that the majority of concepts used in NOSs are intended to be of this type.

Second, may we define a rankably assessable concept, or “rankable” for short, as a concept typically formulated as something a person may have to varying degrees, and where there is substantial agreement between assessors over whether two people have a similar amount of it, or who has more. IQ might be a rather old-fashioned and out-of-favour example of this. Speed and accuracy of performing given tasks would be another very common example (and widely used in TV shows), though that would be more applicable to simpler skills than occupational competence. Sports have many scales of this kind. On the occupational front, a rankable might be a concept where “better” means “more additional abilities added on”, while still remaining the same basic concept. Many complex tasks have a competence scale, where people start off knowing about it and being able to follow someone doing it, then perform the tasks in safe environments under supervision, working towards independent ability and mastery. In effect, what is happening here is that additional abilities are being added to the core of necessary understanding.

Last, may we define an unorderly assessable concept, or “unordered” for short, as any concept that is not binary or rankable, but still assessable. For it to remain assessable despite possible disagreement about who is better, there at least has to be substantial agreement between assessors about the evidence which would be relevant to an assessment of the ability of a person in this area. In these cases, assessors would tend to agree about each others’ judgements, though they might not come up with the same points. Multi-faceted abilities would be good examples: take management competence. I don’t think there is just one single accepted scale of managerial ability, as different managers are better or worse at different aspects of management. Communication skills (with no detailed definition of what is meant) might be another good example. Any vague competence-related concept that is reasonably meaningful and coherent might fall into this category. But it would probably not include concepts such as “nice person”, where people would disagree even about what evidence would count in its support.

Defining level relationships

If you allow these new terms, definitions of level relationships can be more clearly expressed. The clearest and most obvious scenario is that levels can be defined as binaries related to rankables. Using an example from my previous post, success as a pop songwriter based on song sales/downloads is rankable, and we could define levels of success in that in terms of particular sales, hits in the top ten, etc. You could name the levels as you liked — for instance, “beginner songwriter”, “one hit songwriter”, “established songwriter”, “successful songwriter”, “top flight songwriter”. You would write the criteria for each level, and those criteria would be binary, allowing you to judge clearly which category would be attributed to a given songwriter. Of course, to recall, the inner logic of levels is that higher levels encompass lower levels. We could give the number 1 to beginner, up to number 5 for top flight.

To start formalising this, we would need an identifier for the “pop songwriter” ability, and then to create identifiers for each defined level. Part of a pop songwriter competence framework could be the definitions, along with their identifiers, and then a representation of the level relationships. Each level relationship, as defined in the framework, would have the unlevelled ability identifier, the level identifier, the level number and the level label.

If we were to make an information model of a level definition/relationship as an independent entity, this would mean that it would include:

  • the fact that this is a level relationship;
  • the levelled, binary concept ID;
  • the framework ID;
  • the level number;
  • the unlevelled, rankable concept ID;
  • the level label.

If this is represented within a framework, the link to the containing framework is implicit, so might not show clearly. But the need for this should be clear if a level structure is represented separately.
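To make this concrete, here is a minimal sketch of a level definition/relationship treated as an independent entity, following the list above. The class and field names are my own illustrations, not the InLOC vocabulary.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LevelRelationship:
    """One level definition/relationship, with the parts listed above."""
    levelled_concept_id: str                       # the levelled, binarily assessable concept
    level_number: int                              # essential for the logic of ordering
    relationship_type: str = "hasLevelDefinition"  # marks this as a level relationship
    framework_id: Optional[str] = None             # implicit when nested inside the framework
    unlevelled_concept_id: Optional[str] = None    # the rankable concept; absent in generic frameworks
    level_label: Optional[str] = None              # optional; the number serves as label if missing
```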

As well as defining levels for a particular area like songwriting, it is possible similarly (as many actual level frameworks do) to define a set of generic levels that can apply to a range of rankable, or even unordered, concepts. This seems to me to be a good way of understanding what frameworks like the EQF do. Because there is no specific unlevelled concept in such a framework, we have to make inclusion of the unlevelled concept within the information model optional. The other thing that is optional is the level label. Many levels have labels as well as numbers, but not all. The number, however, though some level frameworks leave it out, is essential if the logic of ordering is to be present.

Level attribution

A conviction that has been growing in me is that relationships for level attribution and level definition need to be treated separately. In this context, the word “attribution” suggests that a level is an attribute, either of a competence concept or of a person. It feels quite close to other sorts of categorisation.

Representing the attribution of levels is pretty straightforward. Whether levels are educational, professional, or developmental, they can be attributed to competence concepts, to individual claims and to requirements. Such an attribution can be expressed using the identifier of the competence concept, a relationship meaning “… is attributed the level …”, and an identifier for the level.

If we say that a certain well-defined and binarily assessable ability is at, say, EQF competence level 3, it is an aid to cross-referencing; an aid to locating that ability in comparison with other abilities that may be at the same or different levels.
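A level attribution itself needs very little structure. A minimal sketch, with an invented predicate name and example URIs, might be no more than a simple subject, predicate, object statement:

```python
# "<competence concept> is attributed the level <level>"
# The predicate name and both URIs are illustrative, not a defined vocabulary.
attribution = (
    "http://example.org/abilities/configure-a-web-server",  # competence concept identifier
    "isAttributedLevel",                                     # the attribution relationship
    "http://example.eu/EQF/competence/level-3",              # identifier of the level
)
```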

A level can be attributed to:

  • a separate competence concept definition;
  • an ability item claimed by an individual;
  • an ability item required in a job specification;
  • a separate intended learning outcome for a course or course unit;
  • a whole course unit;
  • a whole qualification, though care needs to be exercised, as many qualifications have components at mixed levels.

An assessment can result in the assessor or awarding body attributing an ability level to an individual in a particular area. This means that, in their judgement, that individual’s ability in the area is well described by the level descriptors.

Combining generic levels with areas of skill or competence

Let’s look more closely at combining generic levels with general areas of skill or competence, in such a way that the combination is more assessable. A good example of this is associated with the Europass Language Passport (ELP) that I mentioned in post 4. The Council of Europe’s “Common European Framework of Reference for Languages” (CEFRL), embodied in the ELP, makes little sense without the addition of specific languages in which proficiency is assessed. Thus, the CEFRL’s “common reference levels” are not binarily assessable, just as “able to speak French” is not. The reference levels are designed to be independent of any particular language.

Thus, to represent a claim or a requirement for language proficiency, one needs both a language identifier and an identifier for the level. It would be very easy in practice to construct a URI identifier for each combination. The exact method of construction would need to be widely agreed, but as an example, we could define a URI for the CEFRL — e.g. http://example.eu/CEFRL/ — and then binary concept URIs expressing levels could be constructed something like this:

http://example.eu/CEFRL/language/mode/level#number

where “language” is replaced by the appropriate IETF language tag; “mode” is replaced by one of “listening”, “reading”, “spoken_interaction”, “spoken_production” or “writing” (or agreed equivalents, possibly in other languages); “level” is replaced by one of “basic_user”, “independent_user”, “proficient_user”, “A1”, “A2”, “B1”, “B2”, “C1”, “C2”; and “number” is replaced by, say, 10, 20, 30, 40, 50 or 60 corresponding to A1 through to C2. (These numbers are not part of the CEFRL, but are needed for the formalisation proposed here.) A web service would be arranged where putting the URI into a browser (making an http request) would return a page with a description of the level and the language, plus other appropriate machine readable metadata, including links to components that are not binarily assessable in themselves. “Italian reading B1” could be a short description, generated by formula, not separately, and a long description could also be generated automatically combining the descriptions of the language, reading, and the level criteria.
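To illustrate, here is a small sketch of how such combination URIs and short descriptions could be generated by formula. The base URI, segment spellings and number mapping simply follow the example above; none of this is an agreed scheme.

```python
# Construct a combined CEFRL-style URI from language, mode and level,
# following the example template above (all values are illustrative).
BASE = "http://example.eu/CEFRL"
LEVEL_NUMBERS = {"A1": 10, "A2": 20, "B1": 30, "B2": 40, "C1": 50, "C2": 60}

def combination_uri(language_tag: str, mode: str, level: str) -> str:
    return f"{BASE}/{language_tag}/{mode}/{level}#{LEVEL_NUMBERS[level]}"

def short_description(language_name: str, mode: str, level: str) -> str:
    # Short descriptions generated by formula rather than written separately.
    return f"{language_name} {mode} {level}"

print(combination_uri("it", "reading", "B1"))          # http://example.eu/CEFRL/it/reading/B1#30
print(short_description("Italian", "reading", "B1"))   # Italian reading B1
```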

In principle, a similar approach could be taken for any other level system. The defining authority would define URIs for all separate binarily assessable abilities, and publish a full structure expressing how each one relates to each other one. Short descriptions of the combinations could simply combine the titles or short descriptions from each component. No new information is needed to combine a generic level with a specific area. With a new URI to represent the combination, a request for information about that combination can return information already available elsewhere about the generic level and the specific area. If a new URI for the combination is not defined, it is not possible to represent the combination formally. What one can do instead is to note a claim or a requirement for the generic level, and give the particular area in the description. This seems like a reasonable fall-back position.

Relating levels to optionality

Optionality was one of the less obvious features discussed previously, as it does not occur in NOSs. It’s informative to consider how optionality relates to levels.

I’m not certain about this, but I think we would want to say that if a definition has optional parts, it is not likely to be binarily assessable, and that levelled concepts are normally binarily assessable. A definition with optional parts is more likely to be rankable than binary, and it could even fail to be rankably assessable, being merely unordered instead. So, on the whole, defining levels should surely reduce, and ideally eliminate, optionality: levelled concepts should ideally have no optionality, or at least less than the “parent” unlevelled concept.

Proposals

So in conclusion here are my proposals for representing levels, as level-defining relations.

  1. Use of levels: Use levels as one way of relating binarily assessable concepts to rankable ones.
  2. The framework: Define a set of related levels together in a coherent framework. Give this framework a URI identifier of its own. The framework may or may not include definitions of the related unlevelled and levelled concepts.
  3. The unlevelled concept: Where the levels being defined are levels of a more general concept, ensure that unlevelled concept has a URI of its own. In a generic framework, it may not be present.
  4. The levels: Represent each level as a competence concept in its own right, complete with short and long descriptions, and a URI as identifier.
  5. Level numbering: Give each level a number, such that higher levels have higher numbers. Sometimes consecutive numbers from 0 or 1 will work, but if you think judgements of personal ability may lie in between the levels you define, you may want to choose numbers that make good sense to people who will use the levels.
  6. Level labels: If you are trying to represent levels where labels already exist in common usage, record these labels as part of the structured definition of the appropriate level. Sometimes these labels may look numeric, but (as with UK degree classes) the numbers may be the wrong way round, so they really are labels, not level numbers. Labels are optional: if a separate label is not defined, the level number is used as the label.
  7. The level relationships: These should be represented explicitly as part of the framework, either separately or within a hierarchical structure.
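By way of illustration, here is what the pop songwriter example from above might look like with these proposals applied. Every URI, number and label is invented purely for the sketch.

```python
# A sketch framework for the songwriter levels discussed earlier.
# All identifiers are examples only.
framework = {
    "id": "http://example.org/frameworks/pop-songwriting-success",
    "unlevelled_concept": "http://example.org/abilities/pop-songwriting",  # the rankable concept
    "levels": [
        {"id": "http://example.org/levels/songwriter/1", "number": 1, "label": "beginner songwriter"},
        {"id": "http://example.org/levels/songwriter/2", "number": 2, "label": "one hit songwriter"},
        {"id": "http://example.org/levels/songwriter/3", "number": 3, "label": "established songwriter"},
        {"id": "http://example.org/levels/songwriter/4", "number": 4, "label": "successful songwriter"},
        {"id": "http://example.org/levels/songwriter/5", "number": 5, "label": "top flight songwriter"},
    ],
}
```

Each level would also carry its own short and long descriptions (the binary criteria), omitted here for brevity.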

Representing level definitions allows me to add to the diagram that last appeared at the bottom of post 14, showing the idea of what should be there to represent levels. The diagram includes defining level relationships, but not yet attributing levels (which is more like categorising in other ways).

[Diagram: information model including levels]

Later, I’ll go back to the overall concept map to see how the ideas that I’ve been developing in recent months fit in to the whole, and change the picture somewhat. But first, some long-delayed extra thoughts on specificity, questions and answers related to competence.

The logic of competence assessability

(17th in my logic of competence series)

The discussion of NOS in the previous post clearly implicated assessability. Actually, assessment has been on the agenda right from the start of this series: claims and requirements are for someone “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.

“Assessability” isn’t a common word. I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn’t necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.

The contrast between outcomes and behaviours / procedures

One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.

How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written in a related standard. Moreover, I think that this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “means” and “ends” that crops up as a general philosophical topic. To illustrate this, I’ll try giving two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.

You could write outcome standards for a pop songwriter in terms of the song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is perhaps more clear is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that give e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators e.g. in the Guardian are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”

The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)

Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don’t seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)

However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.

What about educational parallels for these, as most of school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.

Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics.

  1. Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts that are not standardised. What may be specified are the incidental constraints, or what to avoid.
  2. Assessment on procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to have little variability in component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, captains of ships.

Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone’s ends may be another’s means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.

Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn’t. It’s all too clear when a passenger aeroplane crashes.

But here’s a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved”, I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kinds of competence?

The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to representing. But maybe it’s no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. Under that reading, it would make little sense for a NOS to be subject to different levels, as it would imply that the level of competence for a particular role is unknown — and in that case it wouldn’t be a standard.

Assessing less assessable concepts

Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning) like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact) or in terms of a set of abilities that you consider make up the particular style of management you are interested in.

These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and what kinds of skills and competences we have practised, which might be useful in work or other situations. It looks to me as though they can be made more assessable in one of a few ways.

  1. Detailing assessable component parts of the concept, in the manner of NOSs.
  2. Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
  3. Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
  4. Using a generic level framework to supply assessable criteria to add to the concept.

Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don’t have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.

[added 2011-09-04] Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.

Assessability and values in everyday life

Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with and use them, but what about for other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time interval and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But, what counts as mistakes depends on one’s values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably, others learn more personal and professional integrity. If someone’s values agree with mine, I am more likely to find them pleasant.

There’s a long discussion here, which I won’t go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client’s values are, and to go along with those to the extent that they are compatible with one’s own values. Where they aren’t, a skill in turning away work needs to be exercised in order to achieve personal integrity.

Conclusions

It’s all clearly a complex topic, more complex indeed than I had reckoned back last November. But I’d like to summarise what I take forward from this consideration of assessability.

  1. Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
  2. Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
  3. Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to prescribed pattern, but may not imply outcomes (though constraints may be able to be formulated as avoidance outcomes).
  4. In real life we want to allow realistic competence structures with any of these features.

In the next post, I’ll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will leave us also clearer about how to represent the less precise, non-assessable concepts.

The logic of National Occupational Standards

(16th in my logic of competence series)

I’ve mentioned NOSs (UK National Occupational Standards) many times in earlier posts in this series (3, 5, 6, 8, 9, 12, 14), but last week I was fortunate to visit a real SSC (Sector Skills Council), Lantra, talk to some very friendly and helpful people there and elsewhere, and reflect further on the logic of NOSs.

One thing that became clear is that NOSs have specific uses, not exactly the same as some of the other competence-related concepts I’ve been writing about. Following this up, on the UKCES website I soon found the very helpful “Guide to Developing National Occupational Standards” (pdf) by Geoff Carroll and Trevor Boutall, written quite recently: March 2010. For brevity, I’ll refer to this as “the NOS Guide”.

The NOS Guide

I won’t review the whole NOS Guide, beyond saying that it is an invaluable guide to current thinking and practice around NOSs. But I will pick out a few things that are relevant: to my discussion of the logic of competence; to how to represent the particular features of NOS structures; and towards how we represent the kinds of competence-related structures that are not part of the NOS world.

The NOS Guide distinguishes occupational competence and skill. Its definitions aren’t watertight, but generally they are in keeping with the idea that a skill is something that is independent of its context, not necessarily in itself valuable, whereas an occupational competence in a “work function” involves applying skills (and knowledge). Occupational competence is “what it means to be competent in a work role” (page 7), and this seems close enough to my formulation “the ability to do what is required”, and to the corresponding EQF definitions. But this doesn’t help greatly in drawing a clear line between the two. What is considered a work function might depend not only on the particularities of the job itself, but also on the detail in which it has been analysed for defining a particular job role. In the end, while the distinction makes some sense, the dividing line still looks fairly arbitrary, which justifies my support for not making a distinction in representation. This seems confirmed also by the fact that, later, when the NOS Guide discusses Functional Analysis (more of which below), the competence/skill distinction is barely mentioned.

The NOS Guide advocates a common language for representing skill or occupational competence at any granularity, ideally involving one brief sentence, containing:

  1. at least one action verb;
  2. at least one object for the verb;
  3. optionally, an indication of context or conditions.

Some people (including M. David Merrill, and following him, Lester Gilbert) advocate detailed vocabularies for the component parts of this sentence. While one may doubt the practicality of ever compiling complete general vocabularies, perhaps we ought at least to allow for the possibility of representing verbs, objects and conditions distinctly for any particular domain, represented in a domain ontology. If it were possible, this would help with:

  • ensuring consistency and comprehensibility;
  • search and cross-referencing;
  • revision.

But it makes sense not to make these structures mandatory, as most likely there are too many edge cases.
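For what it is worth, here is a minimal sketch of how the one-sentence structure could be represented with its parts kept distinct. The class and field names are mine, and the decomposition is optional, as argued above; the plain sentence remains the canonical form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbilityStatement:
    """A brief competence sentence, optionally broken into its parts."""
    text: str                         # the sentence as written
    verbs: list[str]                  # at least one action verb
    objects: list[str]                # at least one object of the verb(s)
    conditions: Optional[str] = None  # optional indication of context or conditions

example = AbilityStatement(
    text="Maintain the separation of aircraft on or near the ground",
    verbs=["maintain"],
    objects=["the separation of aircraft"],
    conditions="on or near the ground",
)
```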

The whole of Section 2 of the NOS Guide is devoted to what the authors refer to as “Functional Analysis”. This involves identifying a “Key Purpose”, the “Main Functions” that need to happen to achieve the Key Purpose, and subordinate to those, the possible NOSs that set out what needs to happen to achieve each main function. (What is referred to in the NOS Guide as “a NOS” has also previously been called a “Unit”, and for clarity I’ll refer to them as “NOS units”.) Each NOS unit in turn contains performance criteria, and necessary supporting “knowledge and understanding”. However, these layers are not rigid. Sometimes, a wide-reaching purpose may be analysed by more than one layer of functions, and sometimes a NOS unit is divided into elements.

It makes sense not to attempt to make absolute distinctions between the different layers. (See also my post #14.) For the purposes of representation, this implies that each competence concept definition is represented in the same way, whichever layer it might be seen as belonging to; layers are related through “broader” and “narrower” relationships between the competence concepts, but different bodies may distinguish different layers. In eCOTOOL particularly, I’ve come to call competence concept definitions, in any layer, “ability items” for short, and I’ll use this terminology from here.

One particularly interesting section of the NOS Guide is its Section 2.9, where attention turns to the identification of NOS units themselves, as the component parts of the Main Functions. In view of the authority of this document, it is highly worthwhile studying what the Guide says about the nature of NOS units. Section 2.9 directly tackles the question of what size a NOS should be. Four relevant points are made, of which I’ll distinguish just two.

First, there is what we could call the criterion of individual activity. The Guide says: “NOS apply to the work of individuals. Each NOS should be written in such a way that it can be performed by an individual staff member.” I look at this both ways for complementary views. When two aspects of a role may reasonably and justifiably be performed separately by separate individuals, there should be separate NOS units. Conversely, when two aspects of a role are practically always performed by the same person, they naturally belong within the same NOS unit.

Second, I’ve put together manageability and distinctness. The Guide says that, if too large, the “size of the resulting NOS … could result in a document that is quite large and probably not well received by the employers or staff members who will be using them”, and also that it matters “whether or not things are seen as distinct activities which involve different skills and knowledge sets.” These seem to me both to be to do with fitting the size of the NOS unit to human expectations and requirements. In the end, however, the size of NOS units is a matter of good practice, not formal constraint.

Section 3 of the NOS Guide deals with using existing NOS units, and given the good sense of reuse, it seems right to discuss this before detailing how to create your own. The relationship between the standards one is creating and existing NOS units could well be represented formally. Existing NOS units may be:

  • “imported” as is, with the permission of the originating body
  • “tailored”, that is modified slightly to suit the new context, but without any substantive change in what is covered (again, with permission)
  • used as the basis of a new NOS unit.

In the first two cases, the unit title remains the same; but in the other case, where the content changes, the unit title should change as well. Interestingly, there seems to be no formal way of stating that a new NOS unit is based on an existing one, but changed too much to be counted as “tailored”.

Section 4, on creating your own NOSs, is useful particularly from the point of view of formalising NOS structures. The “mandatory NOS components” are set out as:

  1. Unique Reference Number
  2. Title
  3. Overview
  4. Performance Criteria
  5. Knowledge and Understanding
  6. Technical Data

and I’ll briefly go over each of these here.

It would be so easy, in principle, to recast a Unique Reference Number as a URI! However, the UKCES has not yet mandated this, and no SSC seems to have taken it up either. (I’m hoping to persuade some.) If a URI was also given to the broader items (e.g. key purposes and main functions) then the road would be open to a “linked data” approach to representing the relationships between structural components.

Title is standard Dublin Core, while Overview maps reasonably to dcterms:description.

Performance criteria may be seen as the finest granularity ability items represented in a NOS, and are strictly parts of NOS units. They have the same short sentence structure as both NOS units and broader functions and purposes. In principle, each performance criterion could also have its own URI. A performance criterion could then be treated like other ability items, and further analysed, explained or described elsewhere. An issue for NOSs is that performance criteria are not identified separately, and therefore there is no way within a NOS structure to indicate similarity or overlap between performance criteria appearing in different NOS units, whether or not the wording is the same. On the other hand, if NOS structures could give URIs to the performance criteria, they could be reused, for example to suggest that evidence within one NOS unit would provide also useful evidence within a different NOS unit.

Performance criteria within NOS units need to be valid across a sector. Thus they must not embody methods, etc., that are fine for one typical employer but wrong for another. They must also be practically assessable. These are reasons for avoiding evaluative adverbs, like the Guide’s example “promptly”, which may be evaluated differently in different contexts. If there are going to be contextual differences, they need to be more clearly signalled by referring e.g. to written guidance that forms part of the knowledge required.

Knowledge and understanding are clearly different from performance criteria. Items of knowledge are set out like performance criteria, but separately in their own section within a NOS unit. As hinted just above, the inclusion of explicit knowledge can mean that a generalised performance criterion can often work if the knowledge dependent on context is factored out, in places where there would otherwise be no common approach to assessment.

In principle, knowledge can be assessed, but the methods of assessment differ from those of performance criteria. Action verbs such as “state”, “recall”, “explain”, “choose” (on the basis of knowledge) might be introduced, but perhaps are not absolutely essential, in that a knowledge item may be assessed on the basis of various behaviours. Knowledge is then treated (by eCOTOOL and others) as another kind of ability item, alongside performance criteria. The different kinds of ability item may be distinguished — for example following the EQF, as knowledge, skills, and competence — but there are several possible categorisations.

The NOS Guide gives the following technical data as mandatory:

  1. the name of the standards-setting organisation
  2. the version number
  3. the date of approval of the current version
  4. the planned date of future review
  5. the validity of the NOS: “current”; “under revision”; “legacy”
  6. the status of the NOS: “original”; “imported”; “tailored”
  7. where the status is imported or tailored, the name of the originating organisation and the Unique Reference Number of the original NOS.

These could very easily be incorporated into a metadata schema. For imported and tailored NOS units, a way of referring to the original could be specified, so that web-based tools could immediately jump to the original for comparison. The NOS Guide goes on to give more optional parts, each of which could be included in a metadata schema as optional.
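As a rough sketch of what such a schema could look like, here is one NOS unit expressed as a simple record. The URI, the key names and the dcterms mappings for title and description follow the discussion above; everything else, values included, is invented for illustration.

```python
# Illustrative metadata record for one NOS unit; all values are examples.
nos_unit = {
    "uri": "http://example.org/nos/ABC123",   # hypothetical recasting of the Unique Reference Number
    "dcterms:title": "Maintain the separation of aircraft on or near the ground",
    "dcterms:description": "Overview text for the unit ...",
    "performance_criteria": [                  # each criterion could have its own URI
        "http://example.org/nos/ABC123/pc/1",
        "http://example.org/nos/ABC123/pc/2",
    ],
    "knowledge_and_understanding": [
        "http://example.org/nos/ABC123/k/1",
    ],
    "technical_data": {
        "standards_setting_organisation": "Example SSC",
        "version": "1",
        "approved": "2010-03-01",
        "planned_review": "2013-03-01",
        "validity": "current",      # current | under revision | legacy
        "status": "original",       # original | imported | tailored
        "original_unit": None,      # originating body and reference, when imported or tailored
    },
}
```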

Issues emerging from the NOS Guide

One of the things that is stressed in the NOS Guide (e.g. page 32) is that the Functional Analysis should result in components (main functions, at least) that are both necessary and sufficient. That’s quite a demand — is it realistic, or could it be characterised as reductionist?

Optionality

The issue of optionality has been covered in the previous post in this series. Clearly, if NOS structures are to be necessary and sufficient, logically there can be no optionality. It seems that, practically, the NOS approach avoids optionality in two complementary ways. Some options are personal ways of doing things, at levels more finely grained than NOS units. Explicitly, NOS units should be written to be inclusive of the diversity of practice: they should not prescribe particular behaviours that represent only some people’s ways of doing things. Other options involve broader granularity than the NOS unit. The NOS Guide implies this in the discussion of tailoring. It may be that one body wants to create a NOS unit that is similar to an existing one. But if the “demand” of the new version NOS unit is not the same as the original, it is a new NOS unit, not a tailored version of the original one.

The NOS Guide does not offer any way of formally documenting the relationship between variant ways of achieving the same aim, or function (other than, perhaps, simple reference). This may lead to some inefficiencies down the line, when people recognise that achieving one NOS unit is really good evidence for reaching the standard of a related NOS unit, but there is no general and automatic way of documenting that or taking it into account. We should, I suggest, be aiming at an overall structure, and strategy, that documents as much relationship as we can reliably represent. This suggests allowing for optionality in an overall scheme, but leaving it out for NOSs.

Levels and assessability

The other big issue is levels. The very idea of level is somehow anathema to the NOS view. A person either has achieved a NOS, and is competent in the area, or has not yet achieved that NOS. There is no provision for grades of achievement. Compare this with the whole of the academic world, where people almost always give marks and grades, comparing and ranking people’s performance. The vocational world does have levels — think of the EQF levels, that are intended for the vocational as well as the academic world — but often in the vocational world a higher level is seen as the addition of other separate skills or occupational competences, not as improving levels of the same ones.

A related idea came to me while writing this post. NOSs rightly and properly emphasise the need to be assessable — to have an effective standard, you must be able to tell if someone has reached the standard or not — though the assessment method doesn’t have to be specified in advance. But there are many vaguer competence-related concepts. Take “communication skills” as a common example. It is impossible to assess whether someone has communication skills in general, without giving a specification of just what skills are meant. Every wakeful person has some ability to communicate! But we frequently see cases where that kind of unassessably vague concept is used as a heading around which to gather evidence. It does make sense to ask a person about evidence for their “communication skills”, or to describe them, and then perhaps to assess whether these are adequate for a particular job or role.

But then, thinking about it, there is a correspondence here. A concept that is too vague to assess is just the kind of concept for which one might define (assessable) levels. And if a concept has various levels, it follows that whether a person has the (unlevelled) concept cannot be assessed in the binary way of “competent” and “not yet competent”. This explains why the NOS approach does not have levels, as levels would imply a concept that cannot be assessed in the required binary way. Rather than call unlevelled concepts “vague”, we could just call them something like “not properly assessable”, implying the need to add extra detail before the concept becomes assessable. That extra detail could be a whole level scheme, or simply a specification of a single-level standard (i.e. one that is simply reached or not yet reached).

In conclusion, I cannot see a problem with specifying a representation for skill and competence structures that includes non-assessable concepts, along with levels as one way of detailing them. The “profile” for NOS use can still explicitly exclude them, if that is the preferred way forward.

Update 2011-08-22 and later

After talking further with Geoff Carroll I’ve clarified above that NOSs are to do specifically with occupational competence rather than, e.g., learning competence. And having been pushed into this particular can of worms, I’d better say more about assessability to get a clear run-up to levels.