Other competence attributes

(5th in my logic of competence series)

Beyond levels, there are still many aspects to the abstractions that can be seen in competence definitions and structures in common use. What else can we identify? We could think of these as potential attributes of competence, though the term “attributes” is far from ideal.

Just as levels don’t appear in competence definitions themselves, when an attribute is abstracted from a competence definition it usually does not appear explicitly. But unlike levels, which are set out plainly in frameworks, these other abstracted features take some alertness to spot.

I started this series by saying that the logical starting points for talking about competence are the claim and the requirement. And indeed, we see many implicit or explicit claims to competence with plenty of detail — for example in the form of covering letters accompanying CVs in support of applications for opportunities. Following this through, it is relevant to consider what else could be said in support of a claim to competence — that goes into more detail than, say, official recognition that a particular standard of competence has been reached.

Imagine that you had done a course on “production horticulture” (the topic mentioned in previous entries) and received a certificate, perhaps even with an attached Europass Certificate Supplement describing the “skills and competences” that you are expected to have acquired by the end of this course. Alternatively, your certificate may not have come as a direct result of a course, but instead from “APEL” — the accreditation of prior experiential learning. That would mean that you had plenty of experience as a horticulturalist, and it had been assessed that you had the skills and competence covered for a particular certificate that covers production horticulture. Now, if you were applying for a job, as well as citing your certificates, maybe attaching the information in any supplements, and stating the level of competence you believe you have achieved, what else would you be likely to want to say to a prospective employer, e.g. in a covering letter, or at an interview?

The most immediately obvious extra feature of one’s own experience might be, for horticulture, what kind of crops you have experience in growing. Stating this is a natural consequence of the fact that the LANTRA standards do not refer to the kind of crops. Next, the NOS documentation explicitly mentions indoor (perhaps greenhouse) and outdoor growing, though these terms are not used in the actual definitions. Which do you have experience of? Or, broadening that out, what kind of farms have you worked on? Soon afterwards, the documentation goes on to talk about equipment, without mentioning the types of equipment explicitly. Can you drive a tractor? Beyond this, I’m ignorant about what kinds of specialist equipment are used for different kinds of cultivation, but more relevant questions could be asked, as it might be important to know whether someone is experienced in using whatever equipment is used in the job being offered. Most workers these days will not be experienced in using hand ploughs or ox-drawn ploughs… And after equipment, the list of attributes that are abstracted out — left out from documented competence definitions — continues.

Some differences in skills and competence are less significant, and you may not need to mention them explicitly, because it is understood that any farmhand could pick up the ability quickly. It is the skills that take longer to learn that will be more significant for recruitment purposes, and more likely to be mentioned in a covering letter or checked at interview.

One of the key facts to recognise here is that the boundary is essentially arbitrary between, on the one hand, what is specified in documentation like the LANTRA NOSs, and on the other hand, what is left for individuals to fill in. Where the boundary is set depends entirely on the authority. While LANTRA standards do not specify particular crops or equipment, we could imagine that there was a professional association of, say, strawberry growers, that published a set of more detailed standards about what skills and competences are needed to grow just strawberries. Quite possibly that would mention the specific equipment that is used in strawberry growing. (As it happens, there is a Florida Strawberry Growers Association, but it doesn’t seem to set out competence standards.)

Different occupational areas will reveal a similar pattern. It is unlikely, for instance, that ICT skills standards (e.g. SFIA, e-CF) will specify particular programming languages, but it is one attribute that programmers regularly mention in their CVs or covering letters when seeking employment. Or, take the case of health professionals. There are several generic types of situations with different demands. What “is required” of a competent health professional may well differ between first aid situations with no equipment; at the scene of accidents; in emergency units in hospitals; and for outpatient clinics. Some skills or abilities may be present in different forms in these different situations, and we can perhaps imagine someone mentioning in a covering letter the kinds of situation in which they had experience, if these were particularly relevant to a post applied for.

It should be clear at this point that extra detail claimed will naturally fill in for what is left out of the skills or competence standard documentation. But because what one set of documentation leaves out, another may equally leave in, it would make a great deal of sense for industry sectors to set out a full appropriate terminology for their domain — a “domain ontology” if you like. (Though don’t be using that “O” word in the wrong company…) Those terms may then be used either within competence definitions, or by individuals to supplement the competence definitions within their own claims. Typically we could expect common industry terms to include a set of roles, and a range of tools, equipment or materials, but of course, these sets will differ between occupations. They may also differ between different countries, cultures and jurisdictions. As well as roles and equipment, any occupational area could easily have its own set of terms that has no corresponding set in other areas. We saw this above with indoor and outdoor growing. For plumbing, there are, for example, compression fittings and soldered fittings. For musicians, there are different genres of music each with their own style. For builders, building methods and parts differ between different countries, so could be documented somewhere. And so on.

There is a very wide range of particular attributes that could be dealt with in this way, but it is probably worth mentioning a few particular generic concepts that may be of special interest. First, let us consider context. For standard descriptions of competence, it will be the contexts that are met with repeatedly that are of interest, because it is those where experience may be gained and presented that may match with a job requirement. To call something a context, all that is needed is to be able to say that a skill was learned, or practiced, in the context of — whatever the context is. A context could be taken as a set of conditions that in some way frame, or surround, or define, a kind of situation of practice. If we have a good domain ontology, we would expect to find the common contexts of the domain formulated and documented.

Second, what about conditions? We can refer back to more informal usage, where someone might say, for instance, I can plough that field, or write that program, as long as I have this particular tool (agricultural or software). It makes a lot of sense to say that this is a condition of the competence. Conditions can really be almost anything one can think of that can affect performance of a task. As suggested in the discussion of context, a set of stable and recognisable conditions could be taken to constitute a context. But the term “conditions” generally seems to be wider. It means, literally, anything that I can “say” that affects the competence. As such, we are probably more likely to meet conditions in the clarification of an individual claim than in standard competence documentation. That still means that there is value in assembling terminology for any conditions that are understandable to many people in a domain. It may be that a job requirement specifies conditions that are not in the standard competence definitions, and if those conditions are in a domain ontology, they can potentially be matched automatically to the claims of individuals referring to the same conditions.

Assessment methods should also specify conditions under which the assessment is to take place. The relevance of an assessment may depend at least partly on how closely the conditions for the assessment reflect the conditions under which relevant tasks are performed. And talking about assessment, it is perhaps worth pointing out that, though assessment criteria are logically separate from the definitions of the skills and competence that are being assessed, there is still a fluid boundary between what is defined by the competence documentation, what is claimed by an individual, and what appears as an assessment condition or criterion. The conditions of an assessment may add detail to a competence in such a way that the individual no longer needs to detail something in a claim. An assessment criterion may fairly obviously point to a level, but, given that a level is also sometimes wrapped in with a competence definition, the criterion may take over something of the competence definition itself. It would be expected that assessment criteria also use the same domain terminology as can be used, both for competence definitions, and within claims.

If the picture that emerges is rather confused, that seems unfortunately realistic. The fluid boundaries that I have discussed here are perhaps a natural result of the desire to specify and detail skill and competence in whatever way is most convenient, but that does not add any clarity to distinctions between context, conditions, criteria, levels, and other possible features or attributes of competence. On the other hand, this lack of clarity makes it paradoxically easier to represent the information. If we have no clear distinction between these different concepts, then we can use a minimal number of ways of representing them.

So, how should competence attributes, including context, conditions and criteria, be represented?

  1. To do this most usefully, a domain ontology / classification / glossary / dictionary needs to exist. It doesn’t matter what it is called, but it does matter that each term is defined, related where possible to the other terms, and given a URI. This doesn’t need to be a monolithic ontology. It could be just a set of relevant defined terms in vocabularies. And there is every reason to reuse common terms, vocabularies and classification schemes across different domains.
  2. There is one major logical distinction to be made. Some terms are strictly ordered on a scale: these are levels or like levels. Other terms are not on a scale, and are not ordered. These are all the rest, covering what has been discussed above as context, conditions, criteria.
  3. Competence definitions, assessment specifications, job requirements and individual claims can all use this set of domain related terms. The more thoroughly this is done, the more possibilities there will be to do automatic matching, or at least for the ICT systems to be as helpful as possible when people are searching for jobs, when employers are searching for people, or anything related.

Having sorted out this much, we are free to consider the basic structures into which competence concepts and definitions seem to fit.

Levels of competence

(4th in my logic of competence series)

Specifications, to gain acceptance, have to reflect common usage, at least to a reasonable degree. The reason is not hard to see. If a specification fails to map common usage in an understandable way, people using it will be confused, and could try to represent common usage in unpredictable ways, defeating interoperability. The abstractions that are most important to formalise clearly are thus those in common usage.

It does seem to be very common practice that competence in many fields comes to be described as having levels. The logic of competence levels is very simple: a higher level of competence subsumes — that is, includes — lower levels of the same competence. In any field where competence has levels, in principle this allows graded claims, where there may be a career progression from lower to higher level, along with increasing knowledge, practice, and experience. Individuals can claim competence at a level appropriate to them; if a search system represents levels of competence effectively, employers or others seeking competent people will not miss people whose level of competence is greater than the one they give as the minimum.

For example, the Skills Framework for the Information Age (SFIA) is a UK-originated framework for the IT sector, founded in 2003 by a partnership including the British Computer Society. This gives 7 “levels of responsibility”, and different roles in the industry are represented at one or more levels. The level labels are: 1 Follow; 2 Assist; 3 Apply; 4 Enable; 5 Ensure, advise; 6 Initiate, influence; 7 Set strategy, inspire, mobilise. These levels are given fuller general definitions in terms of degrees of autonomy, influence, complexity, and business skills. There are around 87 separate skills defined, and for each skill, there is a description of what is expected of this skill at each defined level — of which there are between 1 and 6.

The European e-Competence Framework (e-CF), on which work began in 2007, was influenced by SFIA, but has just 5 “proficiency levels” simply termed e-1 to e-5. The meaning of each level is given within each e-Competence. There are 36 e-competences, grouped into 5 areas.

The e-CF refers to the cross-subject European Qualifications Framework, which has 8 levels. Level e-1 corresponds to EQF level 3; e-2 to EQF 4 and 5; e-3 to EQF 6; e-4 to EQF 7; and e-5 to EQF 8. However, the relationships between e-CF and SFIA, and between SFIA and EQF, are not as clear cut. The EQF gives descriptors for each of three categories at each level: “Knowledge”, “Skills”, and “Competence”: that is, 24 descriptors in all.
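The correspondence just listed is easily tabulated. Note that because e-2 spans two EQF levels, the mapping is a relation rather than a one-to-one function — one reason why cross-framework comparison can only ever be approximate. A minimal sketch (the `eqf_for` helper is illustrative only):

```python
# e-CF proficiency level -> corresponding EQF level(s), as listed above.
ECF_TO_EQF = {
    "e-1": (3,),
    "e-2": (4, 5),  # one e-CF level spans two EQF levels
    "e-3": (6,),
    "e-4": (7,),
    "e-5": (8,),
}

def eqf_for(ecf_level: str) -> tuple:
    """Look up the EQF level(s) corresponding to an e-CF level."""
    return ECF_TO_EQF[ecf_level]

print(eqf_for("e-2"))  # (4, 5)
```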

This small selection of well-developed frameworks is enough to show conclusively that there is no universally agreed set of levels. In the absence of such agreement, levels only make sense in terms of the framework that they belong to. All these frameworks give descriptors of what is expected at each level, and the process of assigning a level will essentially be a process of gauging which descriptor best fits a particular person’s performance in a relevant setting. While this is not a precise science, the kinds of descriptors used suggest that there might be a reasonable degree of agreement between assessors about the level of a particular individual in a particular area.

For comparison, it is worth mentioning some other frameworks. (Here are just two more to broaden the scope of the examples; but there are very many others throughout the professions, and in learning, education and training.)

In the UK, the National Health Service has a Knowledge and Skills Framework (NHS KSF) published in 2004. It is quite like the e-CF in structure, in that there are 30 areas of knowledge and skill (called, perhaps confusingly, “dimensions”), and for each “dimension” there are descriptors at four levels, from the lowest 1 to the highest 4. As with all level structures, higher level competence in one particular “dimension” seems to imply coverage of the lower levels, though a level on one “dimension” has no obvious implication about levels in other “dimensions”.

A completely different application of levels is seen in the Europass Language Passport. This offers 6 levels for each of 5 linguistic areas, as a way of self-assessing the levels of one’s linguistic abilities. The areas are: listening; reading; spoken interaction; spoken production; and writing. The levels are in three groups of two: basic user A1 and A2; independent user B1 and B2; proficient user C1 and C2. At each level, for each area, there is a descriptor of the ability in that area at that level. That is 30 different descriptors. All of this applies equally to any language, so the particular languages do not need to appear in the framework.

Overall, there is a great deal of consistency in the kind of ways in which levels are described and used. Given that they have been in use now for many years, it makes clear sense for any competence structure to take account of levels, by allowing a competence claim, or a requirement, to specify a level as a qualifier to the area of competence, with that level tied to the framework to which it belongs, and where it is defined in terms of a descriptor. This use of level will at least make processing of competence information a little easier.
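A level tied to its framework might be sketched in code as follows: levels are compared only within a single framework’s scale, and a higher level subsumes a lower one. The framework identifiers, scales, and `meets` function are all hypothetical, for illustration.

```python
from dataclasses import dataclass

# Hypothetical framework scales, lowest to highest, keyed by a
# framework identifier that every level value must carry.
FRAMEWORK_SCALES = {
    "SFIA": ["1", "2", "3", "4", "5", "6", "7"],
    "e-CF": ["e-1", "e-2", "e-3", "e-4", "e-5"],
}

@dataclass(frozen=True)
class Level:
    framework: str  # identifies the scale the value belongs to
    value: str      # a position on that scale

def meets(claimed: Level, required: Level) -> bool:
    """A claimed level satisfies a required one only within the same
    framework; on that scale, a higher level subsumes lower ones."""
    if claimed.framework != required.framework:
        raise ValueError("levels from different frameworks "
                         "are not directly comparable")
    scale = FRAMEWORK_SCALES[claimed.framework]
    return scale.index(claimed.value) >= scale.index(required.value)

print(meets(Level("SFIA", "5"), Level("SFIA", "3")))  # True
```

Raising an error on a cross-framework comparison, rather than guessing, reflects the point above: without an agreed mapping, a level is meaningless outside its own framework.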

But beyond level it seems to get harder. The next topic to be covered will be other attributes including conditions or context of competence.

Analysis and structure of competence

(3rd in my logic of competence series)

I have suggested that the natural way of identifying competence concepts relates to the likely correlation of “the ability to do what is required” between different tasks and situations that may be encountered, requiring similar competence. Having identified an area of competence in this way, how could it best be analysed and structured?

First, we should make a case that analysis is indeed needed. Without analysis of competence concepts, we would have to assume that going through any relevant education, training or apprenticeship, leading to recognition, or a relevant qualification, gives people everything they need for competence in the whole area. If this were true, distinguishing between, say, the candidates for a job would not be on the basis of an analysis of their competence, but on the basis of personal attributes, or reputation, or recommendation. While this is indeed how people tend to handle getting a tradesperson to do a private job, it seems unlikely that it would be appropriate for specialist employees. Thus, for example, many IT employers do not just want “a programmer”, but one who has experience or competence in particular languages and application areas.

On the other hand, it would not be much use only to recruit people who had experience of exactly the tasks or roles required. For a new role, there will naturally not be anyone with that exact prior experience. And equally obviously, people need to develop professionally, gaining new skills. So we need ways of measuring and comparing ability that are not just in terms of time served on the job. In any case, time served on a job is not a reliable indicator of competence. People may learn from experience at different rates, as well as learning different things, even from the same experience. This all points to the need to analyse competence, but how?

We should start by recognising the fact that there are at present no universally accepted rules for how to analyse competence concepts, or what their constituent parts should look like. Instead of imagining some ideal a priori analytical scheme, it is useful to start by looking at examples of how competence has been analysed in practical situations. First, back to horticulture…

The relevant source materials I have to hand happen to be the UK National Occupational Standards (NOSs) produced by LANTRA (UK’s Sector Skills Council for land-based and environmental industries). The “Production Horticulture” NOSs have 16 “units” specific to production horticulture, such as “Set out and establish crops”, “Harvest and prepare intensive crops”, and “Identify and classify plants accurately using their botanical names”. Alongside these specialist units, there are 21 other units either borrowed from, or shared with, other NOSs, such as “Monitor and maintain health and safety”, “Receive, transmit and store information within the workplace”, and “Provide leadership for your team”. At this “unit” level, the analysis of what it takes to be good at production horticulture seems understandable, with a good degree of common sense. Most areas of expertise can be broken down in this way to the kind of level where one sees individual roles, jobs or tasks that could in principle be allocated to different people. And there is often a logic to the analysis: to get crops, you have to prepare the ground, then plant, look after, and harvest the crops. That much is obvious to anyone. More detailed, less obvious analysis could be given by someone with relevant experience.

Even at this level of NOS units, there is some abstraction going on. LANTRA evidently chose not to create separate units or standards for growing carrots, cabbages and strawberries. Going back to the ideas on competence correlation, we infer that there is much in common between competence at growing carrots and strawberries, even if there are also some differences. This may be where “knowledge” comes into play, and why occupational standards seem universally to have knowledge listed as well as skills. If someone is competent at growing carrots, then perhaps simply their knowledge of what is different between growing carrots and growing strawberries goes much of the way towards their competence in growing strawberries. But how far? That is less clear.

Abstraction seems to be even more extensive at lower levels. Taking an arbitrary example, the first, fairly ordinary unit in “Production Horticulture” is “Clear and prepare sites for planting crops”, and is subdivided into two elements, PH1.1 “Clear sites ready for planting crops” and PH1.2 “Prepare sites and make resources available for planting crops”. PH1.2 contains lists of 6 things that people should be able to do, and 9 things that they should know. The second item in the list of things that people need to be able to do is “place equipment and materials in the correct location ready for use”, which self-evidently requires a knowledge of what the correct location is. The fifth item is to “keep accurate, legible and complete records”. This is supported by an explicit knowledge requirement, documented as “the records which are required and the purpose of such records”.

This is quite a substantial abstraction, as these examples could make equal sense in a very wide range of occupational standards. In each case, the exact nature of these abilities needs to be filled out with the relevant details from the particular area of application. But no formal structure is given for these abstractions, here or, as far as I know, in any occupational standard, and this leads to problems.

For example, there is no way of telling, from the standard documentation, the extent to which proving the ability to keep accurate records in one domain is evidence of the ability to keep accurate records in another domain; and indeed no way is provided to document views about the relationship between various record-keeping skills. When describing wide competences, this may be somewhat less of a problem, because when two skills or competences are analysed explicitly, one can at least compare their documented parts to arrive at some sense of the degree of similarity, and the degree to which competence in one might predict competence in another. But at the narrowest, finest-grained level documented — in the case of NOSs, the analysis of a unit or element into items of skill and items of knowledge — it means that, though we can see the abstractions, it is not obvious how to use them, and in particular it is not clear how to represent them in information systems in a way that they might be automatically compared, or otherwise managed.

There has been much written, speculatively, about how competence descriptions and structures might effectively be used with information systems, for example acting as the common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other. But to make this effective in practice, we need to get to grips properly with these questions of abstraction, structure and representation, to move forward from the common sense but informal abstractions and loose structures presently in use, to a more formally structured, though still flexible and intuitive approach.

The next two blog entries will attempt to explore two possible aspects of formalisation: level, and other features often left out from competence definitions, including context or conditions.

Competence concepts and competence transfer

(2nd in my logic of competence series)

If we take competence as the ability to do what is required in a particular situation, then there is a risk that competence concepts could proliferate wildly. This is because “what is required” is rarely exactly the same in different kinds of situations. Competence concepts group together the abilities to do what is required in related situations, where there is at least some correlation between the competence required in the related situations — sometimes talked about in terms of transfer of competence from one situation to another.

For example, horticulture can reasonably be taken as an area of competence, because if one is an able horticulturalist in one area — say growing strawberries — there will be some considerable overlap in one’s ability in another, less practiced area — say growing apples. Yes, there are differences, and a specialist in strawberries may not be very good with apples. But he or she will probably be much better at it than a typical engineer. Surgery might be a completely different example. A specialist in hip replacements might not be immediately competent in kidney transplants, but the training necessary to achieve full competence in kidney transplants would be very much shorter than for a typical engineer.

Some areas of competence, often known as “key skills”, appear across many different areas of work, and probably transfer well. Communication skills, team working skills, and other areas at the same level play a part in full competence of many different roles, though the communication skills required of a competent diplomat may be at a different level to those required of a programmer. Hence, we can meaningfully talk about skill, or competence, or competency, in team work. But if we consider the case of “dealing with problems” (and that may reasonably be taken as part of full competence in many areas) there is probably very little in common between those different areas. We therefore do not tend to think of “dealing with problems” as a skill in its own right.

But we do recognise that the competence in dealing with problems in, say, horticultural contexts shares something in common, and when someone shows themselves able to deal with problems in one situation, probably we only need to inform them of what problems may occur and what action they are meant to take, and they will be able to take appropriate actions in another area of horticulture. As people gain experience in horticulture, one would expect that they would gain familiarity with the general kinds of equipment and materials they have to deal with, although any particularly novel items may need learning about.

Clearing and preparing sites for crops may well have some similarity to other tasks or roles in production horticulture and agriculture more generally, but is unlikely to have much in common with driving or surgery. The more skills or competences in two fields have in common, the more that competence in one field is likely to transfer to competence in another.

So, we naturally accept competence concepts as meaningful, I’m claiming, in virtue of the fact that they refer to types of situation where there is at least some substantial transfer of skill between one situation and another. The more that we can identify transfer going on, the more naturally we are inclined to see it as one area of competence. Conversely, to the extent to which there is no transfer, we are likely to see competences as distinct. This way of doing things naturally supports the way we informally deal with reputation, which is generally done in as general terms as seems to be adequate. Though this failure to look into the details of what we mean to require does lead to mistakes. How did we not know that the financial adviser we took on didn’t know about the kind of investments we really wanted, or was indeed less than wholly ethical in other ways?

Having a clearer idea of what a competence is prepares the way for thinking more about the analysis and structure of competence.

The basis of competence ideas

(1st in my logic of competence series)

Let’s start with a deceptively simple definition. Competence means the ability to do what is required. It is the unpacking of “what is required” that is not simple.

I don’t want to make any claims for that particular form of words — there are any number of definitions current, most of them quite reasonable in their own way. But, in the majority of definitions, you can pick out two principal components: here, they are “the ability to do” and “what is required”. Rowin’s and my earlier paper does offer some other reasonable definitions of what competence means, but I wanted here to start from something as simple-looking as possible.

If the definition is to be helpful, “the ability to do” has to be something simpler than the concept of competence as a whole. And there are many statements of basic, raw ability that would not normally be seen as amounting to competence in any distinct sense. The answers to questions like “can you perform this calculation in your head”, “can you lift this 50 kg weight” and “can you thread this needle” are generally taken as matters of fact, easily testable by giving people the appropriate equipment and seeing if they can perform the task.

What does “what is required” mean, then? This is where all the interest and hidden complexity arises. Perhaps it is easiest to go back to the basic use of competence ideas in common usage. For a job — with an employer, perhaps, or just getting a tradesperson to fix something — “what is required” is that the person doing the job is competent at the role he or she is taking on. Unless we are recruiting someone, we don’t usually think this through in any detail. We just want “a good gardener”, or to go to “a good dentist” without knowing exactly what being good at these roles involves. We often just go on reputation: has that person done a good job for someone we know? would they recommend them?

The idea is similar from the other point of view. If I want a job as a gardener or a dentist, at the most basic level I want to claim (and convince people) that I am a good gardener, or a good dentist. Exactly what that involves is open to negotiation. What I’m suggesting is that these are the absolute basics in common usage and practice of concepts related to competency. It is, at root, all about finding someone, or claiming that one is the kind of person, that fulfils a role well, according to what is generally required.

People claim, or require, a wide range of things that they “can do” or “are good at”. At the most familiar end of the spectrum, we think of people’s ability or competence for example at cooking, housework, child care, driving, DIY. There are any number of sports and pastimes that people may be more or less good at. At the formal and organisational end of the spectrum, we may think of people as more or less good at their particular role in an organisation — a position for which they may be employed, and which might consist of various sub-roles and tasks. The important point to base further discussion on is that we tend normally to think about people in these quite general terms, and people’s reputation tends to be passed on in these quite general terms, often without explicit analysis or elaboration, unless specific questions are raised.

When either party asks more specific questions, as might happen in a recruitment situation, it is easy to imagine the kind of details that might come up. Two things may happen here. First, questions may probe deeper than the generic idea of competence, to the specifics of what is required for this particular job or role. And second, the issue of evidence may come up. I’ll address these questions later, but next I want to discuss how competence concepts are identified in terms of transferability.

But the point I have made here is that all this analysis is secondary. Because common usage does not rely on it, we must take the concept of competence as resting primarily just on the claim and on the requirement for a person to fill a role.

The logic of competence

This is a note introducing a series of posts setting out the logic of competence as I see it. I will link from here to other posts in the series as I write them.

This work as a whole is intended to feed in to several activities in which I have been taking part, including InLOC, eCOTOOL, ICOPER, MedBiquitous Competencies WG, Competence Structures for E-Portfolio Tools, and the CEN WS-LT Competency SIG, which had its 3rd annual meeting in Berlin near the beginning of the series. It builds on and complements Rowin’s and my earlier paper, intending not to set out an academic case, which we did in that paper, but rather the detailed logic, which can be evaluated on its own terms, requiring reference only to common language and practice.

The first step is to express a working definition, and a logical basis for further discussion, which is that it is expressions like claims to competence, rather than competency definitions, that are logically prior. See № 1, “The basis of competence ideas”.

I will continue by considering (please click to go to the posts)

  1. how transferability gives a competence concept its logical identity
  2. how the analysis of just what a competence claim is claiming results in various possible structures for the competence-related concepts
  3. how to make sense of levels of competence
  4. how to make sense of criteria, conditions or context
  5. basic tree structuring of competence concepts
  6. desirable variants of tree structures (including more on levels)
  7. representing the commonality in different structures of competence
  8. other less precise cross-structure relationships
  9. definitions, and a map, of several of the major concepts used, together with logically related ones.

Continuing towards practical implementations:

  1. the requirements for implementing the logic of competence
  2. representing the interplay between concept definitions and structures
  3. representing structural relationships
  4. different ways of representing the same logic
  5. optional parts of competence
  6. the logic of National Occupational Standards
  7. the logic of competence assessability
  8. representing level relationships
  9. more and less specificity in competence definitions
  10. the logic of tourism as an analogy for competence
  11. The pragmatics of InLOC competence logic
  12. InLOC as a cornerstone for other initiatives
  13. InLOC and open badges: a reprise
  14. Open frameworks of learning outcomes
  15. Why frameworks of skill and competence?
  16. How to do InLOC
  17. The key to competence frameworks

I will try, where possible, to motivate and illustrate each point by reference to examples, drawn from existing published materials.

After all the parts have been published and discussed, I intend to put together a full paper (placement as yet undecided) incorporating and crediting ideas from other people — so please contribute these, ideally as comments on the posts themselves, or alternatively just to me.

Later addition, February 2011: I recognise that some of these posts are more than just bite sized. Are there some that you find too much of a mouthful to chew and/or swallow? Might that hold you back from commenting? If so, here is an offer: get in touch with me and I will talk you through any of this material you are interested in, while at the same time I will try to understand where you are coming from, and what is easier or harder for you to grasp. That will help me to express myself more clearly and simply, where I have not yet achieved clarity. I hope this will help!

Development of a conceptual model 5

This conceptual model now includes basic ideas about what goes on in the individual, plus some of the most important concepts for PDP and e-portfolio use, as well as the generalised formalisable concepts and processes surrounding individual action. It has come a long way since the last time I wrote about it.

The minimised version is here, first… (I recommend viewing the images below separately, perhaps with a right-click)

eurolmcm25-min3

and that is complex enough, with so many relationship links looking like a bizarre and distorted spider’s web. Now for the full version, which is quite scarily complex…

eurolmcm25

Perhaps that is the inevitable way things happen. One thinks some more. One talks to some more people. The model grows, develops, expands. The parts connected to “placement processes” were stimulated by Luk Vervenne’s contribution to the workshop in Berlin of my previous blog entry. But — and I find it hard to escape from this — much of the development is based on internal logic, and just looking at it from different points of view.

It still makes sense to me, of course, because I’ve been with it through its growth and development. But is there any point in putting such a complex structure up on my blog? I do not know. It’s reached the stage where perhaps it needs turning into a paper-length exposition, particularly including all the explanatory notes that you can see if you use CmapTools, and breaking it down into more digestible, manageable parts. I’ve put the CXL file and a PDF version up on my own concept maps page. I can only hope that some people will find this interesting enough to look carefully at some of the detail, and comment… (please!) If you’re really interested, get in touch to talk things over with me. But the thinking will in any case surface in other places. And I’ll link from here later if I do a version with comments that is easier to get at.

More competency

The CEN WS-LT Competency SIG discussions of a conceptual model for skill/competence/competency are still at the very interesting early stage where very many questions are open. What kind of model are we trying to reach, and how can we get there? Anything seems possible, including experiments with procedures and conventions to help towards consensus.

Tuesday, December 1st, Berlin — a rainy day in the Ambassador Hotel, talking with an esteemed bunch of people about modelling skill/competence/competency. I won’t go on about participants and agenda — these can be seen at http://sites.google.com/site/competencydriven/. It was all interesting stuff, conducted in a positive atmosphere of enquiry. I’ll write here about just the issues that struck me, which were quite enough…

How many kinds of model are there?

At the meeting, there seemed to be quite some uncertainty about what kind of model we might be trying to agree on. I don’t know about other people, but I discern two kinds of model:

  • a conceptual model attempting to represent how people understand entities and relationships in the world;
  • an information model that could be used for expressing and exchanging information of common interest.

A binding isn’t really a separate model, but an expression of an information model.

My position, which I know is shared by several others, is that to be effective, information models should be based on common conceptual models. The point here is that without an agreed conceptual model, it is all too easy to imagine that you are building an information model where the terms mean the same thing, and play the same role. This could lead to conflict when agreeing the information model, as different people’s ideas would be based on different conceptual models, which would be hard to reconcile; or, even worse in the long term, troublesome ambiguity could become embedded in the information model. Not all ambiguity is troublesome, if the things you are being ambiguous about really share the same information model, but no doubt you can imagine what I mean.

Claims and requirements for competence

A long-term aim of many people is to match what is learned in education with what is required for employment — Luk Vervenne was as usual championing the employer point of view. After reflection on what we have at the moment, and incorporating some of Luk’s ideas in the common information model I’ve been putting together, I’d say we have enough there to make a start, at least, in detailing what a competency claim might be, and how that might relate to a competency requirement.

In outline, a full claim for a single separate competence component could include:

  • the definition of that component (or just a title or brief description if no proper definition is available)
  • any assessment relevant to that component, with result
  • any qualification or other status relevant to that component (which may imply assessment result)
  • a narrative filling the gap between qualifications or assessment and what is claimed
  • any relevant testimonials
  • a record of relevant experience requiring, or likely to lead to, that competence component
  • links to / location of any other relevant “raw” (i.e. unassessed) evidence
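To make the shape of such a claim concrete, here is a minimal sketch in Python of the components listed above. All class and field names are my own illustrative choices, not part of any agreed information model:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical structure: one field per component of the claim outline above.
@dataclass
class CompetenceClaim:
    definition: str                      # definition, or just a title / brief description
    assessment_results: list[str] = field(default_factory=list)
    qualifications: list[str] = field(default_factory=list)   # may imply assessment result
    narrative: Optional[str] = None      # fills the gap between evidence and what is claimed
    testimonials: list[str] = field(default_factory=list)
    experience: list[str] = field(default_factory=list)       # record of relevant experience
    raw_evidence_links: list[str] = field(default_factory=list)  # unassessed evidence

claim = CompetenceClaim(
    definition="Production horticulture: pruning fruit trees",
    narrative="Five years' orchard experience beyond the certificate syllabus",
)
```

The point of the sketch is only that most components are optional and repeatable; a real information model would need agreed types for each, not bare strings.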

I’ll detail later a possible model of competency requirements, and detail how the two could fit together. And I have now put up the latest version of the big conceptual model as well. There is clearly also a consequent need to be clearer about the structure of assessments, and we’ll be working on that, probably both within CETIS and within the CEN WS-LT.

What about competencies in themselves?

Reflected in the meeting, there still seems to be plenty of disagreement about the detail that is possible in an information model of a competency. Lester Gilbert, for example, put forward a model in which he distinguished, for a fully specified educational objective:

  • situation;
  • constraints;
  • learned capability;
  • subject matter content;
  • standard of performance;
  • tools.

The question here, surely, is to what extent these facets of a definition are (a) common and shared, and (b) amenable to representation in a usefully machine-processable way.

Personally, I wouldn’t like to rule anything in or out before investigating more fully. At least this could be a systematic investigation, looking at current practice across a range of application areas, carefully comparing what is used in the different areas. I have little difficulty believing that for most if not all learning outcomes or competency definitions, you could write a piece of text to fit under each of Lester’s headings. What I am much more doubtful about is whether there is any scheme that would get us beyond human-readable text to the point where we could do any automatic matching on these values. Even if there are potential solutions for some, like medical subject headings for the subject matter content, we would need these labels to be pretty repeatable and consistent in order for them to be used automatically. And what would we do with things like “situation”? The very best I could imagine for situation would be a classification of the different situations that are encountered in the course of a particular occupation. In UK NOSs, these might be written into the documentation, either explicitly or implicitly. Similar considerations would apply to Lester’s “tools” facet. This might be tractable in the longer term, but would require at least the creation of many domain-specific ontologies, and the linking of any particular definition to one of these domain ontologies.

I can also envisage, as I have been advocating for some time, that some competency definitions would have ontology-like links to other related definitions. These could be ones of equivalence, or the SKOS terms “broadMatch” and “narrowMatch”, in cases where the authorities maintaining the definitions believed that in all contexts, the relationship was applicable.
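As a rough illustration, such links could be recorded as simple subject–predicate–object triples. The SKOS property names used here (broadMatch, narrowMatch, exactMatch) are real SKOS mapping properties, but the definition identifiers are invented:

```python
# Hypothetical definition identifiers; SKOS mapping property names are real.
links = [
    ("ex:pruning-fruit-trees", "skos:broadMatch", "ex:production-horticulture"),
    ("ex:production-horticulture", "skos:narrowMatch", "ex:pruning-fruit-trees"),
    ("ex:plant-propagation", "skos:exactMatch", "ex:propagating-plants"),
]

def broader(definition_uri):
    """Definitions that the given one maps to as broader, applicable in all contexts."""
    return [o for s, p, o in links
            if s == definition_uri and p == "skos:broadMatch"]

print(broader("ex:pruning-fruit-trees"))
# → ['ex:production-horticulture']
```

A triple store and real URIs would replace the list in practice; the sketch only shows how maintained cross-definition links could be queried mechanically.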

What about frameworks of skill, competence, etc.?

It surprised me a little that we didn’t actually get round to talking about this in Berlin. But on reflection, with so many other fundamental questions still on the table, perhaps it was only to be expected. Interestingly, so far, I have found more progress here in my participation with MedBiquitous than with the CEN WS-LT.

I’ll write more about this later, but just to trail the key ideas in my version of the MedBiquitous approach:

  • a framework has some metadata (DC is a good basis), a set of competency objects, and a map;
  • the map is a set of propositions about the individual competency objects, relating them to each other and to objects that are not part of the framework;
  • frameworks themselves can be linked to as constituent parts of a framework, just as individual competency objects;
  • it is specified whether to accept the relationships defined within the competency objects, and in particular any breakdown into parts.

The point here is that just about any competency definition could, in principle, be analysed into a set of lower-level skills or competencies. This would be a framework. Equally, most frameworks could be used as objectives in themselves, so playing the same role as an individual competency object within a competency framework. If a framework is included, and marked for including its constituent parts, then all those constituent parts would become part of the including framework, by inclusion rather than by direct naming. In this way, it would be easy to extend someone else’s framework rather than duplicating it all.
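The inclusion idea can be sketched as a small recursive function. The data layout and all names here are hypothetical, just to show how a framework marked for inclusion of its parts would contribute those parts to the including framework:

```python
# Hypothetical layout: each framework has its own competency objects and a
# list of (included framework, include-its-parts?) pairs.
def flatten(framework, frameworks):
    """Collect all constituent competency objects, recursively."""
    parts = list(framework.get("objects", []))
    for name, include_parts in framework.get("includes", []):
        if include_parts:
            # Marked for inclusion: its constituent parts join this framework.
            parts.extend(flatten(frameworks[name], frameworks))
        else:
            # Otherwise the framework is treated as a single objective.
            parts.append(name)
    return parts

frameworks = {
    "horticulture": {"objects": ["pruning", "propagation"], "includes": []},
    "land-management": {
        "objects": ["soil-care"],
        "includes": [("horticulture", True)],   # include with its parts
    },
}
print(flatten(frameworks["land-management"], frameworks))
# → ['soil-care', 'pruning', 'propagation']
```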

Need for innovations in process and convention

Perhaps the most interesting conclusion from my point of view was about how we could conduct the processes better. There is a temptation to see the process as a competition between models — this would assume that each model is fixed in advance, and that people can be objective about their own, as well as other people’s models. Probably neither of these assumptions is justified. Most people seem to accept the question as “how can a common conceptual model be made from these models?”, even though there may be little wisdom around on how to do this. There is also the half-way approach of “what common information elements can be discerned between these models?” that might come into play if the greater aim of unifying the conceptual models was relinquished.

From my point of view, this brings me back to two points that I have come to recognise only in recent months.

This meeting, for me, displayed some of the same pattern as many previous ones. I was interested in the models being put forward by Luk, Lester, and others, but it was all too easy not to fully understand them, not quite to reach the stage of recognising the insights from them that could be applied to the model I’m continuing to put together. I put this down to the fact that the meeting environment is not conducive to deep mutual understanding. One can ask a question here and there, but the questions of others may relate more to their own models than to the relationship of the model under discussion with one’s own. So one gets the feeling at the end of the meeting that one hasn’t fully grasped the direction one should take one’s own model in. Little growth and development results.

So I proposed in the meeting what I have not actually proposed in a meeting before, that we schedule as many one-to-one conceptual encounters as are needed to facilitate that mutual growth of models at least towards the mutual understanding that could allow a meaningful composite to be assembled, if not a fully constituted isomorphism. I don’t know if people will be bold enough to do this, but I’ll keep on suggesting it in different forums until someone does, because I want to know if it is really an effective strategy.

The other point that struck me again was about the highest-level ontology used. One of the criteria, to my mind, of a conceptual model being truly shared is that people answer questions about the concepts in recognisably similar ways, or largely the same on a multiple-choice basis. Some of those questions could easily relate to the essential nature of the concept in question. In the terms of my own top ontology: is the concept about the material world? Or is it a repeatable pattern, belonging to the world of perception and thought? Is it, rather, a concept related to communication — an expression of some kind? Whether this is exactly the most helpful set of distinctions is not the main point — it is that some set of distinctions like this will surely help people to clarify what kind of concepts they are discussing and representing in a conceptual model, and thus help people towards that mutual understanding.

A similar, but less clear, point seems to apply to relationships between concepts. Allowed free rein in writing a conceptual model, people seem to write all kinds of things in for the relationships between concepts. Some of them seem to tie things in knots — “is a model of”, for instance. So, as well as having clear types for concepts, maybe we could agree a limited vocabulary of permitted relationships. That would certainly help the process of mapping two concept maps to each other. There are also two related conventions I have used in my most recent conceptual model.

  1. Whole-part relationships are represented by having contained concepts, of varying types, represented as inside a containing concept. This is easy to do in CmapTools. Typically the containing concept represents a sub-system of some kind. These correspond to the UML links terminated by diamond shapes (open and filled).
  2. Relationships typically called “kind of” or “is a” correspond to the UML sub-class relationship, given with an open triangle terminator. As these should always be between concepts of the same essential type, these can be picked out easily by being a uniform colour for the minimized and detailed representations of the whole.
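Convention 2 lends itself to a simple mechanical check: given each concept’s essential type, any “kind of” link between concepts of different types can be flagged. The concepts and type assignments below are illustrative only:

```python
# Hypothetical concepts, typed by the top ontology's essential types
# (material world / repeatable pattern / expression).
concept_type = {
    "qualification (awarded)": "expression",
    "assessment result": "expression",
    "qualification (form)": "pattern",
    "learner": "material",
}

kind_of_links = [
    ("assessment result", "qualification (awarded)"),   # same type: acceptable
    ("qualification (form)", "learner"),                # type mismatch: flagged
]

def violations(links, types):
    """Return 'kind of' links whose two concepts differ in essential type."""
    return [(a, b) for a, b in links if types[a] != types[b]]

print(violations(kind_of_links, concept_type))
# → [('qualification (form)', 'learner')]
```

A check like this is only as good as the type assignments, of course; but that is exactly the discussion a shared top ontology is meant to provoke.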

So, all in all, a very stimulating meeting. Watch this space for further instalments, as trailed above.

Development of a conceptual model 4

This version of the conceptual model (of learning opportunity provision + assessment + award of credit or qualification) uses the CmapTools facility for grouping nodes; and it further extends the use of my own “top ontology” (introduced in my book).

There are now two diagrams: a contracted and an expanded version. When you use CmapTools, you can click on the << or >> symbols, and the attached box will expand to reveal the detail, or contract to hide it. This grouping was suggested by several people in discussion, particularly Christian Stracke. Let’s look at the two diagrams first, then go on to draw out the other points.

eurolmcm13-contracted1

You can’t fail to notice that this is remarkably simpler than the previous version. What is important is to note the terms chosen for the groupings. It is vital to the communicative effectiveness of the pair of diagrams that the term for the grouping represents the things contained by the grouping, and in the top case — “learning opportunity provision” — it was Cleo Sgouropoulou who helped find that term. Most of the links seem to work OK with these groupings, though some are inevitably less than fully clear. So, on to the full, expanded diagram…

eurolmcm13-expanded1

I was favourably impressed with the way in which CmapTools allows grouping to be done, and how the tools work.

Mainly the same things are there as in the previous version. The only change is that, instead of having one blob for qualification and one for credit value, both have been split into two. This followed from being uncomfortable with the previous position of “qualification”, where it appeared that the same thing was both wanted or led to, and awarded. It is, I suggest, much clearer to distinguish the repeatable pattern — that is, the form of the qualification, represented by its title and generic properties — from the particular qualification awarded to a particular learner on a particular date. I originally came to this clear distinction, between patterns and expressions, in my book, when trying to build a firmer basis for the typology of information represented in e-portfolio systems. But in any case, I am now working on a separate web page to try to explain it more clearly. When done, I’ll post that here on my blog.

A pattern, like a concept, can apply to many different things, at least in principle. Most of the documentation surrounding courses, assessment, and the definitions of qualifications and credit is essentially made up of repeatable patterns. But in contrast, an assessment result, like a qualification or credit awarded, is in effect an expression, relating one of those patterns to a particular individual learner at a particular time. They are quite different kinds of thing, and much confusion may be caused by failing to distinguish which of the two one is talking about, particularly when discussing things like qualifications.
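The distinction can be sketched in code: one class for the repeatable pattern (the form of the qualification) and another for the expression that ties that pattern to a particular learner on a particular date. All names and values here are illustrative, not from any specification:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class QualificationForm:
    """Repeatable pattern: title and generic properties of a qualification."""
    title: str
    level: int

@dataclass(frozen=True)
class QualificationAward:
    """Expression: relates one pattern to one learner at one time."""
    form: QualificationForm
    learner: str
    awarded_on: date

# One pattern...
cert = QualificationForm("Production Horticulture", level=2)
# ...many possible expressions of it.
award = QualificationAward(cert, learner="A. Gardener", awarded_on=date(2009, 7, 1))
another = QualificationAward(cert, learner="B. Grower", awarded_on=date(2010, 7, 1))

assert award.form is another.form   # same pattern, distinct awards
```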

These distinctions between types of thing at the most generic level are what I am trying to represent with the colour and shape scheme in these diagrams. You could call it my “top ontology” if you like, and I hope it is useful.

CmapTools is available free. It has been a great tool for me, as I don’t often get round to diagrams, but CmapTools makes it easy to draw the kinds of models I want to draw. If you have it, you might like to try finding and downloading the actual maps, which you can then play with. Of course, there is only one map, not two; but I have put it in both forms on the ICOPER Cmap server, and also directly in CXL form on my own site. If you do, you will see all the explanatory comments I have made on the nodes. Please feel free to send me back any elaborations you create.

Development of a conceptual model 3

I spent 3 days in Lyon this week, in meetings with European project colleagues and learning technology standardization people. This model had a good airing, and there was lots of discussion and feedback. So it has developed quite a lot over the three days from the previous version.
eurolmcm12

So, let’s start at the top left. The French contingent wanted to add some kind of definition of structure to the MLO (Metadata for Learning Opportunities) draft CWA (CEN Workshop Agreement) and it seemed like a good idea to put this in somewhere. I’ve added it as “combination rule set”. As yet we haven’t agreed its inclusion, let alone its structure, but if it is represented as a literal text field just detailing what combinations of learning opportunities are allowed by a particular provider, that seems harmless enough. A formal structure can await future discussion.

Still referring to MLO, the previous “assessment strategy” really only related to MLO and nothing else. As it was unclear from the diagram what it was, I’ve taken it out. There is usually some designed relationship between a course and a related assessment, but though perhaps ideally the relationship should be through intended learning outcomes (as shown), it may not be so — in fact it might involve those combination rules — so I’ve put in a dotted relationship “linked to”. The dotted relationships are meant to indicate some caution: in this case its nature is unclear; while the “results in” relationship is really through a chain of other ones. I’ve also made dotted the relationship between a learning opportunity specification and a qualification. Yes, perhaps the learning opportunity is intended to lead to the award of a qualification, but that is principally the intention of the learning opportunity provider, and may vary with other points of view.

Talking about the learning opportunity provider: discussion at the meetings, particularly with Mark Stubbs, suggested that the important relationships between a provider and a learning opportunity specification are those of validation and advertising. And the simple terms “runs” and “run by” seem to express reasonably well how a provider relates to an instance. I am suggesting that these terms might replace the confusingly ambiguous “offer” terminology in MLO.

Over on the right of the diagram, I’ve tidied up the arrows a bit. The Educational Credit Information Model CWA (now approved) has value, level and scheme on a par, so I thought it would be best to reflect that in the diagram with just one blob. Credit transfer and accumulation schemes may or may not be tied to wider qualifications frameworks with levels. I’ve left that open, but represented levels in frameworks separately from credit.

I’ve also added a few more common-sense relationships with the learner, who is and should be central to this whole diagram. Learners aspire to vague things like intended learning outcomes as well as specific results and qualifications. They get qualifications. And how do learners relate to learning opportunity specifications? One would hope that they would be useful for searching, for investigation, as part of the process of a learner deciding to enrol on a course.

I’ve added a key in the top right. It’s not quite adequate, I think, but I’m increasingly convinced that this kind of distinction is very helpful and important for discussing and agreeing conceptual models. I’m hoping to revisit the distinctions I made in my book, and to refine the key so that it is even clearer what kind of concept each one is.