Representing common structures

(8th in my logic of competence series)

In the last two posts, I’ve set out some logic for simple competence structures and for more complex cases. But we still need to consider how to link across different structures, because only then will the structures start to become really useful.

If you look at various related UK National Occupational Standards (NOSs), you will notice that typically, each published document, containing a collection of units, has some units specific to that collection and some shared in common with other collections. Thus, LANTRA’s Production Horticulture NOSs (October 2008) include 17 common units that are shared between different LANTRA NOSs, and Agricultural Crop Production NOSs (May 2007) include 18 of these units. Ten of them appear in both sets. Now if, for instance, you happen to have studied Production Horticulture and you wanted to move over to Agricultural Crop Production, it would be useful to be able to identify the common ground so that you didn’t have to waste your time studying things you know already. And, if you want to claim competence in both agriculture and horticulture, it would be useful to be able to use the same evidence for common requirements.

How can what is in common between two such competence structures be clearly identified? There are currently common codes (CU2, CU5, etc.) that identify the common units; and units imported from other Sector Skills Councils (as frequently happens) are identified by their unit code from the originating NOSs. However, there are no guarantees. And if you look hard, you sometimes find discrepancies. CU5, for example, “Develop personal performance and maintain working relationships”, is divided into two elements, “Maintain and develop personal performance” and “Establish and maintain working relationships with others”. In both sets, “others” are defined as

  1. colleagues
  2. supervisors and managers
  3. persons external to the team, department or organisation
  4. people for whom English is not their first language.

But when the unit CU5 appeared in Veterinary Nursing NOSs in 2006, non-native English speakers were not explicitly specified. Do we have to regard the units as slightly different? We can imagine what has happened: presumably someone recognised an omission, and put in what was missing. But what if that has been reflected in the training delivered? Would it mean that people trained in 2006 would not have been introduced to issues with non-native speakers? And does that mean that they really should be given some extra training? And later the plot thickens… LANTRA’s “Veterinary nursing and auxiliary services” NOSs from July 2010 have CU5, “Maintain and develop personal performance”, and CU5A, “Establish and maintain working relationships with others”. This seems to follow a pattern of development in which the NOS units are simplified and separated. The same list of different kinds of “others” is now just included in the overview paragraph at the beginning of CU5A.

I hope it’s worth going through this exercise in possible confusion to underline the need for links across structures. Ideally, an occupational standard should be able to include a unit from elsewhere by referring to it, not by copying it; and there would need to be clear versions with clearly marked changes. But if people insist on copying (as they currently often do), at least there could be very clear indications given about when something is intended to be exactly the same, and when it is pretty close even though not exactly the same.

Back in the simple competence structures post, I introduced the SKOS relationships “broader” and “narrower”. There are other SKOS relationships that seem perfectly suited for this job of relating across different competence structures. These are the SKOS Mapping Properties. It would seem natural to take skos:exactMatch to mean that this competence definition I have here is intended to be exactly the same as that one over there, and skos:closeMatch would serve well for “pretty much the same”, or “practically the same”. If these relationships were properly noted, there could be ICT support for the kinds of tasks mentioned above — e.g. working out what counted as evidence of what competence, and what you still needed to cover in a new course that you hadn’t covered in an old course, or gained from experience.
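As a rough sketch of how this might work (the URIs below are invented for illustration; NOS units have no agreed URIs at present), the mapping properties could be recorded as plain subject-predicate-object triples, and a tool could then treat evidence claimed against one definition as evidence for its exact matches:

```python
# Hypothetical competence definitions from two NOS suites, identified by URI.
# These URIs are invented; they stand in for whatever identifiers are agreed.
HORT = "http://example.org/nos/production-horticulture/"
AGRI = "http://example.org/nos/agricultural-crop-production/"

# A tiny triple store: (subject, predicate, object).
triples = [
    (HORT + "CU5", "skos:exactMatch", AGRI + "CU5"),  # intended to be identical
    (HORT + "CU2", "skos:closeMatch", AGRI + "CU2"),  # practically, not exactly, the same
]

def exact_matches(uri):
    """Return every definition declared exactly equivalent to `uri`.
    skos:exactMatch is symmetric, so look in both directions."""
    return ({o for s, p, o in triples if p == "skos:exactMatch" and s == uri}
            | {s for s, p, o in triples if p == "skos:exactMatch" and o == uri})

# Evidence recorded against horticulture's CU5 could then count towards
# agriculture's CU5 as well, but not towards the merely "close" CU2.
print(exact_matches(HORT + "CU5"))
```

This is only a sketch of the idea; a real implementation would use an RDF store, but the logic of following exactMatch links in both directions is the same.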

And if all parts of competence structures were given globally unique IDs, ideally in the form of URIs, then this same process could apply at any granularity. It would be easy to make it clear even to machines that this NOS unit was the same as that one, right down to the fine granularity of a particular knowledge or ability item being the same as one in a different unit. An electronic list of competence concepts would have alongside it an electronic list of relationships — a kind of “map” — that could show both the internal “skos:broader” and “skos:narrower” relations, and the external “skos:exactMatch” and “skos:closeMatch” equivalencies.
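To make the "map" idea concrete (all identifiers and labels below are hypothetical), one could keep the concept list and the relationship lists side by side, and then compute, at fine granularity, what a learner moving between frameworks still needs to cover:

```python
# Sketch of a concept list with its accompanying "map" of relationships.
# All identifiers and labels are invented for illustration.
concepts = {
    "hort:unit1":    "Establish plants",
    "hort:unit1/k1": "Know soil preparation",
    "agri:unitA":    "Establish crops",
    "agri:unitA/k1": "Know soil preparation",
}

narrower = {  # internal structure: skos:narrower
    "hort:unit1": ["hort:unit1/k1"],
    "agri:unitA": ["agri:unitA/k1"],
}

exact_match = [  # external links: skos:exactMatch, at fine granularity
    ("hort:unit1/k1", "agri:unitA/k1"),
]

def still_to_cover(target_unit, already_held):
    """Fine-grained items of `target_unit` not covered by `already_held`,
    counting exact matches (in either direction) as covered."""
    equivalent = {**{a: b for a, b in exact_match},
                  **{b: a for a, b in exact_match}}
    return [item for item in narrower.get(target_unit, [])
            if item not in already_held
            and equivalent.get(item) not in already_held]

# A horticulture student moving to agriculture has nothing left to cover here:
print(still_to_cover("agri:unitA", already_held={"hort:unit1/k1"}))
```

The same computation works at any granularity, because units and their finest-grained items are identified in exactly the same way.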

This gives us a reasonable basis for reuse of parts, at any level, of different structures, but we haven’t yet considered comparison of competence structures where there aren’t any direct equivalence mappings.

CPD-Eng – review of a JISC project in progress

I was asked to look into the CPD-Eng project by the JISC programme managers, specifically in conjunction with the JISC “Benefits Realisation” work, because this project in particular has a lot to do with portfolio technology and interoperability, and with skills and competences in professional development.

CPD-Eng is a JISC-funded project, to quote from the JISC project page, “to integrate systems that support personalised IPD/CPD, applicable to professional frameworks.” The lead institution is the University of Hull, and much of the documentation can be found through their own project page.

The project start date was April 2009. At that date, the tender documentation spelled out the aspirations for the project. The main image that emerges from the tender documentation is not of a new e-portfolio system as such, but of an approach to integrating evidence that may reside on different systems, all of which has relevance to an individual’s CPD. “CPD-Eng will provide the innovative, personalised infrastructure that will support the work-based learner through a new suite of flexible pathways…”

The original plan was to deploy Sun’s Identity Management software, perhaps aiming towards Shibboleth in the longer term. These were overtaken by various events. Consultations with various people related to JISC lowered the perceived value of Shibboleth, and in any case there needed to be a more open approach to accommodate the wider potential usage of the tools.

One of the shaping factors was the requirement to integrate the Hull institutional VLE (“eBridge”), based on the SAKAI framework. Right from the outset there was explicit mention also of integrating e-portfolio systems that have adopted Leap2A. Clearly, one of the main areas of future evidence of the success of a project that claims to be about integrating systems will be the actual extent of integration carried out. In the project plan, WP7 states that the most important integration is between eBridge and other partners’ VLEs. The nature of the integration, however, was not specified at the outset, but remained to be filled in through early work in the project itself. A theme that does continue throughout the tender is the ability of learners to control access to information that may be stored in different places.

The baseline report (version 0.9 of August 2009) spells out more graphically where the project started from. Perhaps the core point at the centre of the vision is the statement: “A portal system has been established alongside an Identity Management (IDM) system that allows self-service management of the learner’s identity. CPD-Eng will develop a robust and scalable approach to interoperability, access and identity management that is both easy to use and seamless, allowing the learner to control their personal e-portfolio-type technologies and share the content within them with whom they choose.” Compared to this, the rest of the baseline report is interesting background.

After the baseline report the project took stock of the situation. Some other portfolio products were “not very engineering”. TAS3 was not seen as very user-friendly. Academics still wanted the software to sit in SAKAI. Unfortunately, the lead programmer resigned to take up employment elsewhere, and the project was left without a developer. After looking into advertising for a replacement, and consulting with the JISC Programme Manager, it was decided to put the software development out to tender. MyKnowledgeMap (MKM: a York-based company with a track record of producing software in the area of skills and portfolios, for users in education and various professions) was judged as the most suitable partner, though they lacked experience of SAKAI. The project leads arranged for MKM employees to be trained by the University’s consultant, Unicon.

This was the situation for the following progress report of 2010-03. Three software modules were being developed within SAKAI:

  • Aggregator
  • Mapper / Tagger
  • Showcaser

The Aggregator module “provides a mechanism for users to gather their artefacts and items of evidence from across multiple sources.” While it is relatively easy simply to copy information from other sources into a system like this, the point here is explicitly “not” (emphasis original) to copy, where it is possible to establish access from the original location.

Tagging is conceived of similarly to elsewhere, in terms of adding free text tags to aid categorisation by individuals. Mapping, in contrast, is designed to allow users to connect artefacts to elements of established “skills, competency and assessment frameworks”. This function is vital within CPD practice, so that users can present evidence of meeting required standards. Such frameworks are stored externally. JISC is now funding some “Competence structures for e-portfolio tools interoperability” work finishing July 2011, and CPD-Eng can play a useful role in determining the standard form in which competence structures should be communicated. MKM does have existing techniques for this, but they will be critiqued and adapted as necessary before being adopted as a consensus.
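To illustrate the distinction (all names and element identifiers below are invented, not drawn from CPD-Eng or MKM's actual design), tagging attaches free text to an artefact, while mapping attaches identifiers of framework elements, so that evidence can later be collected per required standard:

```python
# Invented data: artefacts carrying both free-text tags and mappings to
# elements of an external framework. "ukspec:B1" etc. are hypothetical IDs.
artefacts = {
    "case-study.pdf": {"tags": ["year 1", "draft"],
                       "mapped_to": ["ukspec:B1"]},
    "site-log.txt":   {"tags": ["placement"],
                       "mapped_to": ["ukspec:B1", "ukspec:C2"]},
}

def evidence_for(element):
    """All artefacts mapped to one framework element, in a stable order."""
    return sorted(name for name, info in artefacts.items()
                  if element in info["mapped_to"])

print(evidence_for("ukspec:B1"))  # both artefacts evidence this element
```

The point of storing framework element identifiers rather than free text is precisely that a reviewer, or a tool, can then assemble the evidence for each required standard automatically.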

Showcasing is conceived of much as in many other portfolio tools. Official review is supported by routine copying of showcases to uneditable areas. This works for e.g. health professionals, because they want a carefully controlled system, though others like engineers need that less, and prefer to do without the extra burden of storage. It is planned to support review without necessarily copying showcases in a later release. Access to showcases was originally only via SAKAI, but it was later recognised that limiting access to SAKAI was not ideal, particularly as many professional bodies have their own e-portfolio systems.

Working with health professionals, archeologists and others (there is also interest from schools, and a pilot with BT), it became clear that a useful system should not be tightly bound to other institutional software or to SAKAI, so what was needed was an independent identity management system.

In the Benefits Realisation plan from the summer of 2010 came a clear restatement of the core aim of the work. “At its centre is a scalable, interoperable and robust access and identity management system that integrates and control access to personal e-portfolio technologies.” But what is the relationship between the CPD-Eng work and other identity management systems? Sun’s Identity Management software had been abandoned. The European funded TAS3 work (in which the University of Nottingham is a partner) was seen as too complex for professional end users. A question which remains outstanding is to clarify the relationship of the system devised with the trials funded by JISC under the PIOP 3 programme, involving PebblePad, the University of Nottingham and Newcastle University. It would be good to see a clear exposition of these and any other relationships. All that can be said at this stage is that in the perception of the CPD-Eng project, none of the other identity management systems really worked for them.

The next piece of documentation is the progress report from September 2010. This is where the questions really start to become clearer. Included in this massive report is the complete text of the final report from “Personalised systems to support CPD within Health Care”, a mini project extending CPD-Eng concepts to health care professions. In this very interesting inner report, there is a large body of evidence about CPD practice in the health professions. This leads to a second major question. What about the whole process side of portfolio practice? CPD-Eng has very clearly focused on the rather technical side of facilitating the access management for artefacts. This is certainly useful at the stage when all the evidence has been generated, when it remains to gather together appropriate items from different places.

Much portfolio practice centrally involves reflection on evidence as well as its collection and showcasing. The phrase “collect, select, reflect…” is often used in portfolio circles. (Helen Barrett adds direction/projection and connection.) Reflection is often vital in portfolio practice, because the mere presentation of a selection of artefacts is no guarantee of a clear and coherent understanding of how and why they fit together. Because unprompted reflection is hard, institutions often support the process. It is useful to be able to build in some aspect of process into the same tools which hold the information and resources to be reflected on, and it is useful to hold the reflections themselves in a place that they can be easily connected with the things on which they are reflecting.

Increasingly, it is recognised that the peer group is another vital aspect of this reflection process. In a situation where staff time is short, or the staff seem uninvolved, possibly the best stimulus for reflection and re-evaluation of ideas is the peer group. Portfolio systems designers themselves have recognised this, by integrating social tools into the portfolio software.

It is clear that MyShowcase is not primarily designed to support reflection. (More information about MyShowcase can be found at www.my-showcase.org, which has a demo and links, and through www.mkmlabs.com.) However, one of the consequences of implementing a stand-alone version is that users found an immediate need for some of the functionality that is normally provided within full-feature e-portfolio systems. In particular, users want to collect evidence together and send a link to a tutor for feedback. The link is e-mailed to the reviewer, who can then access the system to leave feedback comments. Indeed, some users feel that fully-featured e-portfolio systems are just too complex for their needs, and value a simple tool because it does less, and does not explicitly support reflection by users. So this is the only extra feature implemented in MyShowcase that comes from the fuller set normal in portfolio software.

If one has a hybrid system including MyShowcase and some other portfolio tool, the portfolio functionality will therefore mainly be fulfilled in the portfolio tool, not MyShowcase. But what of the feedback that has been designed into MyShowcase for standalone use? To be useful, it would have to be seen within the portfolio system. And generally, any information used in MyShowcase needs to be presented to the associated e-portfolio system for use within whatever (e.g. reflective) processes the portfolio software is used for. We don’t want the flow of information to be one-way only, nor limited to unidirectional two-stage processes. What is needed is an effective two-way integration, so that the chosen portfolio tool can access all the information gathered through MyShowcase, and the user / learner can gather further feedback and reflect, and present the outcomes of that further reflection back into the MyShowcase pool, for onward presentation.

Recent discussion has confirmed that MyShowcase is not primarily conceived of as a replacement for a full e-portfolio system, though it does act as what we might call an “evidence resource management” tool. Perhaps we can now discern an ideal answer to where this might lead, and where things ought to be heading, and the questions that still need to be answered.

If a service such as MyShowcase is to work effectively alongside e-portfolio tools, there needs to be transfer of information all ways round. In addition to this, and because it can be implemented stand-alone, there needs to be Leap2A export and import directly between MyShowcase and a user’s personal storage.

Looking at things from a user’s perspective, a portfolio tool user should be able to make use of MyShowcase functionality transparently. It should be able to be used as invisible “middleware”, allowing the front end e-portfolio system to focus on an appropriate user interface, and portfolio and PDP processes (including reflection and feedback), with MyShowcase providing the functionality that allows the user to link to evidential resources held in a variety of places, including VLEs, HR systems, other portfolio tools, social networking services, blogs, etc., possibly including sites with sensitive information that will only be displayed to authorised users.

The MyShowcase architecture in principle could provide resource management for “thin portfolio” services, where the storage is not in the portfolio system. Is it, or could it be, adapted for this?

As part of the PIOP 3 projects, Leap2A connection between different systems was investigated by PebblePad, Nottingham and Newcastle, and this work needs to be carefully compared with the MyShowcase architecture. What exactly are the similarities and the differences? Are they alternatives? Can they be integrated, combining any strong points from both?

In order to facilitate this two-way interaction, there really needs to be substantial compatibility between the information models in all the connected systems, so that there can be meaningful communication between the systems. This does not necessarily mean a full implementation of Leap2A in each participating system, but it does mean at least a reasonable mapping between the information managed and corresponding Leap2A structures, because Leap2A is the only well-implemented and tested model we have at this time that covers all the relevant information. If there are requirements that are not covered by Leap2A, this is a good time to raise them so that they can be incorporated into discussion with other parties interested in Leap2A, and our common future thinking.

I hope I’ve made the issues clearer here. Here are collected recommendations for taking this work forward, whether within the current CPD-Eng and Benefits Realisation work or beyond it.

  • What the portfolio community really needs is multi-way integration of portfolio information, artefacts and permissions, based around Leap2A concepts.
  • Leap2A export and import by users should be provided for standalone implementations of MyShowcase, just as with other portfolio systems that have adopted Leap2A.
  • Showcases in MyShowcase need to be exportable as Leap2A (as with PebblePad WebFolios and Mahara Views).
  • For transparent integration between different sources of information in a portfolio architecture, identity management approaches need consolidating around good workable models such as OAuth.
  • The PIOP 3 work by PebblePad, Nottingham and Newcastle, as well as TAS3, needs to be carefully considered, to extract any lessons relevant to CPD-Eng, even if their appearance is only in the final report.
  • The opportunity provided by any planned project meeting should be fully exploited in these directions.
  • Another meeting should be planned around the wider questions of e-portfolio interoperability architecture, covering not only the technical aspects, but the requirements of practice as well, such as reflection, feedback and comments on non-public items stored elsewhere.

Advanced structures for competences

(7th in my logic of competence series)

In my previous post, I explained how SKOS relationships can be used to represent the basics of competence structures. But in one of the examples cited, the QAA Subject Benchmark Statement for honours level agriculture related studies, the aspect of level of attainment is present, and this is not easily covered by the SKOS broader and narrower relations just by themselves. Let me explain in some more detail.

In this particular Subject Benchmark, the skills, knowledge and understanding are described at three levels: “threshold”, “typical”, and “excellent”. As a first example, in one of the generic skills, (communication skills), under “threshold” one item reads “make contributions to group discussions”; under “typical” the corresponding item reads “contribute coherently to group discussions”; and under “excellent” it reads “contribute constructively to group discussions”. Or take an example from the “subject specific knowledge and understanding in agriculture and horticulture” — threshold: “demonstrate some understanding of the scientific factors affecting production”; typical: “demonstrate understanding of the scientific factors limiting production”; excellent: “demonstrate understanding of the scientific factors limiting production and their interactions”. Leaving aside difficulties in clarifying and assessing exactly what these mean, it is clear that there is a level structure, as illustrated in my earlier post. In both cases, the three descriptions are neither identical nor unrelated — higher levels encompass lower ones. (But note also that benchmark statements in different subjects have different structures.)

Can one represent these attainment levels in a tree structure? One option might be to have three benchmark statements presented separately, one each for threshold, typical and excellent. However this would miss the obvious connections between the elements within each level. A more helpful approach might be to describe the common headings with the finest reasonable granularity, and then distinguish the descriptors for different attainment levels at this granularity. This would need a slight restructuring of this statement, because finer-grained common headings are possible than the ones given. For instance, “subject specific knowledge and understanding in agriculture and horticulture” could easily be subdivided into something like the following (using words that appear at each level):

  • “science and management of sustainable production systems”
  • “social, economic, legal, scientific and technological principles underlying the business management of farm or horticultural enterprises”
  • “range of concepts, theories and methods drawn from the constituent disciplines”

At a still finer level, the descriptors mostly share many words, with just the detail differing to reflect the different levels, as exemplified above. In the example above, the common wording is “understanding of the scientific factors affecting production”. Headings could be created from common wording. Then there is still the issue of relating the three described levels into the structure as a whole. Threshold, typical, and excellent are not three components of one higher level entity, they are different levels of the same entity. These levels are one kind of variant.
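One way to picture levels as variants (the structure and abridged wording below are illustrative, not a proposal for the benchmark's actual representation) is to key the level descriptors off the common fine-grained heading, so that threshold, typical and excellent become three variants of one entity rather than three sibling parts:

```python
# Illustrative sketch: attainment levels as variants of one fine-grained
# heading, not as three separate components. Wording taken from the
# benchmark examples quoted above.
benchmark = {
    "understanding of the scientific factors affecting production": {
        "threshold": "demonstrate some understanding of the scientific factors affecting production",
        "typical":   "demonstrate understanding of the scientific factors limiting production",
        "excellent": "demonstrate understanding of the scientific factors limiting production and their interactions",
    },
}

def descriptor(heading, level):
    """Select the variant descriptor for one chosen attainment level."""
    return benchmark[heading][level]

print(descriptor(
    "understanding of the scientific factors affecting production", "typical"))
```

Selecting a level then becomes a lookup against the shared heading, which keeps the obvious connections between the levels explicit in the structure itself.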

Variants more generally are not always easy to see in common definitions, perhaps because part of the point of having standards is to reduce variability. For a clearer example from a broader perspective, we may consider areas not documented by occupational or educational standards. Consider skill and competence at management. The literature suggests several distinct styles of management: autocratic, democratic, laissez-faire, paternalistic, etc. It is probably obvious that to be an effective manager, one does not have to be able to manage according to all these styles. Perhaps just one may be good enough for any particular management position, though different ones may be needed in different contexts. Each style, once chosen, implies a different range of component skills. If one wished to create a tree structure to represent management competences, what would the relationship be between a reasonable topmost node, perhaps called just “management”, and the four or more styles? It is rather similar to the issue with the levels we saw above, but at a different granularity. As another alternative example, look at the broader issue of developing competence in agriculture or horticulture. Probably no one is an expert in growing everything. Anyone wanting to be a farmer or grower will at some point need to decide what to specialise in, if not in academic study, then at least in terms of practical experience and expertise. There are clear choices, and the range of skills and competence needed for different specialisms will of course differ. Being a competent farmer does not mean being competent at growing all crops in the world. You have to choose.

The basic structures mentioned in my previous post start out with the idea of “broader” and “narrower” concepts. Is it reasonable to say that management competence in general is a broader concept than competence as a democratic manager? Or can one say that graduate level competence in agriculture is a broader concept than being assessed as threshold, typical or excellent? Does it really help to say simply that horticulture is a broader concept than growing grapes?

What seems to emerge on thinking this through is that there are at least two kinds of “broader” (and equally two kinds of “narrower”) with different logic. One type is like whole-part relationships. We saw this in the National Occupational Standards units, which were composed of things that a person needs to be able to do, alongside things that the person needs to know. In principle all parts are needed to constitute the whole. If we imagine say a personal development or learning tracking system that helps you with your learning, and you were working towards the unit of competence, then the system could keep track of which ones you say you have done, and perhaps remind you to complete the remaining ones.

On the other hand, the other type of relationship (illustrated above) is “style” or “variant” rather than “part”. If we imagine a system to help with professional development, and you wanted to develop your management skill, it is at least plausible that you could be asked at the outset which style of management you would like to improve your skill in. Having chosen one (or more) the rest would be put aside. You would work towards the constituent knowledge and skills for the chosen ones, and the system would not bother you with the knowledge and skills needed for the styles you had chosen not to learn more about. Similarly, a general horticulture skill aid would have to start by getting you to select the kind of crops you wanted to grow. And for the other example, with the attainment standards of the Subject Benchmark, we can imagine selecting a topic and then being asked what level you believe you have attained on this topic, so again there is a selection process instead of simply the combination of parts.

One could indeed imagine all of these features together in a tool that helped with personal development. The system could ask you what level you believe you have attained already, and what level you are working towards, for fine grained knowledge and skills, and then remind you to work at the identified gaps. At the same time, which fine grained areas you work at will depend on your more coarse-grained choices, like which styles of the competence you want to acquire, and which options you will specialise in.

It may help to compare these two kinds of relationship with ones that are very common elsewhere. UML distinguishes various relationships within class diagrams by graphical symbols, and two of the most common are called “composition” and “generalization”. Composition is very close to the kind of basic relationship in competence where component skills and knowledge are required to make up a wider competence, or that various competences are required to qualify as a certain grade of professional. On the other hand, the broad concept of management competence could be seen as a generalisation of the more specific competences in various styles of management. A word of caution, however: UML is designed specifically for use in systems analysis and design, or software engineering, so it should not be surprising if the match with representing competence is not exact.

Even though the two kinds of relationship I have been talking about are well known in many fields, SKOS does not make an explicit distinction between them. Logic seems to lead to the idea (which I have heard SKOS experts suggesting) that it is up to others to define more specific relationships than (specialisations of) SKOS’s “broader” and “narrower” to represent these two kinds of relationship. We don’t want to deprive SKOS of the right to be called “Simple”.

However we represent these two kinds of relationship, if we are going to represent them in a way which is useful for tools to help people manage their competence, their learning towards competence, and their self-assessment of competence (perhaps leading to external assessment), then it does seem entirely appropriate to represent them differently. Very simply, there are times when you need all of a set of components, and times when you must choose among a group of options: “and” and “or”; and both kinds of relationship are of great practical use.
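A minimal sketch of the "and"/"or" distinction (skill names and structure entirely invented): a node either requires all of its children, like the parts of a NOS unit, or requires a choice among them, like the management styles. A tool can then check whether a set of fine-grained achievements satisfies the structure:

```python
# "and" nodes need every child; "or" nodes need at least one child satisfied.
AND, OR = "all-of", "one-of"

# Invented example: management as a choice of styles, each style a
# conjunction of component skills.
management = (OR, "management", [
    (AND, "democratic management", ["chair meetings", "build consensus"]),
    (AND, "autocratic management", ["issue directives", "monitor compliance"]),
])

def satisfied(node, achieved):
    """Has `achieved` (a set of fine-grained skills) met this structure?"""
    if isinstance(node, str):              # leaf: a single skill
        return node in achieved
    kind, _name, children = node
    results = [satisfied(child, achieved) for child in children]
    return all(results) if kind == AND else any(results)

print(satisfied(management, {"chair meetings", "build consensus"}))
```

Fully achieving one style satisfies the "or" at the top; a single component of one style does not. Mixing compulsory and optional parts, as the later addition notes, simply means allowing both node kinds in one tree.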

Addition, 2011-06-22 On the other hand, people seem to easily mix compulsory and optional parts in the same structure. This is extremely widespread in the definition of qualifications, which are still a very important proxy for or indicator of abilities and competence. So, rather than needing necessarily to separate out the two kinds of structural relationship, we can simply be liberal about accepting whatever combinations people want to represent. If a certain ability has both necessary and optional parts, it is still very easy to understand what that means in practice, and to follow through the implications.

2011-07-04 And I give detailed argument to the reasons for optionality in post 15 in this series.

That may be a good place to stop for defining generic structure for single framework structures of skills or competence. But what I have not covered so far is relationships between different competence structures. One thing this is needed for is reuse of common elements between definitions…

Basic structures of competences

(6th in my logic of competence series)

In the earlier post on structure, I was looking for the structure of a single “definition” of “what is required”. In following that line of enquiry, I drew attention to one of the UK National Occupational Standards (NOSs), in horticulture as it happened. Other UK NOSs share a similar structure, and each one of these could be seen as setting out a kind of relationship structure between competences in that occupational area. In each case we see an overall area (in the case cited, “production horticulture”), which is broken down into units, where each unit seems to correspond roughly to an occupational role — one of a set of roles that could be distributed between employees. Then, each unit is broken down into what the person with that role has to be able to do, and what they need to know to provide a proper basis for that ability.

This is clearly a kind of tree structure, but it is not immediately obvious what kind of tree. Detailed consideration of a few examples is instructive. A first point to note is that NOS units may occur within several different occupational areas. This is particularly true of generic competences such as health and safety, but also applies to some specific units of skill and competence that just happen to play a part in several occupational areas, or several careers if you like. So, a particular unit does not necessarily have a single place on “the tree”. A second point emerges from consideration of different trees. UK NOSs have a common structure of, roughly: responsible body (usually a Sector Skills Council); occupational area; unit; skill or knowledge. But this is not always the case with structures that are not NOSs. For example, the “Tuning” work on “educational structures in Europe” includes “generic competences” that are given just as headings, from “capacity for analysis and synthesis” to “will to succeed”, and there is no attempt to break these down into smaller components.

Tuning’s specific competences have the same depth of tree structure as their generic ones, still unlike NOSs. For instance, the “business-specific competences” have items such as “identify and operate adequate software”, which looks a bit like some of the things that NOSs specify that people have to do, but also items such as “understand the principles of law and link them with business / management knowledge”, which seems to correspond more with NOS knowledge items. Some Tuning items straddle both ability and knowledge. In all Tuning cases, the tree structure is shallower than for NOSs. You may find many other such tree-structures of competences, but I doubt you will find any reliable correspondence between the kinds of thing that appear at different points on different trees. This is a natural consequence of the logical premise of this whole series: that it is the claim and the requirement that are the logical starting point. Yes, we may well see correspondence at that level of job requirement, and much common practice; but any commonality here will not extend to other levels, because people analyse claims and requirements in their own different ways. It’s not just that some trees leave out particular kinds of branch, but rather that, to go with the natural analogy, branches come in all thicknesses, with no clear dividing line between say a branch and a twig.

Even for the same subject area, there are quite different structures. As well as NOSs, the UK has what are called “subject benchmarks”, which are for academic courses rather than purely vocational ones. The QAA’s Subject benchmark statement for “Agriculture, horticulture, forestry, food and consumer sciences” has this structure:

  • 8 very general “abilities and skills”, such as “understand the provisional nature of information and allow for competing and alternative explanations within their subject”
  • other generic skills, divided into:
    • intellectual skills
    • practical skills
    • numeracy skills
    • communication skills
    • information and communication technology (ICT) skills
    • interpersonal/teamwork skills
    • self-management and professional development skills
  • subject-specific knowledge and understanding, expressed as what a graduate “will be able to”, in three areas:
    • “agriculture and horticulture”
    • “the agricultural sciences”
    • “food science and technology”

Both the subject-specific and the generic skills have descriptions for what is expected at three levels: “threshold”, “typical”, and “excellent”. While this is an interesting and reasonable structure, its details do differ from the NOSs in the same area.

We have also to reckon with the fact that just about any of a tree’s smallest branches can in principle be extended to even more detailed and smaller ones by adding thinner twigs. It might be tempting to try this with the Tuning competences, as talk about the “principles of law”, and how they link with other “knowledge”, raises the question of which principles we are talking about and indeed how they are linked. However, in practice this is unlikely, because the Tuning work is intended as a synthesis and reference point for diverse academic objectives, and typically every academic institution will structure their own version of these competences in their own different ways. Another way in which two similar trees may differ is the number of intermediate layers, together with the branching factor. One tree may have twenty “thinner” branches coming off a “thicker” one; another tree may cover the same twenty by first having four divisions, each with five sub-divisions. There is no right or wrong here, just variants.

A simple way of representing many tree structures is to document the relationship between elements that are immediately larger and smaller, or broader and narrower. And recently, there seems to be a significant consensus building up that relationships from the SKOS Simple Knowledge Organization System are a good start, and may be the best known and most widely adopted relationships that fit. SKOS has the relationships “broader” and “narrower”: the broader competence, skill, or knowledge is the one that covers a set of narrower ones. The only thing to be careful about is that the SKOS terms come from the librarian’s BT and NT (broader term and narrower term) — that is, if we write “A broader B” it does not mean “A is broader than B”, but the opposite: A is associated with a broader term, and that broader term is B. Thus B is a broader concept than A. Then, to use SKOS in the way it is designed to be used, we need identifiers for all the terms that might occur as “A” or “B” here. Each identifier would most reasonably be a URI, and needs to be clearly associated with its description.
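As a small sketch of how this works in practice, with invented example URIs (not from any published vocabulary): each “A broader B” statement links a narrower concept to the broader one that covers it, and following these links upwards recovers the tree.

```python
# A minimal sketch of SKOS-style "broader" links between competence URIs.
# All URIs are invented for illustration.
# "A skos:broader B" means B is the broader concept that covers A.
broader = {
    "http://example.org/comp/clear-sites": "http://example.org/comp/prepare-sites",
    "http://example.org/comp/prepare-sites": "http://example.org/comp/production-horticulture",
}


def ancestors(uri: str) -> list:
    """Follow skos:broader links up the tree from a narrower concept."""
    chain = []
    while uri in broader:
        uri = broader[uri]
        chain.append(uri)
    return chain


print(ancestors("http://example.org/comp/clear-sites"))
# ['http://example.org/comp/prepare-sites', 'http://example.org/comp/production-horticulture']
```

Note that nothing here forces every concept into a single tree: a unit shared between two occupational areas can simply carry two “broader” links.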

This general-purpose structure of URIs and SKOS relations seems sufficient to represent most aspects of the competence structures I have mentioned or referred to, beyond the concepts and definitions themselves. We will next look at more advanced considerations.

Other competence attributes

(5th in my logic of competence series)

Beyond levels, there are still many aspects to the abstractions that can be seen in competence definitions and structures in common use. What else can we identify? We could think of these as potential attributes of competence, though the term “attributes” is far from ideal.

Just as levels don’t appear in competence definitions themselves, an attribute abstracted from a competence definition usually does not appear explicitly either. But unlike levels, which are set out plainly in frameworks, we have to stay alert to spot these other abstracted features.

I started this series by saying that the logical starting points for talking about competence are the claim and the requirement. And indeed, we see many implicit or explicit claims to competence with plenty of detail — for example in the form of covering letters accompanying CVs in support of applications for opportunities. Following this through, it is relevant to consider what else could be said in support of a claim to competence — beyond, say, official recognition that a particular standard of competence has been reached.

Imagine that you had done a course on “production horticulture” (the topic mentioned in previous entries) and received a certificate, perhaps even with an attached Europass Certificate Supplement describing the “skills and competences” that you are expected to have acquired by the end of this course. Alternatively, your certificate may not have come as a direct result of a course, but instead from “APEL” — the accreditation of prior experiential learning. That would mean that you had plenty of experience as a horticulturalist, and it had been assessed that you had the skills and competence covered by a particular certificate in production horticulture. Now, if you were applying for a job, as well as citing your certificates, maybe attaching the information in any supplements, and stating the level of competence you believe you have achieved, what else would you be likely to want to say to a prospective employer, e.g. in a covering letter, or at an interview?

The most immediately obvious extra feature of one’s own experience might be, for horticulture, what kind of crops you have experience in growing. Stating this is a natural consequence of the fact that the LANTRA standards do not refer to the kind of crops. Next, the NOS documentation explicitly mentions indoor (perhaps greenhouse) and outdoor growing, though these terms are not used in the actual definitions. Which do you have experience of? Or, broadening that out, what kind of farms have you worked on? Soon afterwards, the documentation goes on to talk about equipment, without mentioning the types of equipment explicitly. Can you drive a tractor? Beyond this, I’m ignorant about what kinds of specialist equipment are used for different kinds of cultivation, but more relevant questions could be asked, as it might be important to know whether someone is experienced in using whatever equipment is used in the job being offered. Most workers these days will not be experienced in using hand ploughs or ox-drawn ploughs… And after equipment, the list of attributes that are abstracted out — left out from documented competence definitions — continues.

Some differences in skills and competence are less significant, and you may not need to mention them explicitly, because it is understood that any farmhand could pick up the ability quickly. It is the skills that take longer to learn that will be more significant for recruitment purposes, and more likely to be mentioned in a covering letter or checked at interview.

One of the key facts to recognise here is that the boundary is essentially arbitrary between, on the one hand, what is specified in documentation like the LANTRA NOSs, and on the other hand, what is left for individuals to fill in. Where the boundary is set depends entirely on the authority. While LANTRA standards do not specify particular crops or equipment, we could imagine a professional association of, say, strawberry growers, that published a set of more detailed standards about what skills and competences are needed to grow just strawberries. Quite possibly that would mention the specific equipment that is used in strawberry growing. (As it happens, there is a Florida Strawberry Growers Association, but it doesn’t seem to set out competence standards.)

Different occupational areas will reveal a similar pattern. It is unlikely, for instance, that ICT skills standards (e.g. SFIA, e-CF) will specify particular programming languages, but that is one attribute that programmers regularly mention in their CVs or covering letters when seeking employment. Or, take the case of health professionals. There are several generic types of situation with different demands. What “is required” of a competent health professional may well differ between first-aid situations with no equipment; the scene of an accident; hospital emergency units; and outpatient clinics. Some skills or abilities may be present in different forms in these different situations, and we can perhaps imagine someone mentioning in a covering letter the kinds of situation in which they had experience, if these were particularly relevant to a post applied for.

It should be clear at this point that extra detail claimed will naturally fill in for what is left out of the skills or competence standard documentation. But because what one set of documentation leaves out, another sometimes leaves in, it would make a great deal of sense for industry sectors to set out a full appropriate terminology for their domain — a “domain ontology” if you like. (Though don’t be using that “O” word in the wrong company…) Those terms may then be used either within competence definitions, or by individuals to supplement the competence definitions within their own claims. Typically we could expect common industry terms to include a set of roles, and a range of tools, equipment or materials, but of course, these sets will differ between occupations. They may also differ between different countries, cultures and jurisdictions. As well as roles and equipment, any occupational area could easily have its own set of terms that have no corresponding set in other areas. We saw this above with indoor and outdoor growing. For plumbing, there are, for example, compression fittings and soldered fittings. For musicians, there are different genres of music, each with its own style. For builders, building methods and parts differ between countries, so these too could be documented somewhere. And so on.

There is a very wide range of particular attributes that could be dealt with in this way, but it is probably worth mentioning a few generic concepts that may be of special interest. First, let us consider context. For standard descriptions of competence, it is the contexts that are met with repeatedly that are of interest, because it is in those contexts that experience may be gained and presented in a way that matches a job requirement. To call something a context, all that is needed is to be able to say that a skill was learned, or practised, in the context of — whatever the context is. A context could be taken as a set of conditions that in some way frame, or surround, or define, a kind of situation of practice. If we have a good domain ontology, we would expect to find the common contexts of the domain formulated and documented.

Second, what about conditions? We can refer back to more informal usage, where someone might say, for instance, I can plough that field, or write that program, as long as I have this particular tool (agricultural or software). It makes a lot of sense to say that this is a condition of the competence. Conditions can really be almost anything one can think of that can affect performance of a task. As suggested in the discussion of context, a set of stable and recognisable conditions could be taken to constitute a context. But the term “conditions” generally seems to be wider. It means, literally, anything that one can “say” that affects the competence. As such, we are probably more likely to meet conditions in the clarification of an individual claim than in standard competence documentation. That still means that there is value in assembling terminology for any conditions that are understandable to many people in a domain. It may be that a job requirement specifies conditions that are not in the standard competence definitions, and if those conditions are in a domain ontology, they can potentially be matched automatically to the claims of individuals referring to the same conditions.

Assessment methods should also specify conditions under which the assessment is to take place. The relevance of an assessment may depend at least partly on how closely the conditions for the assessment reflect the conditions under which relevant tasks are performed. And talking about assessment, it is perhaps worth pointing out that, though assessment criteria are logically separate from the definitions of the skills and competence that are being assessed, there is still a fluid boundary between what is defined by the competence documentation, what is claimed by an individual, and what appears as an assessment condition or criterion. The conditions of an assessment may add detail to a competence in such a way that the individual no longer needs to detail something in a claim. An assessment criterion may fairly obviously point to a level, but, given that a level is also sometimes wrapped in with a competence definition, the criterion may take over something of the competence definition itself. It would be expected that assessment criteria also use the same domain terminology as can be used both for competence definitions and within claims.

If the picture that emerges is rather confused, that seems unfortunately realistic. The fluid boundaries that I have discussed here are perhaps a natural result of the desire to specify and detail skill and competence in whatever way is most convenient, but that does not add any clarity to distinctions between context, conditions, criteria, levels, and other possible features or attributes of competence. On the other hand, this lack of clarity paradoxically makes the information easier to represent: if we have no clear distinction between these different concepts, then we can use a minimal number of ways of representing them.

So, how should competence attributes, including context, conditions and criteria, be represented?

1. To do this most usefully, a domain ontology / classification / glossary / dictionary needs to exist. It doesn’t matter what it is called, but it does matter that each term is defined, related where possible to the other terms, and given a URI. This doesn’t need to be a monolithic ontology. It could be just a set of relevant defined terms in vocabularies. And there is every reason to reuse common terms, vocabularies and classification schemes across different domains.
2. There is one major logical distinction to be made. Some terms are strictly ordered on a scale: these are levels or like levels. Other terms are not on a scale, and are not ordered. These are all the rest, covering what has been discussed above as context, conditions, criteria.
3. Competence definitions, assessment specifications, job requirements and individual claims can all use this set of domain related terms. The more thoroughly this is done, the more possibilities there will be to do automatic matching, or at least for the ICT systems to be as helpful as possible when people are searching for jobs, when employers are searching for people, or anything related.
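The three points above can be sketched in a few lines; the term URIs, field names, and numbers here are all invented for illustration, not taken from any real vocabulary. Levels, being ordered, are compared on a scale; other terms (contexts, conditions) are unordered and matched by set inclusion.

```python
# Sketch: matching a requirement against a claim when both use shared
# domain-term URIs. All terms, URIs and numbers are illustrative.

claim = {
    "competence": "ex:grow-crops",
    "level": 3,                                    # ordered on a scale
    "conditions": {"ex:greenhouse", "ex:tractor"}, # unordered terms
}

requirement = {
    "competence": "ex:grow-crops",
    "min_level": 2,
    "conditions": {"ex:greenhouse"},
}


def matches(claim: dict, req: dict) -> bool:
    """A claim meets a requirement if the competence terms agree, the
    claimed level reaches the minimum, and every required condition term
    appears among the claimed ones."""
    return (
        claim["competence"] == req["competence"]
        and claim["level"] >= req["min_level"]        # scale: compare
        and req["conditions"] <= claim["conditions"]  # set: inclusion
    )


print(matches(claim, requirement))  # True
```

The more both sides draw their terms from the same shared vocabulary, the more of this matching can be done automatically.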

Having sorted out this much, we are free to consider the basic structures into which competence concepts and definitions seem to fit.

Levels of competence

(4th in my logic of competence series)

Specifications, to gain acceptance, have to reflect common usage, at least to a reasonable degree. The reason is not hard to see. If a specification fails to map common usage in an understandable way, people using it will be confused, and could try to represent common usage in unpredictable ways, defeating interoperability. The abstractions that are most important to formalise clearly are thus those in common usage.

It does seem to be very common practice that competence in many fields comes to be described as having levels. The logic of competence levels is very simple: a higher level of competence subsumes — that is, includes — lower levels of the same competence. In any field where competence has levels, in principle this allows graded claims, where there may be a career progression from lower to higher level, along with increasing knowledge, practice, and experience. Individuals can claim competence at a level appropriate to them; if a search system represents levels of competence effectively, employers or others seeking competent people will not miss people whose level of competence is greater than the one they give as the minimum.

For example, the Skills Framework for the Information Age (SFIA) is a UK-originated framework for the IT sector, founded in 2003 by a partnership including the British Computer Society. This gives 7 “levels of responsibility”, and different roles in the industry are represented at one or more levels. The level labels are: 1 Follow; 2 Assist; 3 Apply; 4 Enable; 5 Ensure, advise; 6 Initiate, influence; 7 Set strategy, inspire, mobilise. These levels are given fuller general definitions in terms of degrees of autonomy, influence, complexity, and business skills. There are around 87 separate skills defined, and for each skill, there is a description of what is expected at each defined level, of which there are between 1 and 6 per skill.

The European e-Competence Framework (e-CF), on which work began in 2007, was influenced by SFIA, but has just 5 “proficiency levels”, simply termed e-1 to e-5. The meaning of each level is given within each e-competence. There are 36 e-competences, grouped into 5 areas.

The e-CF refers to the cross-subject European Qualifications Framework (EQF), which has 8 levels. Level e-1 corresponds to EQF level 3; e-2 to EQF 4 and 5; e-3 to EQF 6; e-4 to EQF 7; and e-5 to EQF 8. However, the relationships between e-CF and SFIA, and between SFIA and EQF, are not as clear-cut. The EQF gives descriptors for each of three categories at each level: “Knowledge”, “Skills”, and “Competence” — that is, 24 descriptors in all.
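The e-CF to EQF correspondence just described can be written down directly as a small lookup table (a sketch; the function name is illustrative):

```python
# The e-CF to EQF level correspondence described above, as a lookup table.
# Note that e-2 maps to two EQF levels, so values are lists.
ECF_TO_EQF = {
    "e-1": [3],
    "e-2": [4, 5],
    "e-3": [6],
    "e-4": [7],
    "e-5": [8],
}


def eqf_levels(ecf_level: str) -> list:
    """Return the EQF level(s) corresponding to an e-CF proficiency level."""
    return ECF_TO_EQF[ecf_level]


print(eqf_levels("e-2"))  # [4, 5]
```

That this mapping is not one-to-one is itself part of the point: correspondences between level frameworks are rarely exact.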

This small selection of well-developed frameworks is enough to show conclusively that there is no universally agreed set of levels. In the absence of such agreement, levels only make sense in terms of the framework that they belong to. All these frameworks give descriptors of what is expected at each level, and the process of assigning a level will essentially be a process of gauging which descriptor best fits a particular person’s performance in a relevant setting. While this is not a precise science, the kind of descriptors used suggest that there might be a reasonable degree of agreement between assessors about the level of a particular individual in a particular area.

For comparison, it is worth mentioning some other frameworks. (Here are just two more to broaden the scope of the examples; but there are very many others throughout the professions, and in learning, education and training.)

In the UK, the National Health Service has a Knowledge and Skills Framework (NHS KSF), published in 2004. It is quite like the e-CF in structure, in that there are 30 areas of knowledge and skill (called, perhaps confusingly, “dimensions”), and for each “dimension” there are descriptors at four levels, from the lowest 1 to the highest 4. As with all level structures, higher-level competence in one particular “dimension” seems to imply coverage of the lower levels, though a level on one “dimension” has no obvious implication about levels in other “dimensions”.

A completely different application of levels is seen in the Europass Language Passport. This offers 6 levels for each of 5 linguistic areas, as a way of self-assessing one’s linguistic abilities. The areas are: listening; reading; spoken interaction; spoken production; and writing. The levels are in three groups of two: basic user A1 and A2; independent user B1 and B2; proficient user C1 and C2. At each level, for each area, there is a descriptor of the ability in that area at that level: 30 different descriptors in all. All of this applies equally to any language, so particular languages do not need to appear in the framework.

Overall, there is a great deal of consistency in the ways in which levels are described and used. Given that they have been in use now for many years, it makes clear sense for any competence structure to take account of levels, by allowing a competence claim, or a requirement, to specify a level as a qualifier to the area of competence, with that level tied to the framework to which it belongs, where it is defined in terms of a descriptor. This use of levels will at least make processing of competence information a little easier.
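A minimal sketch of that suggestion, with illustrative names throughout: a claim carries its level together with the framework the level belongs to, and levels are only compared within one framework, never across frameworks.

```python
# Sketch: a level only makes sense within its framework, so a claim
# carries (framework, level) together. Names and values are illustrative.

from typing import NamedTuple


class LevelledClaim(NamedTuple):
    competence: str
    framework: str   # e.g. "SFIA", "e-CF", "NHS KSF"
    level: int


def meets(claim: LevelledClaim, required: LevelledClaim) -> bool:
    """Higher levels subsume lower ones, but only within one framework;
    levels from different frameworks are not directly comparable."""
    if claim.framework != required.framework:
        raise ValueError("levels from different frameworks are not comparable")
    return claim.competence == required.competence and claim.level >= required.level


print(meets(LevelledClaim("programming", "SFIA", 5),
            LevelledClaim("programming", "SFIA", 3)))  # True
```

Raising an error on a cross-framework comparison, rather than guessing at a conversion, reflects the point above: any mapping between frameworks has to be stated explicitly, as the e-CF does for the EQF.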

But beyond levels it seems to get harder. The next topic to be covered will be other attributes, including conditions or context of competence.

Analysis and structure of competence

(3rd in my logic of competence series)

I have suggested that the natural way of identifying competence concepts relates to the likely correlation of “the ability to do what is required” between different tasks and situations that may be encountered, requiring similar competence. Having identified an area of competence in this way, how could it best be analysed and structured?

First, we should make a case that analysis is indeed needed. Without analysis of competence concepts, we would have to assume that going through any relevant education, training or apprenticeship, leading to recognition, or a relevant qualification, gives people everything they need for competence in the whole area. If this were true, distinguishing between, say, the candidates for a job would not be on the basis of an analysis of their competence, but on the basis of personal attributes, or reputation, or recommendation. While this is indeed how people tend to handle getting a tradesperson to do a private job, it seems unlikely that it would be appropriate for specialist employees. Thus, for example, many IT employers do not just want “a programmer”, but one who has experience or competence in particular languages and application areas.

On the other hand, it would not be much use only to recruit people who had experience of exactly the tasks or roles required. For a new role, there will naturally not be anyone with that exact prior experience. And equally obviously, people need to develop professionally, gaining new skills. So we need ways of measuring and comparing ability that are not just in terms of time served on the job. In any case, time served on a job is not a reliable indicator of competence. People may learn from experience at different rates, as well as learning different things, even from the same experience. This all points to the need to analyse competence, but how?

We should start by recognising the fact that there are at present no universally accepted rules for how to analyse competence concepts, or for what their constituent parts should look like. Instead of imagining some ideal a priori analytical scheme, it is useful to start by looking at examples of how competence has been analysed in practical situations. First, back to horticulture…

The relevant source materials I have to hand happen to be the UK National Occupational Standards (NOSs) produced by LANTRA (the UK’s Sector Skills Council for land-based and environmental industries). The “Production Horticulture” NOSs have 16 “units” specific to production horticulture, such as “Set out and establish crops”, “Harvest and prepare intensive crops”, and “Identify and classify plants accurately using their botanical names”. Alongside these specialist units, there are 21 other units either borrowed from, or shared with, other NOSs, such as “Monitor and maintain health and safety”, “Receive, transmit and store information within the workplace”, and “Provide leadership for your team”. At this “unit” level, the analysis of what it takes to be good at production horticulture seems understandable, with a good degree of common sense. Most areas of expertise can be broken down in this way to the kind of level where one sees individual roles, jobs or tasks that could in principle be allocated to different people. And there is often a logic to the analysis: to get crops, you have to prepare the ground, then plant, look after, and harvest the crops. That much is obvious to anyone. More detailed, less obvious analysis could be given by someone with relevant experience.

Even at this level of NOS units, there is some abstraction going on. LANTRA evidently chose not to create separate units or standards for growing carrots, cabbages and strawberries. Going back to the ideas on competence correlation, we infer that there is much in common between competence at growing carrots and strawberries, even if there are also some differences. This may be where “knowledge” comes into play, and why occupational standards seem universally to list knowledge as well as skills. If someone is competent at growing carrots, then perhaps simply their knowledge of what is different between growing carrots and growing strawberries goes much of the way towards their competence in growing strawberries. But how far? That is less clear.

Abstraction seems to be even more extensive at lower levels. Taking an arbitrary example, the first, fairly ordinary unit in “Production Horticulture” is “Clear and prepare sites for planting crops”, subdivided into two elements, PH1.1 “Clear sites ready for planting crops” and PH1.2 “Prepare sites and make resources available for planting crops”. PH1.2 contains lists of 6 things that people should be able to do, and 9 things that they should know. The second item in the list of things that people need to be able to do is “place equipment and materials in the correct location ready for use”, which self-evidently requires a knowledge of what the correct location is. The fifth item is to “keep accurate, legible and complete records”. This is supported by an explicit knowledge requirement, documented as “the records which are required and the purpose of such records”.

This is quite a substantial abstraction, as these examples could make equal sense in a very wide range of occupational standards. In each case, the exact nature of these abilities needs to be filled out with the relevant details from the particular area of application. But no formal structure is given for these abstractions, here or, as far as I know, in any occupational standard, and this leads to problems.

For example, there is no way of telling, from the standard documentation, the extent to which proving the ability to keep accurate records in one domain is evidence of the ability to keep accurate records in another domain; and indeed no way is provided to document views about the relationship between various record-keeping skills. When describing wide competences, this may be somewhat less of a problem, because when two skills or competences are analysed explicitly, one can at least compare their documented parts to arrive at some sense of the degree of similarity, and the degree to which competence in one might predict competence in another. But at the narrowest, finest-grained level documented — in the case of NOSs, the analysis of a unit or element into items of skill and items of knowledge — it means that, though we can see the abstractions, it is not obvious how to use them, and in particular it is not clear how to represent them in information systems in a way that would allow them to be automatically compared, or otherwise managed.

Much has been written, speculatively, about how competence descriptions and structures might effectively be used with information systems, for example acting as the common language between the outcomes of learning, education and training on the one hand, and occupational requirements on the other. But to make this effective in practice, we need to get to grips properly with these questions of abstraction, structure and representation, to move forward from the common-sense but informal abstractions and loose structures presently in use, to a more formally structured, though still flexible and intuitive, approach.

The next two blog entries will attempt to explore two possible aspects of formalisation: levels, and other features often left out from competence definitions, including context or conditions.

    Competence concepts and competence transfer

    (2nd in my logic of competence series)

    If we take competence as the ability to do what is required in a particular situation, then there is a risk that competence concepts could proliferate wildly. This is because “what is required” is rarely exactly the same in different kinds of situations. Competence concepts group together the abilities to do what is required in related situations, where there is at least some correlation between the competence required in the related situations — sometimes talked about in terms of transfer of competence from one situation to another.

    For example, horticulture can reasonably be taken as an area of competence, because if one is an able horticulturalist in one area — say growing strawberries — there will be some considerable overlap in one’s ability in another, less practised area — say growing apples. Yes, there are differences, and a specialist in strawberries may not be very good with apples. But he or she will probably be much better at it than a typical engineer. Surgery provides a quite different example. A specialist in hip replacements might not be immediately competent in kidney transplants, but the training necessary to achieve full competence in kidney transplants would be very much shorter than it would be for a typical engineer.

    Some areas of competence, often known as “key skills”, appear across many different areas of work, and probably transfer well. Communication skills, team working skills, and other areas at the same level play a part in the full competence of many different roles, though the communication skills required of a competent diplomat may be at a different level to those required of a programmer. Hence, we can meaningfully talk about skill, or competence, or competency, in team work. But if we consider the case of “dealing with problems” (which may reasonably be taken as part of full competence in many areas), what is required probably has very little in common across those different areas. We therefore do not tend to think of “dealing with problems” as a skill in its own right.

    But we do recognise that dealing with problems in, say, different horticultural contexts has something in common: when someone shows themselves able to deal with problems in one situation, we probably only need to inform them of what problems may occur and what action they are meant to take, and they will be able to take appropriate action in another area of horticulture. As people gain experience in horticulture, one would expect them to gain familiarity with the general kinds of equipment and materials they have to deal with, although any particularly novel items may need learning about.

    Clearing and preparing sites for crops may well have some similarity to other tasks or roles in production horticulture and agriculture more generally, but is unlikely to have much in common with driving or surgery. The more skills or competences in two fields have in common, the more that competence in one field is likely to transfer to competence in another.
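The idea that more shared skills means more likely transfer can be sketched very crudely by treating each field as a set of named skills and measuring their overlap — here with Jaccard similarity, which is my choice of illustration rather than anything proposed in the standards. The skill names are invented; a real analysis would be far richer than flat sets:

```python
# Illustrative sketch only: fields as sets of skill names, with Jaccard
# similarity (shared skills / all skills) as a crude proxy for the
# likelihood of transfer between fields.

def overlap(field_a, field_b):
    """Jaccard similarity between two sets of skill names."""
    union = field_a | field_b
    if not union:
        return 0.0
    return len(field_a & field_b) / len(union)

site_preparation = {"clear ground", "assess soil", "operate machinery", "keep records"}
crop_establishment = {"assess soil", "operate machinery", "keep records", "sow seed"}
surgery = {"keep records", "sterile technique", "suturing"}

print(overlap(site_preparation, crop_establishment))  # relatively high
print(overlap(site_preparation, surgery))             # much lower
```

On these toy sets, site preparation overlaps far more with crop establishment than with surgery, matching the intuition above: shared "keep records" alone tells us very little about transfer to an unrelated field.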

    So, we naturally accept competence concepts as meaningful, I’m claiming, in virtue of the fact that they refer to types of situation where there is at least some substantial transfer of skill between one situation and another. The more we can identify transfer going on, the more naturally we are inclined to see one area of competence. Conversely, to the extent that there is no transfer, we are likely to see the competences as distinct. This way of thinking naturally supports the way we informally deal with reputation, which is generally handled in as general terms as seem adequate. This failure to look into the details of what we mean to require does, though, lead to mistakes. How did we not know that the financial adviser we took on didn’t know about the kind of investments we really wanted, or was indeed less than wholly ethical in other ways?

    Having a clearer idea of what a competence is prepares the way for thinking more about the analysis and structure of competence.

    The basis of competence ideas

    (1st in my logic of competence series)

    Let’s start with a deceptively simple definition. Competence means the ability to do what is required. It is the unpacking of “what is required” that is not simple.

    I don’t want to make any claims for that particular form of words — there are any number of definitions current, most of them quite reasonable in their own way. But, in the majority of definitions, you can pick out two principal components: here, they are “the ability to do” and “what is required”. Rowin’s and my earlier paper does offer some other reasonable definitions of what competence means, but I wanted here to start from something as simple-looking as possible.

    If the definition is to be helpful, “the ability to do” has to be something simpler than the concept of competence as a whole. And there are many statements of basic, raw ability that would not normally be seen as amounting to competence in any distinct sense. The answers to questions like “can you perform this calculation in your head”, “can you lift this 50 kg weight” and “can you thread this needle” are generally taken as matters of fact, easily testable by giving people the appropriate equipment and seeing if they can perform the task.

    What does “what is required” mean, then? This is where all the interest and hidden complexity arises. Perhaps it is easiest to go back to the basic use of competence ideas in common usage. For a job — with an employer, perhaps, or just getting a tradesperson to fix something — “what is required” is that the person doing the job is competent at the role he or she is taking on. Unless we are recruiting someone, we don’t usually think this through in any detail. We just want “a good gardener”, or to go to “a good dentist”, without knowing exactly what being good at these roles involves. We often just go on reputation: has that person done a good job for someone we know? would they recommend them?

    The idea is similar from the other point of view. If I want a job as a gardener or a dentist, at the most basic level I want to claim (and convince people) that I am a good gardener, or a good dentist. Exactly what that involves is open to negotiation. What I’m suggesting is that these are the absolute basics in common usage and practice of concepts related to competency. It is, at root, all about finding someone, or claiming that one is the kind of person, that fulfils a role well, according to what is generally required.

    People claim, or require, a wide range of things that they “can do” or “are good at”. At the most familiar end of the spectrum, we think of people’s ability or competence for example at cooking, housework, child care, driving, DIY. There are any number of sports and pastimes that people may be more or less good at. At the formal and organisational end of the spectrum, we may think of people as more or less good at their particular role in an organisation — a position for which they may be employed, and which might consist of various sub-roles and tasks. The important point to base further discussion on is that we tend normally to think about people in these quite general terms, and people’s reputation tends to be passed on in these quite general terms, often without explicit analysis or elaboration, unless specific questions are raised.

    When either party asks more specific questions, as might happen in a recruitment situation, it is easy to imagine the kind of details that might come up. Two things may happen here. First, questions may probe deeper than the generic idea of competence, to the specifics of what is required for this particular job or role. And second, the issue of evidence may come up. I’ll address these questions later, but next I want to discuss how competence concepts are identified in terms of transferability.

    But the point I have made here is that all this analysis is secondary. Because common usage does not rely on it, we must take the concept of competence as resting primarily just on the claim and on the requirement for a person to fill a role.

    The logic of competence

    This is a note introducing a series of posts setting out the logic of competence as I see it. I will link from here to other posts in the series as I write them.

    This work as a whole is intended to feed into several activities in which I have been taking part, including InLOC, eCOTOOL, ICOPER, MedBiquitous Competencies WG, Competence Structures for E-Portfolio Tools, and the CEN WS-LT Competency SIG, which had its 3rd annual meeting in Berlin near the beginning of the series. It builds on and complements Rowin’s and my earlier paper, intending not to set out an academic case, which we did in that paper, but rather the detailed logic, which can be evaluated on its own terms, requiring reference only to common language and practice.

    The first step is to express a working definition, and a logical basis for further discussion, which is that it is expressions like claims to competence, rather than competency definitions, that are logically prior. See № 1, “The basis of competence ideas”.

    I will continue by considering (please click to go to the posts)

    1. how transferability gives a competence concept its logical identity
    2. how the analysis of just what a competence claim is claiming results in various possible structures for the competence-related concepts
    3. how to make sense of levels of competence
    4. how to make sense of criteria, conditions or context
    5. basic tree structuring of competence concepts
    6. desirable variants of tree structures (including more on levels)
    7. representing the commonality in different structures of competence
    8. other less precise cross-structure relationships
    9. definitions, and a map, of several of the major concepts used, together with logically related ones.

    Continuing towards practical implementations:

    1. the requirements for implementing the logic of competence
    2. representing the interplay between concept definitions and structures
    3. representing structural relationships
    4. different ways of representing the same logic
    5. optional parts of competence
    6. the logic of National Occupational Standards
    7. the logic of competence assessability
    8. representing level relationships
    9. more and less specificity in competence definitions
    10. the logic of tourism as an analogy for competence
    11. the pragmatics of InLOC competence logic
    12. InLOC as a cornerstone for other initiatives
    13. InLOC and open badges: a reprise
    14. open frameworks of learning outcomes
    15. why frameworks of skill and competence?
    16. how to do InLOC
    17. the key to competence frameworks

    I will try, where possible, to motivate and illustrate each point by reference to examples, drawn from existing published materials.

    After all the parts have been published and discussed, I intend to put together a full paper (placement as yet undecided) incorporating and crediting ideas from other people — so please contribute these, ideally as comments on the posts themselves, or alternatively just to me.

    Later addition, February 2011: I recognise that some of these posts are more than just bite sized. Are there some that you find too much of a mouthful to chew and/or swallow? Might that hold you back from commenting? If so, here is an offer: get in touch with me and I will talk you through any of this material you are interested in, while at the same time I will try to understand where you are coming from, and what is easier or harder for you to grasp. That will help me to express myself more clearly and simply, where I have not yet achieved clarity. I hope this will help!