(19th in my logic of competence series.)
Descriptions of personal ability can serve either as claims, like “This is what I am good at …”, or as answers to questions like “What are you good at?” or “Can you … ?” In conversations — whether informal, or formal as in a job interview — the claims, questions, and answers may be more or less specific. That is a necessary and natural feature of communication. It is the implications of this that I want to explore here, as they bear on my current work, particularly the InLOC project.
This is a new theme in my logic of competence series. Since the previous post in that series, I have had to focus on completing the eCOTOOL competence model and managing the initial phases of InLOC, which left little time for following up earlier thinking. But there were ideas clearly evident in my last post in this series (representing level relationships), and now is the time for follow-up and development. The terms introduced there can be linked to this new idea of specificity. Simply: binarily assessable concepts are ones that are defined specifically enough for a yes/no judgement about a person’s ability; rankably assessable concepts have an intermediate degree of specificity, and are complemented by level definitions; while unorderly assessable concepts are ones that are less specifically defined, requiring more specificity to be properly assessable. (See that previous post for explanation of those terms.) The least specific competence-related concepts are not properly assessable at all, but serve as tags or headings.
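To make these distinctions a little more concrete, here is a minimal sketch of how they might be captured as a data structure. This is purely my own illustration, not part of InLOC, eCOTOOL or any other specification, and all the names in it (Assessability, CompetenceConcept) are invented for the purpose:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Assessability(Enum):
    """Degree of specificity of a competence-related concept (terms from the previous post)."""
    BINARY = auto()     # defined specifically enough for a yes/no judgement
    RANKABLE = auto()   # intermediate specificity, complemented by level definitions
    UNORDERLY = auto()  # needs more specificity before it can be properly assessed
    TAG = auto()        # not properly assessable; serves as a tag or heading

@dataclass
class CompetenceConcept:
    label: str
    assessability: Assessability
    levels: list[str] = field(default_factory=list)  # only meaningful for RANKABLE concepts

# Examples from later in this post, roughly in decreasing order of specificity:
place_equipment = CompetenceConcept(
    "place equipment and materials in the correct location ready for use",
    Assessability.BINARY)
spoken_french = CompetenceConcept(
    "spoken interaction in French", Assessability.RANKABLE,
    levels=["A1", "A2", "B1", "B2", "C1", "C2"])
earth_materials = CompetenceConcept(
    "understanding of the characteristics of earth materials",
    Assessability.UNORDERLY)
ict = CompetenceConcept("ICT", Assessability.TAG)
```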
As well as giving weight and depth to this idea of specificity in competence definitions, in this post I want to explore the connection between competence definitions and answering questions. I think this will help to explain the ideas, because it is relatively straightforward to understand that questions and answers can be more or less specific.
Since the previous post in the series, my terminology has shifted slightly. The goals of InLOC — Integrating Learning Outcomes and Competences — have made it plain that we need to deal equally with learning outcomes and with competence or ability concepts. So I include “learning outcomes” more liberally, always meaning intended learning outcomes.
Job interviews
Imagine you are interviewing someone for a job. To make it more interesting, let’s make it an informal one: perhaps a mutual business contact has introduced you to a promising person at a business event. Add a little pressure by imagining that you have just a few minutes to make up your mind whether you want to ask this person to go through a longer, formal process. How would you structure the interview, and what questions would you ask?
As I envisage the process, one would probably start off with quite general, less specific questions, and then go into more detail where appropriate, where it mattered. So, for instance, one might ask “are you a programmer?”, and if the answer was yes, go into more detail about languages, development environments, length of experience, type of experience, etc. etc. The useful detail in this case would depend entirely on the circumstances of the job. For a graduate to be recruited into a large company, what matters might be aptitude, as it would be likely that full training would be supplied (which you could perhaps see as a kind of technical “enculturation”). On the other hand, for a specialist to join a short-term high-stakes project, even small details might matter a lot, as learning time would probably be minimal.
In reality, most job interviews start not from a blank sheet, but from the basis of a job advert, and an application form, or a CV and covering letter. A job advert may specify requirements; an application form may contain specific questions for which answers are expected; in the absence of an application form, a CV and covering letter need to try to answer, concisely, some of the key questions that would be asked first in an informal, unprepared job interview. This naturally explains the universal advice that CVs should be designed specifically for each job application. What you say about yourself unprompted not only reveals that information itself, but also says much about what you expect the other person to reckon as significant or interesting.
So, in the job interview, we notice the natural importance of varying specificity in descriptions and questions about abilities and experience.
Recruitment
This then carries over to the wider recruitment process. Potential employers often formulate a list of what is required of prospective employees, in terms of which abilities and experience are essential or desirable, but the detail and specificity of each item will naturally vary. The evidence for a less specific requirement may be assessed at interview with some quick general questions, but a more exacting requirement may want harder evidence such as a qualification, certificate or testimonial from an expert witness.
For example, in a regulated area such as pesticides, which I wrote about recently, an employer might well want a prospective employee to have obtained a relevant certificate or qualification, so that they can legally do their job. Even when a certificate is not a legal requirement, some are widely asked for. A prospective sales employee with a driving licence, or an office employee with an ECDL, might be preferred over one without, and it would be perfectly reasonable for an employer to insist that non-native speakers had obtained a given certified level of proficiency in the principal workplace language. In each case, because the certificate is awarded only to people who have passed a carefully controlled test, the test result serves to answer many quite specific questions about the holder’s abilities, as well as potentially establishing the legal fact that they are allowed to perform certain actions in regulated occupations.
Vocational qualifications often detail quite specifically what holders are able to do. This is clearly the intention of the Europass Certificate Supplement (ECS), and has been the intention in the UK through the system of National Vocational Qualifications, which rely on National Occupational Standards. So we could expect that employers with specific learning outcome or competence requirements may specify that candidates should have particular vocational qualifications; but what about less specific requirements? My guess is that the employers who have little regard for vocational qualifications are just those whose requirements are less specific. Time was when many employers looked only for a “good degree”, which in the UK often meant a “2:1”, an upper second class. This was supposed to answer generic questions, as typically the specific subject of the degree was not specified. Now there is a growing emphasis on the detail of the degree transcript or Europass Diploma Supplement (EDS), from which a prospective employer can read at least assessment results, if not yet explicit details of learning outcomes or competences. There is also an increasing trend towards making explicit the intended learning outcomes of courses at all levels, so the course information might be more informative than the transcript or EDS.
Interestingly, the CVs of many technical workers contain highly unspecific lists of programming languages in which the individual implicitly claims some ability, saying nothing about the detail of that ability or experience. These lists answer only the most general questions, and serve effectively only to open a conversation about what the person’s actual experience and achievements in those languages have been. At least for human languages there is the increasingly used CEFR; there does not appear to be any comparably recognised framework for programming languages. Perhaps, in the case of programming languages, it would be clumsy and ineffective to give answers to more detailed questions, because the individual does not know what those detailed questions would be.
Specificity in frameworks
Frameworks seem to gravitate towards specificity. Given that some people want to know the answers to specific questions, this is quite reasonable; but where does that leave the expression of the less specific requirements? For examples of curriculum frameworks, there is probably nowhere better than the American Achievement Standards Network (ASN). Here, as in many other places, learning outcomes are defined only in one or two levels. The ASN transcribes documents faithfully, then, among many other things, marks the “indexing status” of the various components. For an arbitrary example, see Earth and Space Science, which is a topic heading and not “indexable”. The heading below it just states what the topic is about, and is not “indexable” either. It is below this that the content becomes “indexable”, first with some less specific statements about what should be achieved by the end of fourth grade, broken down into the smallest components such as “Identify characteristics of soils, minerals, rocks, water, and the atmosphere”. It looks as though it is just the “indexable” resources that are intended to represent intended learning outcome definitions.
At fourth grade, this clearly has nothing to do with employment, but even so, identifying characteristics of soils etc. is something that students may or may not be able to do, and this is part of the less specifically defined (but still “indexable”) “understanding of the characteristics of earth materials”. It strikes me that the item about identifying characteristics would fit reasonably (in my scheme of the previous post) as a “rankably assessable” concept, and its parent item about understanding might be classified (in my scheme) as unorderly assessable.
How to represent varying specificity
Having pointed out some practical examples of varying specificity in definitions of learning outcomes or competence, I can turn to the important issue for work such as InLOC: providing some way of representing not only different levels of specificity, but also how they relate to one another.
An approach through considering questions and answers
Any concept that is related to learning outcomes or competence can provide the basis for questions of an individual. Some of these questions have yes/no answers; some invite answers on a scale; some invite a longer, less straightforward reply, or a short reply that invites further questions. A stated concept can be both the answer to a question, and the ground for further questions. So, to go back to some of the above examples, a CV might somewhere state “French” or “Java”. These might be answers to the questions “what languages have you studied?” or “what languages do you use?” They also invite further questions, such as “how well do you know …?”, or “how much have you used …, and in what contexts?”, or “how good are you at …?” – which, if there is an appropriate scale, could be reformulated as “what level is your ability in …?”
Questions could be found corresponding to the ASN examples as well. “Identify characteristics of soils, minerals, rocks, water, and the atmosphere” has a format that allows the prefixes “can you …?” or “I can …”. The less specific statement — “By the end of fourth grade, students will develop an understanding of the characteristics of earth materials” — looks like it corresponds with questions more like “what do you understand about earth materials?”.
As well as “summative” questions, there are related questions that are used in other ways than assessment. “How confident are you of your ability in …?” and “is your ability in … adequate in your current situation?” both come to mind (stimulated by considerations in LUSID).
What I am suggesting here is that we can adapt some of the natural properties of questions and answers to fit definitions of competence and ability. So what properties do I have in mind? Here is a provisional and tentative list.
- Questions can be classified as inviting one of four kinds of answer:
- yes or no;
- a value on a (predefined) scale;
- examples;
- an explanation that is more complex than a simple value.
- These types of answer probably need little explanation – many examples can readily be imagined (and a rough sketch of how they might be represented follows this list).
- The same form of answer can relate to more than one question, but usually the answer will mean different things. To be fully and clearly understood, an answer should relate to just one question. Using the above example, “French” as the answer to “what languages have you studied?” means something substantially different from “French” as the answer to “what languages are you fluent in?”
- A more specific question may imply answers to less specific questions. For example, any substantive answer to “what programming languages have you used in software development?” implies the answer “software development” to the question “what competences do you have in ICT?” Many such implied questions and answers can be formulated. What matters in a particular framework is which other answers within that framework can be inferred.
- An answer to a less specific question may invite further more specific questions.
- Conversely to the example just above, if the answer to “what competences do you have in ICT?” includes “software development”, a good follow-up question might be “what programming languages have you used in software development?” Similar patterns could be seen for any technical specialty. Often, answers like this may be taken from a known list of options: there are only so many languages, both human and computer.
- Where an answer is a rankable concept, questions about the level of that ability are invited. For instance, the question “what foreign languages can you speak?”, answered with “French” and “Italian”, invites questions such as “what is your European Language Passport level of ability in spoken interaction in French?”
- Where an answer has been analysed into its component parts, questions about each component part make sense. For example, if, following the LANTRA Treework NOS (2009), the answer to “are you able to clear sites for tree planting?” was “yes”, that invites the narrower implied questions set out in that NOS, like “can you select appropriate clearance methods …?” or “do you understand the potential impacts of your work on the environment …?”
- Unless a question is fully specific, admitting only the answers yes and no (and often even then), it is nearly always possible to ask further questions and give further answers. But everyone’s interest in detail stops sooner or later. The place to stop asking more specific questions is when the answer does not significantly affect the outcome you are looking for; and that varies between different interested parties.
- Questions may be equivalent to other questions in other frameworks. This will come out from the answers given. If the answers given by the same person in the same context are always the same for two questions, they are effectively equivalent. It is genuinely helpful to know this, as it means that one can save time not repeating questions.
- Answers to some questions may imply answers to other questions in different frameworks, without being equivalent. The answers may contain, or be contained by, their counterparts. This is another way of linking together questions from different frameworks, and saving asking unnecessary extra questions.
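To make the properties listed above more concrete, here is the rough sketch promised: a tentative way of representing questions, the kinds of answer they invite, and the relations between more and less specific questions. Again, the names (Question, AnswerKind, implies, invites) are my own, purely for illustration, and not part of any existing framework or specification:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AnswerKind(Enum):
    YES_NO = auto()       # "are you able to ...?"
    SCALE = auto()        # "what level is your ability in ...?"
    EXAMPLES = auto()     # "what languages do you use?"
    EXPLANATION = auto()  # "what do you understand about earth materials?"

@dataclass
class Question:
    text: str
    answer_kind: AnswerKind
    # less specific questions, answers to which are (partly) implied by answers to this one
    implies: list["Question"] = field(default_factory=list)
    # more specific follow-up questions that answers to this one may invite
    invites: list["Question"] = field(default_factory=list)

# The CV example: "software development" answers the broad ICT question,
# and invites the more specific question about programming languages.
ict_q = Question("what competences do you have in ICT?", AnswerKind.EXAMPLES)
langs_q = Question("what programming languages have you used in software development?",
                   AnswerKind.EXAMPLES, implies=[ict_q])
ict_q.invites.append(langs_q)

# A rankable concept invites a level question on a predefined scale.
french_q = Question("what foreign languages can you speak?", AnswerKind.EXAMPLES)
french_level_q = Question(
    "what is your European Language Passport level of ability "
    "in spoken interaction in French?", AnswerKind.SCALE)
french_q.invites.append(french_level_q)
```

Equivalence and implication between questions belonging to different frameworks could be recorded in the same spirit, as links between questions in the different frameworks, so that the same thing need not be asked twice.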
That covers a view of how to represent varying specificity in questions and answers, but not yet frameworks as they are at present.
Back to frameworks as they are at present
At present, it is not common practice to set out frameworks of competence or ability in terms of questions and answers, but only in terms of the concepts themselves. But, to me, it helps understanding enormously to imagine the frameworks as frameworks of questions, and the learning outcome or competence concepts as potential answers. In practice, all you see in the frameworks is the answers to the implied questions.
Perhaps this has come about through a natural process of doing away with unnecessary detail. The overall question in occupational competence frameworks is, “are you competent to do this job?”, so it can go unstated, with the title of the job standing in for the question. The rest of the questions in the framework are just the detailed questions about the component parts of that competence (see Carroll and Boutall’s ideas of Functional Analysis in their Guide to Developing National Occupational Standards). The formulation with action verbs helps greatly in this approach. To take NOS examples from way back in the 3rd post in this series, the units themselves and the individual performance criteria share a similar structure. Less specifically, “set out and establish crops” relates both to the question “are you able to set out and establish crops” and the competence claim “I am able to set out and establish crops”. More specifically, “place equipment and materials in the correct location ready for use” can be prefixed with “are you able to …” for a question, or “I am able to …” as a claim. Where all the questions take a form that invites answers yes or no, one really does not need to represent the questions at all.
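As a small illustration of that last point, here is a sketch (the function names are mine, and purely illustrative) of how the implied questions and claims can be regenerated from the bare statements, so that only the statements themselves need to be represented:

```python
def implied_question(statement: str) -> str:
    """Turn an action-verb competence statement into its implied yes/no question."""
    return f"Are you able to {statement}?"

def implied_claim(statement: str) -> str:
    """Turn the same statement into the corresponding first-person claim."""
    return f"I am able to {statement}."

# NOS-style statements, from a unit title down to a single performance criterion:
for statement in [
    "set out and establish crops",
    "place equipment and materials in the correct location ready for use",
]:
    print(implied_question(statement))
    print(implied_claim(statement))
```

Because the transformation is completely uniform, the framework only needs to record the statements; the questions and claims can always be regenerated.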
With a less uniform structure, one would need mentally to remove all the questions to get a recognisable framework; or conversely, to understand a framework in terms of questions, one needs to add in those implied questions. This is not as easy, and perhaps that is why I have been drawn to elaborating all those structuring relationships between concepts.
We are left in a place that is very close to where we were in the previous post. At simplest, we have the individual learning outcome or competence definitions (which are the answers) and the frameworks, which show how the answers connect up, without explicitly mentioning the questions themselves. The relations between the concepts can be factored out, and presented either together in the framework, or separately, alongside the concepts that they relate.
If the relationships are simply “broader” and “narrower”, things are pretty straightforward. But if we admit less specific concepts and questions, then because the questions are not explicitly represented, the structure needs a more elaborate set of relationships. In particular, we have to make special provision for rankable concepts and levels. I’ll leave detailing the structures we are left with for later.
Before that, I’d like to help towards better grasp of the ideas through the analogy with tourism.