Academic humility

Another conversation with Mark Johnson yesterday got me thinking seriously about the role of humility, and of confidence, in the academic community. If there is a literature on this, I don’t know it, so my remarks here are appropriately tentative. But the issue is of surprising importance to personal development, so perhaps worth the risk of some speculation.

Back when I was doing my PhD (gulp) years ago, I suppose I must have recognised, probably only just in time, how important humility is for a PhD candidate turning into a fully-fledged member of the academic community. To cut a long story short, I would now say that it should be a requirement of any PhD viva, for any candidate who is in line to pass, to be thrown a few questions outside their area of research. I fear I don’t recall which such questions were asked of me, but perhaps they were about more mainstream psychology, for example.

There seems to me to be a range of acceptable answers to questions outside one’s area of expertise. The simplest is something like “I’m sorry, I don’t know.” PhD candidates are not meant to be masters of general knowledge. While this answer would be problematic if the question were about the candidate’s central thesis, off topic it is not only allowed, but just about necessary. Or it could be, “I haven’t read enough of the literature to know the answer there, so I’d rather not speculate.” Or, at a pinch, “Well, I don’t know this literature well, but I’d guess that …”. Certainly not just an unqualified, unfounded opinion in a PhD viva.

This isn’t really the flavour of the month, though, is it? The world seems to be fuller than ever of people willing to tout their own conjectures, or worse, prejudices, as facts. People in all walks of life seem to be called upon to be confident in their own views, so it is hard to find good role models for appropriate academic or intellectual humility. There was Socrates, of course, but that was rather a long time ago for people with little sense of history.

Why should this matter? One thing Mark pointed out was the interests of the future students of this potential member of the academic community. To help students reach their own full intellectual maturity, it is rarely a good thing to lay down “the truth” and expect students to lap it up. Do this, perhaps, for younger school pupils, if they still need a basic set of working views to start with, and provided that the same line is taken by all other teachers in the establishment. But after that, teachers, lecturers, mentors and coaches need to provide good and timely questions – another thing Socrates excelled at.

Perhaps undergraduates do need to absorb a corpus of existing established belief on a topic, but they also need a clear awareness of the boundaries of existing knowledge. Focusing on what one does know is less likely to help than focusing on what people dispute.

This takes me back so strongly to the work of William G Perry, Robert Kegan, and Darren Cambridge. There is a hard challenge here when one comes to the stage of PhD study – the academic apprenticeship. On the one hand, as Kegan points out, a graduate needs to come to their own conclusions, not just accept one single authority in a discipline. As Darren Cambridge points out, many subjects have “essentially contestable” concepts there to be studied, and portfolio practice allows the learners to argue for their position; to marshal evidence for it; to be a co-creator of meaning in their part of the academic world. This is the place, perhaps, for confidence, because if one is not confident of one’s own intellectual position, who else will take it seriously? Who will count it as a material contribution to the academic world? As I recall it, this corresponds at least roughly to Kegan’s “4th Order” thinking.

Kegan’s 5th (and final) Order is rather more elusive. I can’t summarise such a subtle concept here quickly – go read “In Over Our Heads” etc. But to me, what Kegan calls the 5th Order is to do with being aware of all the orders; being able to adjust one’s response to fit the person with whom one is conversing. This is what teachers should be aiming at, it seems to me. They should be well-developed, intellectually and ethically, in themselves, but also aware of the needs of those who are not so well-developed, and able to speak to their condition. And this feels like a deeper and even more wholesome kind of humility than the humility of knowing the boundaries of one’s specialist area of study. It includes the recognition that all one’s specialist knowledge is useless and inappropriate in some contexts. It includes the idea that to question every tradition in a post-modernist way is not the right thing to do for people who have not yet even established their own way.

This can be confusing, which I suppose is why it requires unusual maturity. To those at earlier stages, one may need mainly to be a knowledgeable and reliable authority. To those at later stages, one needs first to leave room for advanced learners to develop their own ideas and positions, and then to model comfort with being uncertain, to prepare the way for the learner to achieve the same true mastery.

I turn back to T.S. Eliot, East Coker:

“The only wisdom we can hope to acquire
Is the wisdom of humility: humility is endless.”

Questions about ACTA

The Anti-Counterfeiting Trade Agreement (ACTA) has attracted much press attention over recent days. The arguments raised are worrying, but also rather confused. From the point of view of intellectual property and openness (which concerns us in many ways in CETIS, JISC, and UK HE) it is worth aiming some very sharp questions at the weak intellectual foundations of ACTA.

It is worth reading the text of the agreement (just search for a chunk of what I reproduce here). We need go no further than the very first two paragraphs:

The Parties to this Agreement,

Noting that effective enforcement of intellectual property rights is critical to sustaining economic growth across all industries and globally;

Noting further that the proliferation of counterfeit and pirated goods, as well as the proliferation of services that distribute infringing material, undermines legitimate trade and the sustainable development of the world economy, causes significant financial losses for right holders and for legitimate businesses, and, in some cases, provides a source of revenue for organized crime and otherwise poses risks to the public; […]

It’s perhaps not surprising that, if one really believes this, measures can be implemented that lead to bad consequences. We see it so often: people get scared by something, and overreact. I believe that, as thinking people, we should be robustly questioning what comes across as misleading half-truths in this preamble. Before signing up to such a treaty, it should be incumbent on signatories to ensure that the people they represent actually believe the justifications stated in such a preamble; otherwise there is bound to be trouble.

First, where is the actual evidence that “effective enforcement of intellectual property rights is critical to sustaining economic growth across all industries and globally”? At very least, such a treaty should refer to the literature. I don’t know it, but who does?

So, knowing essentially nothing about the literature, I plunge in via a Google search for “intellectual property” and “economic growth”. One really can’t expect the WIPO literature to be unbiased. And many other academic articles are, of course, hidden behind paywalls, benefiting, … er … the copyright holders … who are only rarely the authors, and much more often the vast wealthy empires of publishing. But then I found one paper on an open website: “International trade, economic growth and intellectual property rights: A panel data study of developed and developing countries” by Patricia Higino Schneider, published in the Journal of Development Economics 78 (2005) 529–547. Part of the conclusion is very interesting.

The results regarding intellectual property protection are interesting. They suggest that IPRs have a stronger impact on domestic innovation for developed countries and might even negatively impact innovation in developing countries. These results may be indicative of the fact that most innovation in developing countries may actually be imitation or adaptive in nature. Therefore, providing stronger IPRs protects foreign firms at the expense of local firms.

Even though this is followed by the relatively tame

The policy implication here is not to discourage intellectual property protection in developing countries, but to generate incentives for its strengthening. Innovative activities and IPRs are complementary in nature; therefore, developed countries would benefit by supporting R&D activities in developing countries.

At least, the end of the conclusion

highlights the importance of conducting studies that are inclusive of both developed and developing countries and suggests that pooling together developed and developing countries might lead to misleading conclusions, and consequently to inadequate policy recommendations

And this is from just one article I managed to find. Is that not enough to start casting doubt on the bland assurance of what ACTA “notes”?

That’s only the first paragraph of ACTA.

The second paragraph expresses concern about “significant financial losses for right holders” without questioning the ethics or desirability of this. So what if a right holder is making itself rich by exploiting its IP ownership while withholding free, useful information and cheap, effective solutions from people who need them? No talk of that, but rather of the cases of “organized crime”. That might remind people of the “war on drugs”, but hasn’t even that fallen into disrepute recently?

And these two paragraphs form the sum total of the explicit justification for ACTA. It strikes me as scandalous that such far-reaching and worrying political conclusions can be based on such a contentious basis.

Yes, of course we don’t want criminals making dangerous counterfeit goods. Let’s encourage our governments to fight that kind of thing that we all agree on, without creating highly problematic treaties and laws based on highly dubious premises.

Where are the customers?

All of us in the learning technology standards community share the challenge of knowing who our real customers are. Discussion at the January CEN Workshop on Learning Technologies (WS-LT) was a great stimulus for my further reflection — should we be thinking more of national governments?

Let’s review the usual stakeholder suspects: education and training providers; content providers; software developers; learners; the European Commission. I’ll gesture (superficially) towards arguing that each of these may indeed be a stakeholder, but the direction of the argument is that there is a large space in our clientele and attendance for those who are directly interested and can pay.

Let’s start with the providers of education and training. They do certainly have an interest in standards, otherwise why would JISC be supporting CETIS? But rarely do they implement standards directly. They are interested, so our common reasoning goes, in having standards-compliant software, so that they can choose between software and migrate when desired, avoiding lock-in. But do they really care about what those standards are? Do they, specifically, care enough to contribute to their development and to the bodies and meetings that take forward that development?

In the UK, as we know, JISC acts as an agent on behalf of UK HEIs and others. This means that, in the absence of direct interest from HEIs, it is JISC that ends up calling the shots. (Nothing inherently wrong with that – there are many intelligent, sensible people working for JISC.) Many of us play a part in the collective processes by which JISC arrives at decisions about what it will fund. We are left hoping that JISC’s customers appreciate this, but it is less than entirely clear how much they appreciate the standardisation aspect.

I’ll be even more cursory about content providers, as I know little about that field. My guess is that many larger providers would welcome the chance of excluding their competitors, and that they participate in standardisation only because they can’t get away with doing differently. Large businesses are too often amoral beasts.

How about the software vendors, then? We don’t have to look far for evidence that large purveyors of proprietary software may be hostile in spirit to standardisation of what their products do, and that they are kept in line, if at all, only by pressure from those who purchase the software. In contrast, open source developers, and smaller businesses, typically welcome standards, allowing work to be reused as much as possible.

In my own field of skills and competence, there are several players interested in managing the relevant information about skills and competence, including (in the UK) Sector Skills Councils, and bodies that set curricula for qualifications. But they will naturally need some help to structure their skill and competence information, and for that they will need tools, either ones they develop themselves or ones they buy. It is those tools that are in line to be standards compliant.

And what of the learners themselves? Seems to me “they” (including “we” users) really appreciate standards, particularly if it means that our information can be moved easily between different systems. But, as users, few of us have influence. Outside the open source community, which is truly wonderful, I can’t easily recall any standards initiative funded by ordinary users. Rather, the influence we and other users have is often doubly indirect: filtered through those who pay for the tools we use, and through those who develop and sell those tools.

The European Commission, then? Maybe. We do have the ICT Standardisation Work Programme (ICTSWP), sponsored by DG Enterprise and Industry. I’m grateful that they are sponsoring the work I am doing for InLOC, though isn’t the general situation a bit like JISC? It is all down to which priorities happen to be on the agenda (of the EC this time), and the EC is rather less open to influence than JISC. Whether an official turns up to a CEN Workshop seems to depend on the priorities of that official. André Richier (the official named in “our” bit of the ICTSWP) often turns up to the Workshop on ICT Skills, but rarely to our Workshop. In any case they are not the ultimate customers.

What are the actual interests of the EC? Mobility, evidently. There has been so much European funding over the years with the term “mobility” attached. Indeed, the InLOC work is seen as part of the WS-LT’s work on European Learner Mobility. Apart from mobility, the EC must have some general interest in the wellbeing of the European economy as a whole, but this is surely difficult where the interests of different nations diverge. More of this later.

In the end, many people don’t turn up, for all these reasons. They don’t turn up at the WS-LT; they don’t turn out in any real strength for the related BSI committee, IST/43; few of the kinds of customer I’m thinking about even turn up at ISO SC36.

Who does turn up then? They are great people. They are genuinely enthusiastic about standardisation, and have many bright ideas. They are mostly in academia, small (often one-person) consultancy, projects, networks or consortia. They like European, national, or any funding for developing their often genuinely good ideas. Aren’t so many of us like that? But there were not even many of us at this WS-LT meeting in Berlin. And maybe that is how it goes – when starved of the direct stimulus of the people we are doing this for, we risk losing our way, and the focus, enthusiasm and energy dwindle, even within our idealistic camp.

Before I leave our esteemed attendees, however, I would like to point out the most promising bodies that were represented at the WS-LT meeting: KION from Italy and the University of Oslo’s USIT, both members of RS3G, the Rome Student Systems and Standards Group, an association of software providers. They are very welcome and appropriate partners with the WS-LT.

Which brings me back to the question, where are the other (real) customers? We could ask the same thing of IST/43, and of ISO SC36. Which directly interested parties might pay? Perhaps a good place to start the analysis is to divide the candidates roughly between private and public sectors.

My guess here is that private sector led standardisation works best in the classic kinds of situation. What would be the point of a manufacturer developing their own range of electrical plugs and sockets? Even with telephones, there are huge advantages in having a system where everyone can dial everyone else, and indeed where all handsets work everywhere (well, nearly…). But the systems we are working with are not in that situation. There are reasons for these vendors to want to try their own new non-standard things. And much of what we do leads, more than follows, implementation. That ground sometimes seems a bit shaky.

Private sector interest in skills and competence is focused in the general areas of personnel, recruitment, HR, and training. Perhaps, for many businesses, the issues are not seen as complex enough to merit the involvement of standards.

So what are the real benefits that we see from learning technology standardisation, and put across to our customers? Surely these include better, more effective as well as efficient education; in the area of skills and competence, easier transition between education and work; and tools to help with professional and vocational development. These relate to classic areas of direct interest from government, because all governments want a highly skilled, competent, professional work force, able to “compete” in the global(ised) economy, and to upskill themselves as needed. The foundations of these goals are laid in traditional education, but they go a long way beyond the responsibilities of schools, HEIs, and traditional government departments of education. Confirmation of the blurring of boundaries comes from recalling that the EC’s ICTSWP is sponsored not by DG Education and Culture, but DG Enterprise and Industry.

My conclusion? Government departments need our help in seeing the relevance of learning technology standardisation, across traditional departmental boundaries. This is not a new message. What I am adding to it is that I think national government departments and their agencies are our stakeholders, indeed our customers, and that we need to be encouraging them to come along to the WS-LT. We need to persuade them that different countries do share an interest in learning technology standardisation. This would best happen alongside their better involvement in national standards bodies, which is another story, another hill to climb…

ICT Skills

Several of us in CETIS have been to the CEN Workshop on Learning Technologies (WS-LT), but as far as I know none of us had, until now, been to the closely related Workshop on ICT Skills. That Workshop’s main claim to fame is the European e-Competence Framework (e-CF), a simpler alternative to SFIA (developed by the BCS and partners). The meeting I attended was interesting on several counts, and raises some questions we could all give an opinion on.

The meeting was on 2011-12-12 at the CEN meeting rooms in Brussels. I was there on two counts: first as a CETIS and BSI member of CEN WS-LT and TC 353, and second as the team leader of InLOC, which has the e-CF mentioned in its terms of reference. There was a good attendance of 35 people, just a few of whom I had met before. Some members are ICT employers, but more are either self-employed or from various organisations with an interest in ICT skills, and in particular, CEPIS (not to be confused with CETIS!) of which the BCS is a member. A surprising number of Workshop members are Irish, including the chair, Dudley Dolan.

The WS-LT and TC353 think a closer relationship with the WS ICT Skills would be of mutual benefit, and I personally agree. ICT skills are a vital component of just about any HE skills programme, essential as they are for the great majority of graduate jobs. As well as the e-CF, which is to do with competences used in ICT professions, the WS ICT Skills has recently started a project to agree a framework of key skills for ICT users. So for the WS-LT there is an easy starting point, to which we can offer to apply various generic approaches to modelling and interoperability. The strengths of the two workshops are complementary: the WS-LT is strong in the breadth of generalities about metadata, theory, interoperability; the WS ICT Skills is strong in depth, about practice in the field of ICT.

The meeting revealed that the two workshops share several concerns. Both need to manage their CWAs, withdrawing outdated ones; both are concerned about the length and occasional opaqueness of the procedure to fund standardisation expert team work. Both are concerned with the availability and findability of their CWAs. André Richier is interested in both Workshops, though more involved in the WS ICT Skills. Both are concerned, in their own different ways, with the move through education and into employment. Both are concerned with creating CWAs and ENs (European “Norm” Standards), though the WS-LT is further ahead on this front, having prompted the formation of CEN TC353 a few years ago, to deal with the EN business. The WS ICT Skills doesn’t have a TC, and it is discussing whether to attempt ENs without a TC, or to start their own TC, or to make use of the existing TC353.

On the other hand, the WS ICT Skills seems to be ahead in terms of membership involvement. They charge money for voting membership, and draw in big business interest, as well as small. Would the WS-LT (counterintuitively perhaps) draw in a larger membership if it charged fees?

I was lucky to have a chance (in a very full agenda) to introduce the WS-LT and the InLOC project. I mentioned some of the points above, and pointed out how relevant InLOC is to ICT skills, with many links including shared experts. While understanding is still being built up between the two workshops, it was worth stressing that nothing in InLOC is sector-specific; that we will not be developing any learning outcome or competence content; and that, far from being in any way competitive, we are perfectly set up for collaboration with the WS ICT Skills, and the e-CF.

Work on e-CF version 3 is expected to be approved very soon, and there is a great opportunity there to try to ensure that the InLOC structures are suited to representing the e-CF, and that any useful insights from InLOC are worked into the e-CF. The e-CF work is ably led by Jutta Breyer who runs her own consultancy. Another project of great interest to InLOC is their work on “end user” ICT skills (the e-CF deals with professional competences), led by Neil Farren of the ECDL Foundation. The term “end user” caused some comment and will probably not feature in the final outputs of this project! Their project is a mere month or so ahead of InLOC in time. In particular, they envisage developing some kind of “framework shell”, and to me it is vital that this coordinates well with the InLOC outputs, as a generalisation-specialisation.

Another interesting piece of work is looking at ICT job profiles. The question of how a job profile relates to competence definitions is something that needs clarifying and documenting within the InLOC guidelines, and again, the closer we can coordinate this, the better for both of us.

Finally, should there be an EN for the e-CF? It is a tricky question. Sector Skills Councils in the UK find it hard enough to write National Occupational Standards for the UK – would it be possible to reach agreement across Europe? What would it mean for SFIA? If SFIA saw it as a threat, it would be likely to weigh in strongly against such a move. Instead, would it be possible to persuade SFIA to accept a suitably adapted e-CF as a kind of SFIA “Lite”? Some of us believe that would help, rather than conflict with, SFIA itself. Or could there be an EN, not rigidly standardising the descriptions of “e-Competences”, but rather giving an indication of how such frameworks should be expressed, with guidelines on ICT skills and competences in particular?

Here, above all, there is room for detailed discussion between the Workshops, and between InLOC and the ongoing ICT Skills Workshop teams, to achieve something that is really credible, coherent and useful to interested stakeholders.

Badges – another take

Badges can be seen as recognisable tokens of status or achievement. But tokens don’t work in a vacuum: they depend on other things to make them work. Perhaps looking at these may help us understand how they might be used, both for portfolios and elsewhere.

Rowin wrote a useful post a few weeks ago, and the topic has retained a buzz. Taking this forward, I’d like to discuss specifically the aspects of badges — and indeed any other certificate — relevant both to portfolio tools and to competence definitions. Because the focus here is on badges, I’ll use the term “badge” occasionally to include what is normally thought of as a certificate.

A badge, by being worn, expresses a claim to something. Some real badges may express the proposition that the wearer is a member of some organisation or club. Anyone can wear an “old school tie”, but how does one judge the truth of the claim to belong to a particular alumni group? Much upset can be caused by the misleading wearing of medals, in the same way as badges.

Badges could often do with a clarification of what is being claimed. (That would be a “better than reality” feature.) Is my wearing a medal a statement that I have been awarded it, or is it just in honour of the dead relative who earned it? Did I earn this badge on my own, was I helped towards it, or am I just wearing it because it looks “cool”? An electronic badge, e.g. on a profile or e-portfolio, can easily link to an explicit claim page including a statement of who was awarded this badge, and when, beyond information about what the badge is awarded for. These days, a physical badge could have a QR code so that people can scan it and be taken to the same claim page.

If the claim is, for example, simply to “be” a particular way, or to adhere to some opinion, or perhaps to support some team (in each case where the natural evidence is just what the wearer says), then probably no more is needed. But most badges, at least those worn with pride, represent something more than that the wearer self-certifies something. Usually, they represent something like a status awarded by some other authority than the wearer, and to be worth wearing, they show something that the wearer has, but might not have had, which is of some significance to the intended observers.

If a badge represents a valued status, then clearly badges may be worn misleadingly. To counter that, there will need to be some system of verification, through which an observer can check on the validity of the implied claim to that status. Fortunately, this is much easier to arrange with an electronic badge than a physical one. Physical badges really need some kind of regulatory social system around them, often largely informal, that deters people from wearing misleading badges. If there is no such social system, we are less in the territory of badges, and more of certificates, where the issues are relatively well known.

When do you wear physical badges? When I do, it is usually a conference, visitor or staff badge. Smart badges can be “swiped” in some way, and that could, for instance, lead to a web page on the authority’s web site with a photo of the person. That would be a pretty good quick check that would be difficult to fake effectively. “Swiping” can these days be magnetic, RFID, or QR code.

My suggestion for electronic badges is that the token badge links directly to a claim page. The claim page ideally holds the relevant information in a form that is both machine processable and human readable. But, as a portfolio is typically under the control of the individual, mere portfolio pages cannot easily provide any official confirmation. The way to do this within a user-controlled portfolio would be with some kind of electronic signature. But probably much more effective in the long term is for the portfolio claim page to refer to other information held by the awarding authority. This page can be either public or restricted, and could hold varying amounts of information about the person as well as the badge claim.

Here are some first ideas of information that could relate to a badge (or indeed any certificate):

  • what is claimed (competence, membership, permission, values, etc.);
  • identity of the person claiming;
  • what authority is responsible for validating the claim and awarding;
  • when and on what grounds the award was made;
  • how and when any assessment process was done;
  • assurance that the qualifying performance was not by someone else.

But that’s only a quick attempt. A much slower attempt would be helpful.

It’s important to be able to separate out these components. The “what is claimed” part is very closely related to learning outcome and competence definitions, the subject of the InLOC work. All the assessment and validation information is separable, and the information models (along with any interoperability specifications) should be created separately.
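To make this separability concrete, here is a minimal sketch, in Python, of what the machine-readable content of a claim page might hold. Every field name and URI here is my own invention for illustration; none of it is drawn from any actual badge or certificate specification.

```python
# A minimal, hypothetical sketch of machine-readable badge claim data.
# All field names and URIs are invented for illustration only.

badge_claim = {
    # What is claimed: a reference to a separately defined competence
    # (or membership, permission, ...) definition.
    "claim": {
        "type": "competence",
        "definition": "http://example.org/defs/first-aid-at-work",
    },
    # The identity of the person claiming.
    "claimant": "http://example.org/people/jane-doe",
    # Award information, separable from the claim itself.
    "award": {
        "authority": "http://example.org/orgs/awarding-body",
        "date": "2012-01-15",
        "grounds": "passed practical assessment",
    },
    # Assessment information, again separable.
    "assessment": {
        "method": "observed practical test",
        "date": "2012-01-10",
        "identity_assured": True,  # the qualifying performance was the claimant's own
    },
}

# An observer's software could follow the awarding authority's URI to
# verify the claim, rather than trusting the portfolio page alone.
```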

Competence and values can be defined independently of any organisation — they attach just to an individual. This is different from membership, permission, and the like, that are essentially tied to systems and organisations, and not as such transferable.

The future of Leap2A?

We’ve done a great job with Leap2A in terms of providing a workable starting point for interoperability of e-portfolio systems and portability of learner-ownable information, but what are the next steps we (and JISC) should be taking? That’s what we need to think about.

The role of CETIS was only to co-ordinate this work. The ones to take the real credit are the vendors and developers of e-portfolio and related systems, who worked well together to make the decisions about what Leap2A should be: a representation of all the information that is seen as sharable between actual e-portfolio tools, allowing it to be communicated between different systems.

The current limitations come from the lack of coherent practice in personal and professional development, indeed in all the areas that e-portfolio and related tools are used for. Where one institution supports activities that are simply different from those supported by another institution, there is no magic wand that can be waved over the information related to one activity to turn it into a form that supports a fundamentally different one. We need coherent practice. Not identical practice, by any means, but practice where it is as clear as possible what the building blocks of stored lifelong learning information are.

What we really need is for real users — learners — to be taking information between systems that they use or have used. We need to have motivating stories of how this opens up new possibilities; how it enables lifelong personal and professional development in ways that haven’t been open before. When learners start needing the interoperability, it will naturally be time to start looking again, and developing Leap2A to respond to the actual needs. We’ve broken the deadlock by providing a good initial basis, but now the baton passes to real practice, to take advantage of what we have created.

What will help this? Does it need convergence, not on individual development practice necessarily, but on the concepts behind it? Does it need tools to be better – and if so, what tools? Does it need changes in the ways institutions support PDP? In November, we held a meeting co-located with the annual residential seminar of the CRA, a body that has a long history of collaboration with CETIS in this area.

And how do we provide for the future of Leap2A more generally? Is it time to form a governing group of software developers who have implemented Leap2A? Is there any funding, or are there any initiatives, that can keep Leap2A fresh and increasingly relevant?

Please consider sharing your views, and contributing to the future of Leap2A.

Representing level relationships

(18th in my logic of competence series)

Having prepared the ground, I’m now going to address in more detail how levels of competence can best be represented, and the implications for the rest of representing competence structures. Levels can be represented similarly to other competence concept definitions, but they need different relationships.

I’ve written about how giving levels to competence reflects common usage, at least for competence concepts that are not entirely assessable, and that the labels commonly used for levels are not unique identifiers; about how defining levels of assessment fits into a competence structure; and lately about how defining levels is one approach to raising the assessability of competence concepts.

Later: shortly after first writing this, I put together the ideas on levels more coherently in a paper and a presentation for the COME-HR conference, Brussels.

Some new terms

Now, to take further this idea of raising the assessability of concepts, it would be useful to define some new terms to do with assessability. It would be really good to know if anyone else has thought along these lines, and how their thoughts compare.

First, may we define a binarily assessable concept, or “binary” for short, as a concept typically formulated as something that a person either has or does not have, and where there is substantial agreement between assessors over whether any particular person actually has or does not have it. My understanding is that the majority of concepts used in NOSs (National Occupational Standards) are intended to be of this type.

Second, may we define a rankably assessable concept, or “rankable” for short, as a concept typically formulated as something a person may have to varying degrees, and where there is substantial agreement between assessors over whether two people have a similar amount of it, or who has more. IQ might be a rather old-fashioned and out-of-favour example of this. Speed and accuracy of performing given tasks would be another very common example (and widely used in TV shows), though that would be more applicable to simpler skills than occupational competence. Sports have many scales of this kind. On the occupational front, a rankable might be a concept where “better” means “more additional abilities added on”, while still remaining the same basic concept. Many complex tasks have a competence scale, where people start off knowing about it and being able to follow someone doing it, then perform the tasks in safe environments under supervision, working towards independent ability and mastery. In effect, what is happening here is that additional abilities are being added to the core of necessary understanding.

Last, may we define an unorderly assessable concept, or “unordered” for short, as any concept that is not binary or rankable, but still assessable. For it to remain assessable despite possible disagreement about who is better, there at least has to be substantial agreement between assessors about the evidence which would be relevant to an assessment of the ability of a person in this area. In these cases, assessors would tend to agree about each other’s judgements, though they might not come up with the same points. Multi-faceted abilities would be good examples: take management competence. I don’t think there is just one single accepted scale of managerial ability, as different managers are better or worse at different aspects of management. Communication skills (with no detailed definition of what is meant) might be another good example. Any vague competence-related concept that is reasonably meaningful and coherent might fall into this category. But it would probably not include concepts such as “nice person”, where people would disagree even about what evidence would count in its support.
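For anyone who thinks better in code, here is a toy sketch, in Python, of the three proposed kinds of assessability; the example classifications are just the ones suggested in the text, and are of course open to argument.

```python
from enum import Enum

class Assessability(Enum):
    """The three kinds of assessability proposed above."""
    BINARY = "binarily assessable"      # has it or not; assessors substantially agree
    RANKABLE = "rankably assessable"    # more or less of it; assessors agree on ordering
    UNORDERED = "unorderly assessable"  # assessable, but with no agreed ordering

# Illustrative classifications, following the examples in the text.
examples = {
    "typical NOS outcome statement": Assessability.BINARY,
    "speed and accuracy at a given task": Assessability.RANKABLE,
    "management competence": Assessability.UNORDERED,
    "communication skills (with no detailed definition)": Assessability.UNORDERED,
}
```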

Defining level relationships

If you allow these new terms, definitions of level relationships can be more clearly expressed. The clearest and most obvious scenario is that levels can be defined as binaries related to rankables. Using an example from my previous post, success as a pop songwriter based on song sales/downloads is rankable, and we could define levels of success in that in terms of particular sales, hits in the top ten, etc. You could name the levels as you liked — for instance, “beginner songwriter”, “one hit songwriter”, “established songwriter”, “successful songwriter”, “top flight songwriter”. You would write the criteria for each level, and those criteria would be binary, allowing you to judge clearly which category would be attributed to a given songwriter. Of course, to recall, the inner logic of levels is that higher levels encompass lower levels. We could give the number 1 to beginner, up to number 5 for top flight.

To start formalising this, we would need an identifier for the “pop songwriter” ability, and then to create identifiers for each defined level. Part of a pop songwriter competence framework could be the definitions, along with their identifiers, and then a representation of the level relationships. Each level relationship, as defined in the framework, would have the unlevelled ability identifier, the level identifier, the level number and the level label.

If we were to make an information model of a level definition/relationship as an independent entity, it would include:

  • the fact that this is a level relationship;
  • the levelled, binary concept ID;
  • the framework ID;
  • the level number;
  • the unlevelled, rankable concept ID;
  • the level label.

If this is represented within a framework, the link to the containing framework is implicit, so might not show clearly. But the need for this should be clear if a level structure is represented separately.

As well as defining levels for a particular area like songwriting, it is possible similarly (as many actual level frameworks do) to define a set of generic levels that can apply to a range of rankable, or even unordered, concepts. This seems to me to be a good way of understanding what frameworks like the EQF do. Because there is no specific unlevelled concept in such a framework, we have to make inclusion of the unlevelled concept within the information model optional. The other thing that is optional is the level label. Many levels have labels as well as numbers, but not all. The number, however, though it is frequently left out of some level frameworks, is essential if the logic of ordering is to be present.
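As an informal sketch only (not a normative model), the level relationship entity described above might be written down like this in Python, with the two optional elements marked. The field names are my own, purely illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LevelRelationship:
    """One level relationship, as sketched above.

    The class itself carries the fact that this is a level relationship;
    the field names are illustrative, not from any published specification.
    """
    levelled_concept_id: str   # the levelled, binary concept (a URI)
    framework_id: str          # the containing framework (a URI)
    level_number: int          # essential: this carries the ordering
    unlevelled_concept_id: Optional[str] = None  # absent in generic frameworks
    level_label: Optional[str] = None            # optional, as discussed

    def display_label(self) -> str:
        # If no separate label is defined, the level number serves as the label.
        return self.level_label or str(self.level_number)
```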

Level attribution

A conviction that has been growing in me is that relationships for level attribution and level definition need to be treated separately. In this context, the word “attribution” suggests that a level is an attribute, either of a competence concept or of a person. It feels quite close to other sorts of categorisation.

Representing the attribution of levels is pretty straightforward. Whether levels are educational, professional, or developmental, they can be attributed to competence concepts, to individual claims and to requirements. Such an attribution can be expressed using the identifier of the competence concept, a relationship meaning “… is attributed the level …”, and an identifier for the level.

If we say that a certain well-defined and binarily assessable ability is at, say, EQF competence level 3, it is an aid to cross-referencing; an aid to locating that ability in comparison with other abilities that may be at the same or different levels.
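In code, an attribution could then be as simple as a subject, relationship, object triple. The URIs below are invented purely for illustration:

```python
# A hypothetical level attribution, as a simple triple:
# (competence concept, "is attributed the level", level identifier).
# All three URIs are invented for illustration.
attribution = (
    "http://example.org/defs/bookkeeping",          # a concept, claim, or requirement
    "http://example.org/rel/isAttributedTheLevel",  # the attribution relationship
    "http://example.eu/EQF/competence-level#3",     # an identifier for the level
)
```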

A level can be attributed to:

  • a separate competence concept definition;
  • an ability item claimed by an individual;
  • an ability item required in a job specification;
  • a separate intended learning outcome for a course or course unit;
  • a whole course unit;
  • a whole qualification, though care needs to be exercised, as many qualifications have components at mixed levels.

An assessment can result in the assessor or awarding body attributing an ability level to an individual in a particular area. This means that, in their judgement, that individual’s ability in the area is well described by the level descriptors.

Combining generic levels with areas of skill or competence

Let’s look more closely at combining generic levels with general areas of skill or competence, in such a way that the combination is more assessable. A good example of this is associated with the Europass Language Passport (ELP) that I mentioned in post 4. The Council of Europe’s “Common European Framework of Reference for Languages” (CEFRL), embodied in the ELP, makes little sense without the addition of specific languages in which proficiency is assessed. Thus, the CEFRL’s “common reference levels” are not binarily assessable, just as “able to speak French” is not. The reference levels are designed to be independent of any particular language.

Thus, to represent a claim or a requirement for language proficiency, one needs both a language identifier and an identifier for the level. It would be very easy in practice to construct a URI identifier for each combination. The exact method of construction would need to be widely agreed, but as an example, we could define a URI for the CEFRL — e.g. http://example.eu/CEFRL/ — and then binary concept URIs expressing levels could be constructed something like this:

http://example.eu/CEFRL/language/mode/level#number

where “language” is replaced by the appropriate IETF language tag; “mode” is replaced by one of “listening”, “reading”, “spoken_interaction”, “spoken_production” or “writing” (or agreed equivalents, possibly in other languages); “level” is replaced by one of “basic_user”, “independent_user”, “proficient_user”, “A1”, “A2”, “B1”, “B2”, “C1”, “C2”; and “number” is replaced by, say, 10, 20, 30, 40, 50 or 60, corresponding to A1 through to C2. (These numbers are not part of the CEFRL, but are needed for the formalisation proposed here.) A web service would be arranged where putting the URI into a browser (making an HTTP request) would return a page with a description of the level and the language, plus other appropriate machine-readable metadata, including links to components that are not binarily assessable in themselves. “Italian reading B1” could be a short description, generated by formula, not separately, and a long description could also be generated automatically, combining the descriptions of the language, reading, and the level criteria.
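Here is a small sketch, in Python, of how such combination URIs and formulaic descriptions might be generated, following the example pattern above. The base URI is the same placeholder used in the text, and the numbers, as noted, are not part of the CEFRL.

```python
# Sketch of the URI construction described above; illustrative only.
BASE = "http://example.eu/CEFRL"
LEVEL_NUMBERS = {"A1": 10, "A2": 20, "B1": 30, "B2": 40, "C1": 50, "C2": 60}

def cefrl_uri(language: str, mode: str, level: str) -> str:
    """Build a combination URI for one language, mode and level."""
    return f"{BASE}/{language}/{mode}/{level}#{LEVEL_NUMBERS[level]}"

def short_description(language_name: str, mode: str, level: str) -> str:
    """Generate a short description by formula, not separately."""
    return f"{language_name} {mode} {level}"

print(cefrl_uri("it", "reading", "B1"))
# http://example.eu/CEFRL/it/reading/B1#30
print(short_description("Italian", "reading", "B1"))
# Italian reading B1
```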

In principle, a similar approach could be taken for any other level system. The defining authority would define URIs for all separate binarily assessable abilities, and publish a full structure expressing how each one relates to each other one. Short descriptions of the combinations could simply combine the titles or short descriptions from each component. No new information is needed to combine a generic level with a specific area. With a new URI to represent the combination, a request for information about that combination can return information already available elsewhere about the generic level and the specific area. If a new URI for the combination is not defined, it is not possible to represent the combination formally. What one can do instead is to note a claim or a requirement for the generic level, and give the particular area in the description. This seems like a reasonable fall-back position.

Relating levels to optionality

Optionality was one of the less obvious features discussed previously, as it does not occur in NOSs. It’s informative to consider how optionality relates to levels.

I’m not certain about this, but I think we would want to say that if a definition has optional parts, it is not likely to be binarily assessable, and that levelled concepts are normally binarily assessable. A definition with optional parts is more likely to be rankable than binary, and it could even fail to be rankably assessable, being merely unordered instead. So, on the whole, defining levels should surely reduce, and ideally eliminate, optionality: levelled concepts should ideally have no optionality, or at least less than the “parent” unlevelled concept.

Proposals

So, in conclusion, here are my proposals for representing levels, as level-defining relations; a worked sketch follows the list.

  1. Use of levels: use levels as one way of relating binarily assessable concepts to rankable ones.
  2. The framework: define a set of related levels together in a coherent framework. Give this framework a URI identifier of its own. The framework may or may not include definitions of the related unlevelled and levelled concepts.
  3. The unlevelled concept: where the levels are levels of some concept more general than the levels themselves, ensure that unlevelled concept has one URI. In a generic framework, it may not be present.
  4. The levels: represent each level as a competence concept in its own right, complete with short and long descriptions, and a URI as identifier.
  5. Level numbering: give each level a number, such that higher levels have higher numbers. Sometimes consecutive numbers from 0 or 1 will work, but if you think judgements of personal ability may lie in between the levels you define, you may want to choose numbers that make good sense to people who will use the levels.
  6. Level labels: if you are trying to represent levels where labels already exist in common usage, record these labels as part of the structured definition of the appropriate level. Sometimes these labels may look numeric, but (as with UK degree classes) the numbers may be the wrong way round, so they really are labels, not level numbers. Labels are optional: if a separate label is not defined, the level number is used as the label.
  7. The level relationships: represent these explicitly as part of the framework, either separately or within a hierarchical structure.
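To pull the proposals together, here is how the songwriter example from earlier might be set down, reusing the illustrative LevelRelationship sketch from above. All URIs are invented.

```python
# The songwriter example, following proposals 1 to 7; URIs are invented.
FRAMEWORK = "http://example.org/frameworks/pop-songwriter"  # proposal 2
UNLEVELLED = "http://example.org/defs/pop-songwriter"       # proposal 3

labels = ["beginner", "one hit", "established", "successful", "top flight"]

levels = [
    LevelRelationship(
        levelled_concept_id=f"{UNLEVELLED}/level/{n}",  # proposal 4: a URI per level
        framework_id=FRAMEWORK,
        level_number=n,                                 # proposal 5: ordered numbers
        unlevelled_concept_id=UNLEVELLED,
        level_label=f"{label} songwriter",              # proposal 6: common-usage labels
    )
    for n, label in enumerate(labels, start=1)          # proposal 7: explicit relationships
]
```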

Representing level definitions allows me to add to the diagram that last appeared at the bottom of post 14, showing the idea of what should be there to represent levels. The diagram includes defining level relationships, but not yet attributing levels (which is more like categorising in other ways).

[Figure: information model diagram, including levels]

Later, I’ll go back to the overall concept map to see how the ideas that I’ve been developing in recent months fit in to the whole, and change the picture somewhat. But first, some long-delayed extra thoughts on specificity, questions and answers related to competence.

The logic of competence assessability

(17th in my logic of competence series)

The discussion of NOSs in the previous post clearly raised the issue of assessability. Actually, assessment has been on the agenda right from the start of this series: claims and requirements are for someone “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.

“Assessability” isn’t a common word. I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn’t necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.

The contrast between outcomes and behaviours / procedures

One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.

How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written in a related standard. Moreover, I think that this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “means” and “ends” that crops up as a general philosophical topic. To illustrate this, I’ll try giving two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.

You could write outcome standards for a pop songwriter in terms of song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is perhaps clearer is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that give e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators, e.g. in the Guardian, are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”

The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)

Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don’t seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)

However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.

What about educational parallels for these, given that most school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.

Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics.

  1. Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts that are not standardised. What may be specified are the incidental constraints, or what to avoid.
  2. Assessment on procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to have little variability in component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, captains of ships.

Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone’s ends may be another’s means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.

Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn’t. It’s all too clear when a passenger aeroplane crashes.

But here’s a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved” I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kind of competences?

The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to representing. But maybe it’s no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. Under that reading, it would make little sense for a NOS to be subject to different levels, as it would imply that the level of competence for a particular role is unknown — and in that case it wouldn’t be a standard.

Assessing less assessable concepts

Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning) like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact) or in terms of a set of abilities that you consider make up the particular style of management you are interested in.

These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and what kinds of skills and competences we have practised, which might be useful in work or other situations. It looks to me as though they can be made more assessable in one of a few ways.

  1. Detailing assessable component parts of the concept, in the manner of NOSs.
  2. Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
  3. Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
  4. Using a generic level framework to supply assessable criteria to add to the concept.

Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don’t have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.

[added 2011-09-04] Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.
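To make this concrete, here is a minimal sketch in Python (all names my own invention, not any published schema) of how a generic level framework might supply assessable criteria for an otherwise vague concept. It also illustrates the note just above: the generic level definitions are templates, with nothing to assess until a concept fills them in.

    from dataclasses import dataclass

    @dataclass
    class GenericLevel:
        """One level in a generic framework: a template, not yet assessable."""
        number: int
        criterion_template: str  # e.g. "demonstrates {concept} under supervision"

    @dataclass
    class Concept:
        """A competence-related concept, possibly too vague to assess alone."""
        label: str  # e.g. "communication skills"

    def assessable_criterion(concept: Concept, level: GenericLevel) -> str:
        """Only the combination of concept and generic level yields a
        criterion that an assessor could actually work with."""
        return level.criterion_template.format(concept=concept.label)

    framework = [
        GenericLevel(1, "demonstrates {concept} under close supervision"),
        GenericLevel(2, "demonstrates {concept} independently in routine situations"),
        GenericLevel(3, "demonstrates {concept} in complex or novel situations"),
    ]

    for level in framework:
        print(level.number, assessable_criterion(Concept("communication skills"), level))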

Assessability and values in everyday life

Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with and use them, but what about for other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time interval and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But, what counts as mistakes depends on one’s values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably, others learn more personal and professional integrity. If someone’s values agree with mine, I am more likely to find them pleasant.

There’s a long discussion here, which I won’t go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client’s values are, and to go along with those to the extent that they are compatible with one’s own values. Where they aren’t, a skill in turning away work needs to be exercised in order to achieve personal integrity.

Conclusions

It’s all clearly a complex topic, more complex indeed than I had reckoned back last November. But I’d like to summarise what I take forward from this consideration of assessability.

  1. Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
  2. Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
  3. Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to prescribed pattern, but may not imply outcomes (though constraints may be able to be formulated as avoidance outcomes).
  4. In real life we want to allow realistic competence structures with any of these features.

In the next post, I’ll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will leave us also clearer about how to represent the less precise, non-assessable concepts.

The logic of National Occupational Standards

(16th in my logic of competence series)

I’ve mentioned NOSs (UK National Occupational Standards) many times in earlier posts in this series (3, 5, 6, 8, 9, 12, 14), but last week I was fortunate to visit a real SSC — Lantra — to talk to some very friendly and helpful people there and elsewhere, and to reflect further on the logic of NOSs.

One thing that became clear is that NOSs have specific uses, not exactly the same as some of the other competence-related concepts I’ve been writing about. Following this up, on the UKCES website I soon found the very helpful “Guide to Developing National Occupational Standards” (pdf) by Geoff Carroll and Trevor Boutall, written quite recently: March 2010. For brevity, I’ll refer to this as “the NOS Guide”.

The NOS Guide

I won’t review the whole NOS Guide, beyond saying that it is an invaluable guide to current thinking and practice around NOSs. But I will pick out a few things that are relevant: to my discussion of the logic of competence; to how to represent the particular features of NOS structures; and towards how we represent the kinds of competence-related structures that are not part of the NOS world.

The NOS Guide distinguishes occupational competence and skill. Its definitions aren’t watertight, but generally they are in keeping with the idea that a skill is something that is independent of its context, not necessarily in itself valuable, whereas an occupational competence in a “work function” involves applying skills (and knowledge). Occupational competence is “what it means to be competent in a work role” (page 7), and this seems close enough to my formulation “the ability to do what is required”, and to the corresponding EQF definitions. But this doesn’t help greatly in drawing a clear line between the two. What is considered a work function might depend not only on the particularities of the job itself, but also on the detail in which it has been analysed for defining a particular job role. In the end, while the distinction makes some sense, the dividing line still looks fairly arbitrary, which justifies my support for not making a distinction in representation. This seems confirmed also by the fact that, later, when the NOS Guide discusses Functional Analysis (more of which below), the competence/skill distinction is barely mentioned.

The NOS Guide advocates a common language for representing skill or occupational competence at any granularity, ideally involving one brief sentence, containing:

  1. at least one action verb;
  2. at least one object for the verb;
  3. optionally, an indication of context or conditions.

Some people (including M. David Merrill, and following him, Lester Gilbert) advocate detailed vocabularies for the component parts of this sentence. While one may doubt the practicality of ever compiling complete general vocabularies, perhaps we ought to allow at least for the possibility of representing verbs, objects and conditions distinctly, for any particular domain, represented in a domain ontology. If it were possible, this would help with:

  • ensuring consistency and comprehensibility;
  • search and cross-referencing;
  • revision.

But it makes sense not to make these structures mandatory, as most likely there are too many edge cases.
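For illustration, here is a sketch of that short-sentence structure in Python, with the verb, object and conditions held as distinct, searchable parts; the names are mine, not from the NOS Guide or any existing schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AbilityStatement:
        verb: str                         # at least one action verb
        obj: str                          # at least one object for the verb
        conditions: Optional[str] = None  # optional context or conditions

        def as_sentence(self) -> str:
            """Render the structured parts back into one brief sentence."""
            parts = [self.verb, self.obj]
            if self.conditions:
                parts.append(self.conditions)
            return " ".join(parts)

    # Keeping the parts distinct supports consistency checks, search and
    # cross-referencing, e.g. finding all statements with the same verb.
    stmt = AbilityStatement("maintain", "grassland", "in environmentally sensitive areas")
    print(stmt.as_sentence())

Keeping conditions optional, and the structure as a whole non-mandatory, reflects the edge cases just mentioned.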

The whole of Section 2 of the NOS Guide is devoted to what the authors refer to as “Functional Analysis”. This involves identifying a “Key Purpose”, the “Main Functions” that need to happen to achieve the Key Purpose, and subordinate to those, the possible NOSs that set out what needs to happen to achieve each main function. (What is referred to in the NOS Guide as “a NOS” has also previously been called a “Unit”, and for clarity I’ll refer to them as “NOS units”.) Each NOS unit in turn contains performance criteria, and necessary supporting “knowledge and understanding”. However, these layers are not rigid. Sometimes, a wide-reaching purpose may be analysed by more than one layer of functions, and sometimes a NOS unit is divided into elements.

It makes sense not to attempt to make absolute distinctions between the different layers. (See also my post #14.) For the purposes of representation, this implies that each competence concept definition is represented in the same way, whichever layer it might be seen as belonging to; layers are related through “broader” and “narrower” relationships between the competence concepts, but different bodies may distinguish different layers. In eCOTOOL particularly, I’ve come to call competence concept definitions, in any layer, “ability items” for short, and I’ll use this terminology from here on.
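As a sketch of what this uniform representation could look like, here is a fragment using the rdflib Python library, borrowing SKOS broader links; the URIs are invented for illustration, and this is just one plausible encoding, not a settled specification.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, SKOS

    g = Graph()

    purpose = URIRef("http://example.org/ability/key-purpose-1")
    function = URIRef("http://example.org/ability/main-function-1")
    nos_unit = URIRef("http://example.org/ability/nos-unit-1")

    # Each ability item is represented the same way, whatever its layer...
    for item, title in [(purpose, "Key purpose"),
                        (function, "Main function"),
                        (nos_unit, "A NOS unit")]:
        g.add((item, DCTERMS.title, Literal(title)))

    # ...and the layers are related only by broader/narrower links.
    g.add((function, SKOS.broader, purpose))
    g.add((nos_unit, SKOS.broader, function))

    print(g.serialize(format="turtle"))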

One particularly interesting section of the NOS Guide is its Section 2.9, where attention turns to the identification of NOS units themselves, as the component parts of the Main Functions. In view of the authority of this document, it is highly worthwhile studying what the Guide says about the nature of NOS units. Section 2.9 directly tackles the question of what size a NOS should be. Four relevant points are made, of which I’ll distinguish just two.

First, there is what we could call the criterion of individual activity. The Guide says: “NOS apply to the work of individuals. Each NOS should be written in such a way that it can be performed by an individual staff member.” I look at this from both directions, which give complementary views. When two aspects of a role may reasonably and justifiably be performed separately by separate individuals, there should be separate NOS units. Conversely, when two aspects of a role are practically always performed by the same person, they naturally belong within the same NOS unit.

Second, I’ve put together manageability and distinctness. The Guide says that, if too large, the “size of the resulting NOS … could result in a document that is quite large and probably not well received by the employers or staff members who will be using them”, and also that it matters “whether or not things are seen as distinct activities which involve different skills and knowledge sets.” These seem to me both to be to do with fitting the size of the NOS unit to human expectations and requirements. In the end, however, the size of NOS units is a matter of good practice, not formal constraint.

Section 3 of the NOS Guide deals with using existing NOS units, and given the good sense of reuse, it seems right to discuss this before detailing how to create your own. The relationship between the standards one is creating and existing NOS units could well be represented formally. Existing NOS units may be:

  • “imported” as is, with the permission of the originating body
  • “tailored”, that is modified slightly to suit the new context, but without any substantive change in what is covered (again, with permission)
  • used as the basis of a new NOS unit.

In the first two cases, the unit title remains the same; but in the third case, where the content changes, the unit title should change as well. Interestingly, there seems to be no formal way of stating that a new NOS unit is based on an existing one, but changed too much to be counted as “tailored”.
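If one did want to record such a relationship, one possibility (my suggestion, nothing the Guide specifies) would be a simple derivation link, for instance Dublin Core’s dcterms:source, again with invented URIs:

    from rdflib import Graph, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()
    new_unit = URIRef("http://example.org/nos/new-unit")
    original = URIRef("http://example.org/nos/original-unit")

    # "This NOS unit was developed from, but is not a tailoring of, the original."
    g.add((new_unit, DCTERMS.source, original))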

Section 4, on creating your own NOSs, is useful particularly from the point of view of formalising NOS structures. The “mandatory NOS components” are set out as:

  1. Unique Reference Number
  2. Title
  3. Overview
  4. Performance Criteria
  5. Knowledge and Understanding
  6. Technical Data

and I’ll briefly go over each of these here.

It would be so easy, in principle, to recast a Unique Reference Number as a URI! However, the UKCES has not yet mandated this, and no SSC seems to have taken it up either. (I’m hoping to persuade some.) If a URI was also given to the broader items (e.g. key purposes and main functions) then the road would be open to a “linked data” approach to representing the relationships between structural components.

Title is standard Dublin Core, while Overview maps reasonably to dcterms:description.

Performance criteria may be seen as the finest granularity ability items represented in a NOS, and are strictly parts of NOS units. They have the same short sentence structure as both NOS units and broader functions and purposes. In principle, each performance criterion could also have its own URI. A performance criterion could then be treated like other ability items, and further analysed, explained or described elsewhere. An issue for NOSs is that performance criteria are not identified separately, and therefore there is no way within a NOS structure to indicate similarity or overlap between performance criteria appearing in different NOS units, whether or not the wording is the same. On the other hand, if NOS structures could give URIs to the performance criteria, they could be reused, for example to suggest that evidence within one NOS unit would also provide useful evidence within a different NOS unit.
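Continuing the rdflib illustration above (invented URIs again), here is a sketch of what that could enable: the same performance criterion referenced from two NOS units, so that evidence assessed against it once counts towards both.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS

    g = Graph()

    criterion = URIRef("http://example.org/pc/communicate-hazards")
    g.add((criterion, DCTERMS.description,
           Literal("communicate identified hazards to those affected")))

    unit_a = URIRef("http://example.org/nos/unit-A")
    unit_b = URIRef("http://example.org/nos/unit-B")

    # Both units include the criterion by reference, instead of repeating
    # (and possibly letting diverge) the same wording in each unit.
    g.add((unit_a, DCTERMS.hasPart, criterion))
    g.add((unit_b, DCTERMS.hasPart, criterion))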

Performance criteria within NOS units need to be valid across a sector. Thus they must not embody methods, etc., that are fine for one typical employer but wrong for another. They must also be practically assessable. These are reasons for avoiding evaluative adverbs, like the Guide’s example “promptly”, which may be evaluated differently in different contexts. If there are going to be contextual differences, they need to be more clearly signalled by referring e.g. to written guidance that forms part of the knowledge required.

Knowledge and understanding are clearly different from performance criteria. Items of knowledge are set out like performance criteria, but separately, in their own section within a NOS unit. As hinted just above, factoring context-dependent knowledge out of a performance criterion can let a generalised criterion work even in places where there would otherwise be no common approach to assessment.

In principle, knowledge can be assessed, but the methods of assessment differ from those of performance criteria. Action verbs such as “state”, “recall”, “explain”, “choose” (on the basis of knowledge) might be introduced, but perhaps are not absolutely essential, in that a knowledge item may be assessed on the basis of various behaviours. Knowledge is then treated (by eCOTOOL and others) as another kind of ability item, alongside performance criteria. The different kinds of ability item may be distinguished — for example following the EQF, as knowledge, skills, and competence — but there are several possible categorisations.

The NOS Guide gives the following technical data as mandatory:

  1. the name of the standards-setting organisation
  2. the version number
  3. the date of approval of the current version
  4. the planned date of future review
  5. the validity of the NOS: “current”; “under revision”; “legacy”
  6. the status of the NOS: “original”; “imported”; “tailored”
  7. where the status is imported or tailored, the name of the originating organisation and the Unique Reference Number of the original NOS.

These could very easily be incorporated into a metadata schema. For imported and tailored NOS units, a way of referring to the original could be specified, so that web-based tools could immediately jump to the original for comparison. The NOS Guide goes on to give more optional parts, each of which could be included in a metadata schema as optional.
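To show how directly this could be done, here is a sketch of the mandatory technical data as a schema in Python; the field names are mine, and the only logic added is the Guide’s own condition on point 7.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum
    from typing import Optional

    class Validity(Enum):
        CURRENT = "current"
        UNDER_REVISION = "under revision"
        LEGACY = "legacy"

    class Status(Enum):
        ORIGINAL = "original"
        IMPORTED = "imported"
        TAILORED = "tailored"

    @dataclass
    class NosTechnicalData:
        standards_setting_organisation: str
        version: str
        approval_date: date
        planned_review_date: date
        validity: Validity
        status: Status
        # Mandatory only when the status is imported or tailored:
        originating_organisation: Optional[str] = None
        original_reference_number: Optional[str] = None

        def __post_init__(self) -> None:
            if self.status in (Status.IMPORTED, Status.TAILORED) and not (
                    self.originating_organisation and self.original_reference_number):
                raise ValueError("imported or tailored NOS must cite the original")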

Issues emerging from the NOS Guide

One of the things that is stressed in the NOS Guide (e.g. page 32) is that the Functional Analysis should result in components (main functions, at least) that are both necessary and sufficient. That’s quite a demand — is it realistic, or could it be characterised as reductionist?

Optionality

The issue of optionality has been covered in the previous post in this series. Clearly, if NOS structures are to be necessary and sufficient, logically there can be no optionality. It seems that, practically, the NOS approach avoids optionality in two complementary ways. Some options are personal ways of doing things, at levels more finely grained than NOS units. Explicitly, NOS units should be written to be inclusive of the diversity of practice: they should not prescribe particular behaviours that represent only some people’s ways of doing things. Other options involve broader granularity than the NOS unit. The NOS Guide implies this in the discussion of tailoring. It may be that one body wants to create a NOS unit that is similar to an existing one. But if the “demand” of the new version NOS unit is not the same as the original, it is a new NOS unit, not a tailored version of the original one.

The NOS Guide does not offer any way of formally documenting the relationship between variant ways of achieving the same aim, or function (other than, perhaps, simple reference). This may lead to some inefficiencies down the line, when people recognise that achieving one NOS unit is really good evidence for reaching the standard of a related NOS unit, but there is no general and automatic way of documenting that or taking it into account. We should, I suggest, be aiming at an overall structure, and strategy, that documents as many relationships as we can reliably represent. This suggests allowing for optionality in an overall scheme, but leaving it out for NOSs.

Levels and assessability

The other big issue is levels. The very idea of level is somehow anathema to the NOS view. A person either has achieved a NOS, and is competent in the area, or has not yet achieved that NOS. There is no provision for grades of achievement. Compare this with the whole of the academic world, where people almost always give marks and grades, comparing and ranking people’s performance. The vocational world does have levels — think of the EQF levels, which are intended for the vocational as well as the academic world — but often in the vocational world a higher level is seen as the addition of other separate skills or occupational competences, not as improving levels of the same ones.

A related idea came to me while writing this post. NOSs rightly and properly emphasise the need to be assessable — to have an effective standard, you must be able to tell if someone has reached the standard or not — though the assessment method doesn’t have to be specified in advance. But there are many vaguer competence-related concepts. Take “communication skills” as a common example. It is impossible to assess whether someone has communication skills in general, without giving a specification of just what skills are meant. Every wakeful person has some ability to communicate! But we frequently see cases where that kind of unassessably vague concept is used as a heading around which to gather evidence. It does make sense to ask a person about evidence for their “communication skills”, or to describe them, and then perhaps to assess whether these are adequate for a particular job or role.

But then, thinking about it, there is a correspondence here. A concept that is too vague to assess is just the kind of concept for which one might define (assessable) levels. And if a concept has various levels, it follows that whether a person has the (unlevelled) concept cannot be assessed in the binary way of “competent” and “not yet competent”. This explains why the NOS approach does not have levels, as levels would imply a concept that cannot be assessed in the required binary way. Rather than call unlevelled concepts “vague”, we could just call them something like “not properly assessable”, implying the need to add extra detail before the concept becomes assessable. That extra detail could be a whole level scheme, or simply a specification of a single-level standard (i.e. one that is simply reached or not yet reached).
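A tiny sketch may make the contrast plain. This is my own deliberately reductive framing, compressing assessment into a numeric score purely for illustration: a single-level standard supports a yes/no judgement, while a levelled scheme answers with a level, so the bare question “competent or not?” is ill-posed until a level is named.

    from typing import Dict

    def meets_standard(score: float, threshold: float) -> bool:
        """Single-level standard: reached, or not yet reached."""
        return score >= threshold

    def level_reached(score: float, levels: Dict[int, float]) -> int:
        """Levelled scheme: the answer is a level, not a yes or no.
        Returns 0 if no level has yet been reached."""
        return max((lvl for lvl, t in levels.items() if score >= t), default=0)

    print(meets_standard(72.0, threshold=70.0))              # True
    print(level_reached(72.0, {1: 40.0, 2: 60.0, 3: 80.0}))  # 2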

In conclusion, I cannot see a problem with specifying a representation for skill and competence structures that includes non-assessable concepts, along with levels as one way of detailing them. The “profile” for NOS use can still explicitly exclude them, if that is the preferred way forward.

Update 2011-08-22 and later

After talking further with Geoff Carroll I’ve clarified above that NOSs are to do specifically with occupational competence rather than, e.g., learning competence. And having been pushed into this particular can of worms, I’d better say more about assessability to get a clear run-up to levels.

E-portfolios and identity: more!

The one annual e-portfolio (and identity) conference that I attend reliably was this year co-sponsored by the CRA, alongside the principal organiser, EIfEL — London, 11th to 13th July. Though it wasn’t a big gathering, I felt it was somehow a notch up from last time.

Perhaps this was because it was just a little more grounded in practice, and this could have been the influence of the CRA. Largely gone were speculations about identity management and architecture; in their place came more of the idea of identity as something to be developed personally.

We heard from three real recent students, who have used their portfolio systems for their own benefit. Presumably they developed their identity? That’s not a representative sample, and of course these are the converted, not the rank and file dissatisfied or apathetic. A message that surprisingly came from them was that e-portfolio use should be compulsory, at least at some point during the student’s studies. That’s worth reflecting on.

And as well as some well-known faces (Helen, Shane, et al.) there were those, less familiar in these settings, of our critical-friendly Mark Stiles, and later Donald Clark (who had caused slight consternation by his provocative blog post, finding fault with the portfolio concept, and was invited to speak as a result). Interestingly, I didn’t think Donald’s presentation worked as well as his blog (it was based on the same material). In a blog, you can be deliberately provocative, let the objections come, and then gracefully give way to good counter-arguments. But in the conference there wasn’t time to do this, so people may have gone away thinking that he really held these ideas, which would be a pity. Next year we should be more creative about the way of handling that kind of contribution. Mark’s piece — may I call it a friendly Jeremiad? I do have a soft spot for Jeremiah! — seemed to go down much better. We don’t want learners themselves to be commodified, but we can engage with Mark through thinking of plausible ways of avoiding that fate.

Mark also offered some useful evidence for my view that learners’ interests are being systematically overlooked, and that people are aware of this. Just take your eye off the ball of learner-centricity for a moment, and — whoops! — your learner focus is sneakily transformed into a concern of the institution that wants to know all kinds of things about learners — probably not what the learners wanted at all. There is great depth and complexity to the challenge of being truly learner-focused or learner-centred.

One of the most interesting presentations was by Kristin Norris of IUPUI, looking at what the Americans call “civic identity” and “civic-mindedness”. This looks like a laudably ambitious programme for helping students to become responsible citizens, and seems related to our ethical portfolios paper of 2006 as well as the personal values part of my book.

Kristin knows about Perry and Kegan, so I was slightly surprised that I couldn’t detect any signs in the IUPUI programme of diagnosis of the developmental stage of individual students. I would have thought that what you do on a programme to develop students ethically should depend on the stage they have already arrived at. I’ll follow up on this with her.

So, something was being pointed to from many directions. It’s around the idea that we need richer models of the learner, the student, the person. And in particular, we need better models of learner motivation, so that we can really get under their (and our own) skins, so that the e-portfolio (or whatever) tools are things that they (and we) really want to use.

The problem of intrinsic motivation to use portfolio tools remains largely unsolved. We are faced again and again with the feedback that students don’t want to know about “personal development” or “portfolios” (unless they are creatives who know about these anyway), still less “reflection”! Yes, there are certainly some (counterexemplifying Donald Clark’s over-generalisation) who do want to reflect. Perhaps they are similar to those who spontaneously write diaries — some of the most organised among us. But not many.

This all brings up many questions that I would like to follow up, in no particular order.

  • How are we, then, to motivate learners (i.e. people) to engage in activities that we recognise as involving reflection or leading to personal development?
  • Could we put more effort into deepening and enriching the model we have of each of our learners?
  • Might some “graduate attributes” be about this kind of personal and ethical development?
  • Are we suffering from a kind of conspiracy of the social web, kidding people that they are actually integrated, when they are not?
  • Can we use portfolio-like tools to promote growth towards personal integrity?
  • “Go out and live!” we could say. “But as you do it, record things. Reflect on your feelings as well as your actions. Then, later, when you ‘come up for air’, you will have something really useful to reflect on.” But how on earth can we motivate that?
  • Should we be training young people to reflect as a habit, like personal hygiene habits?
  • Is critical friendship a possible motivator?

I’m left with the feeling that there’s something really exciting waiting to be grasped here, and the ePIC conference has everything going for it to grasp that opportunity. I wonder if, next year, we could

  • keep it as ePIC — e-portfolios and identity — a good combination
  • keep close involvement of the CRA and others interested in personal development
  • put more focus on the practice of personal-social identity development
  • discuss the tools that really support the development of personal social identity
  • talk about theories and architectures that support the tools and the development?