The key to competence frameworks

(27th in my logic of competence series.)

So here I am … continuing the thread of the logic of competence, nearly 7 years on. I’m delighted to see renewed interest from several quarters in the field of competence frameworks. There is work being done by the LRMI, and much potential interest from people working on various kinds of soft skills. And some kinds of “badges” – open credentials intended to be displayed and easily recognised – often rely on competence definitions for their award criteria.

I just have to say to everyone who explores this area: beware! There are two different kinds of thing that go by similar names: “competencies”; “competences”; “competence definitions”; “skills”; etc.

  1. There is one kind: statements of ability that people either measure up to or not. My favourite simple, understandable examples are things like “can juggle 5 balls for a minute without dropping any” or “can type at 120 words per minute from dictation making fewer than 10 mistakes”. But there are many less exact examples of similar things, which clearly either do or do not apply to individuals at a given time of testing. “Knows how to solve quadratic equations using the formula” or “can apply Pythagoras’ theorem to find the length of the third side of a right-angled triangle” might be two from mathematics. There are many more from the vocational world, but they would mean less to those outside the profession or occupation concerned.
  2. Then there is another kind, more of a statement indicating an ability or area of competence in which someone can be more or less proficient. Taking the examples above, these might be: “can juggle” or “juggling skills”; “can type” or “typing ability”; “knows about mathematics” or “mathematical ability”. There are vast numbers of these, because they are easier to construct than the other kind. “Can manage a small business”; “good communicator”; “can speak French”; “good at knitting”; “a good diplomat”; “programming”; “chess”; you think of your own.

What you can see quite plainly, on looking, is that with the first kind of statement, it is possible to say whether or not someone comes up to that standard; while with the second kind of phrase, either there is no standard defined, or the standard is too vague to judge whether someone “has” that ability — it’s more like, how much of that ability do you have?

In the past, I’ve called the first kind of wording a “binary” competence definition, and the second kind “rankable”. (Just search for “binary rankable” and you’ll get plenty.) But these are so unmemorable that even I forgot what I had called them. I’m looking for better names that people (myself included) can easily remember.

Woe betide anyone who mixes the two kinds without realising what they are doing! Woe betide also anyone who uses one kind only, and imagines that the other kind either doesn’t exist or doesn’t matter.

The world is full of lists of skills which people should have some of. “Communication skills”. “Empathy”. “Resilience”. Loads of them. And in most cases, these are just of the second kind. They do not define any particular level of the skill, and people are expected to produce evidence of how good they are at the given skill when asked.

In the vocational world of occupations and professions, however, we see very many well-defined statements that are of the first kind. This is to be expected, because to give someone a professional qualification requires that they are assessed as possessing skills to a certain, sufficient level.

The two kinds of statements are intimately related. Take any statement of the first kind. What would be better, or not so good? Juggling 3 balls for 30 seconds? Typing at 60 words per minute? These belong, respectively, as points on scales of juggling skill and typing ability. Thus, every statement of the first kind is a point on at least one scale. Conversely, every scale description, of the second kind, can, with sufficient insight, be detailed with positions on that scale, which will be statements of the first kind.

In the InLOC information model, these reciprocal relationships are given the identifiers hasDefinedLevel and isDefinedLevelOf. This is perhaps the most essential and vital pair of relationships in InLOC.
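To make that pair of relationships concrete, here is a minimal sketch, in Python, of how a rankable definition and one of its binary levels might be linked. The property names hasDefinedLevel and isDefinedLevelOf are InLOC’s; the identifiers and the dict-based representation are purely illustrative, not the official InLOC binding.

```python
# Illustrative sketch only: plain Python dicts standing in for InLOC concepts.
# The relationship names come from InLOC; the URIs are hypothetical.

juggling = {
    "id": "http://example.org/loc/juggling",   # hypothetical URI
    "kind": "rankable",
    "title": "Juggling skills",
}

juggling_level = {
    "id": "http://example.org/loc/juggling/5-balls-1-minute",
    "kind": "binary",
    "title": "Can juggle 5 balls for a minute without dropping any",
}

# The rankable definition points to its defined level ...
juggling["hasDefinedLevel"] = [juggling_level["id"]]
# ... and the binary definition points back to the scale it is a point on.
juggling_level["isDefinedLevelOf"] = [juggling["id"]]
```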

So what about competence frameworks? Well, a framework, whether explicitly or implicitly, is about relating these two kinds of statement together. It is about defining areas of ability that are important, perhaps to an activity or a role; and then also defining levels of those abilities at which people can be assessed. It’s only when these levels are defined that one has criteria, not only for passing exams or recruiting employees, but also for awarding badges. And the interest in badges has held this space open for the seven years I’ve been writing about the logic of competence. Thank you, those working with badges!

Now I’ve explained this again, could you help me by suggesting which pair of terms would, for you, describe the two kinds of statement better than “binary” and “rankable”? I’d be most grateful.

How do I go about doing InLOC?

(26th in my logic of competence series.)

It’s been three years now since the European expert team started work on InLOC, working out a good model for representing structures and frameworks of learning outcomes, skill and competence. As can be expected of forward-looking, provisional work, there has not yet been much take-up, but it’s all in place, and more timely now than ever.

Then yesterday I received a most welcome call from a training company involved in one particular sector, who are interested in using the principles of InLOC to help their LMS map course and module information to qualification frameworks. “Yes!” I enthusiastically replied.

What might help people in that situation is a simple, basic approach that sets you on the right path for doing things the InLOC way. I realised that this isn’t so easy to find in the main documentation, so here I set out this basic approach, which will reliably get anyone started on mapping anything to the InLOC model, with cross-references to the InLOC documentation.

One description of what to do is documented in the section How to follow InLOC, but, for all the reasons above, here I will try going back to basics and starting again, in the hope that describing the approach in a different way may be helpful.

LOC definitions

The most basic feature that occurs many many times in any published framework is called, by InLOC, a “LOC definition”. This is, simply, any concept, described by any form of words, that indicates an ability – whether it be knowledge, skill, competence or any other learning outcome – that can be attributed to an individual person, and in some way – any way – assessed. It’s hard to define more clearly or succinctly than that, and to get a better understanding you may want to look at examples.

In the documentation, the best place to start is probably the section on InLOC explained through example. In that section, a framework (the European e-Competence Framework, e-CF) is thoroughly analysed. You can see in Figure 2 how, for just one page of the documentation, each LOC definition has been picked out separately.

LOC definitions include at least these overlapping classes of concept:

  • anything that is listed as a learning outcome, a skill, a competency, an ability;
  • any separate parts of any learning outcomes;
  • anything that expresses an assessment criterion;
  • any level of any outcome, skill, competence, etc. (at any granularity);
  • a generic definition of what is required by a level.

Pieces of text that relate to the same concept – e.g. title and description of the same thing – are treated together. Everything that can be assessed separately is treated as a separate LOC definition. The grammatical structure of the text is of little importance. Often, though, in amongst the documentation, you read text that is not to do with abilities. Just pass over this for the moment.

One thing I’ve sometimes noticed is that some concepts which could have their own LOC definitions are implied, but not explicit, in the documentation. In yesterday’s discussion, one example was the levels of the unit as a whole. Assessment criteria are often specified for different levels of particular abilities, but the level of the unit as a whole is only implied.

The first step, then, is to look for all the LOC definitions in your documentation, and any implied ones that are not explicitly documented. ANY piece of text that represents something that could potentially be assessed as an outcome of learning is most likely a LOC definition.
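As a rough illustration of this first step, here is a sketch in Python of what a handful of isolated LOC definitions might look like once picked out of the documentation, including one that was only implied. The identifiers and wording are hypothetical, not taken from any real framework.

```python
# Each separately assessable concept becomes its own record.
# All identifiers and titles below are hypothetical illustrations.

loc_definitions = [
    {"id": "ex:unit-1",        "title": "Manage a small business"},
    {"id": "ex:unit-1-ac1",    "title": "Prepares an annual budget"},
    {"id": "ex:unit-1-ac2",    "title": "Keeps accurate financial records"},
    # Implied but not explicitly documented: a level of the unit as a whole.
    {"id": "ex:unit-1-level2", "title": "Manage a small business, level 2"},
]
```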

Binary and rankable

If you’ve looked through the documentation, you’ve probably come across this distinction, and it is very helpful if you are going to structure something in the InLOC way. But when I was writing the documentation, I don’t think I had grasped quite how central it is. It is so central that more recently I have come to present it as a vital first concept to grasp. Very recently I quickly put together a slide deck about this, now on Slideshare, under the title Distinguishing binary and rankable definitions is key to structuring competence frameworks.

I first publicly clarified this distinction in a blog post before InLOC even started: Representing level relationships; and more recently mentioned it in InLOC and OpenBadges: a reprise.

In essence: a binary learning outcome or competence (LOC) concept is one where it makes sense to ask, have you reached this level or standard? Are you as good as this? The answer gives a binary distinction between “yes”, for those people who have reached the level, and “not yet” for those who have not. The example I give in the recent slide deck is “can touch type in English at 60 wpm with fewer than 1 mistake per hundred words”. The answer is clearly yes or no. Or, “can juggle with three juggling balls for a minute or longer” (which I can’t yet).

On the other hand, a rankable concept is one where there is no clear binary criterion, but instead you can rank people in order of their ability in that concept. A rankable concept related to the previous binary one would simply be “touch typing” or “can touch type”. A good question for juggling would be “how well can you juggle?” You may want to analyse this more finely, and distinguish different independent dimensions of juggling ability, but more probably I guess you would be content to roughly rank people in order of a general juggling ability.

The second step is to look at all the LOC definitions you have isolated, and judge whether they are binary or (at least roughly) rankable.
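A sketch of that second step might look like this, again in illustrative Python; the classifications below are my own rough judgements on the examples just given, not definitive labels.

```python
# Tag each isolated LOC definition as "binary" (a yes / not-yet criterion)
# or "rankable" (people can only be put in rough order of ability).

kinds = {
    "can touch type": "rankable",
    "can touch type in English at 60 wpm, fewer than 1 mistake per 100 words": "binary",
    "juggling": "rankable",
    "can juggle with three juggling balls for a minute or longer": "binary",
}
```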

Relating LOC definitions together

The third step is to relate all the LOC definitions you found to each other. It is commonplace that frameworks have a structure that is often hierarchical. An ability at a “high” level (of granularity) involves many abilities at “lower” levels. The simplest way of representing that is that the wider definition “has parts”, which are the narrower definitions, perhaps the products of “functional analysis” of the wider definition. InLOC allows you to relate definitions in this way, using the relationship “hasLOCpart”.

But InLOC also allows several other relationships between LOC definitions. These can be seen in the three tables on the relationships page in the documentation. To see how the relationships themselves are related, look at the third table, “ontology”. The tables together give you a clear and powerful vocabulary for describing relationships between LOC definitions. Naturally, it has been carefully thought through, and is a vital part of InLOC as a whole.

Very simple structures can be described using only the “hasLOCpart” relationship. However, when you have levels, you will need at least the “hasDefinedLevel” relationship as well. Broadly speaking, it will be a rankable LOC definition that has a binary definition as a defined level (“hasDefinedLevel”). Find these connections in particular!

For the other relationships, decide whether “hasLOCpart” is a good enough representation, or whether you need “hasNecessaryPart”, “hasOptionalPart” or “hasExample”. Each of these has a different meaning in the real world. Mostly, you will probably find that rankable definitions have rankable parts, and binary definitions have binary parts.
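By way of illustration, the relationships from this third step could be jotted down as simple triples, something like the sketch below (Python; the relationship names are the ones InLOC defines, while the definitions themselves are invented for the example).

```python
# (subject, relationship, object) triples between LOC definitions.
# Relationship names are InLOC's; the definitions are hypothetical.

relationships = [
    ("ex:communication", "hasLOCpart",       "ex:listening"),
    ("ex:communication", "hasNecessaryPart", "ex:clear-writing"),
    ("ex:communication", "hasExample",       "ex:giving-a-presentation"),
    # A rankable definition with a binary definition as one of its levels:
    ("ex:listening",         "hasDefinedLevel",  "ex:listening-level-2"),
    ("ex:listening-level-2", "isDefinedLevelOf", "ex:listening"),
]
```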

There is more related discussion in another of the blog posts from my “logic of competence” series, More and less specificity in competence definitions.

Putting together the LOC structure

In InLOC, a “LOC structure” is the collection of LOC definitions along with the relationships between them. Relationships between LOC definitions are only defined in LOC structures. This is to allow LOC definitions to appear in different structures, potentially with different relationships. You may think you know what comprises, for example, communication skills, but other people may have different opinions, and classify things differently.

A LOC structure often corresponds to a complete documented scheme of learning outcomes, and often has a name which is clearly not something that is a LOC definition, as described previously. You can’t assess how good someone is at “the European e-Competence Framework” (the e-CF), unless you mean knowledge of that framework, but you can assess how good people are at its component parts, the LOC definitions (for rankable ones), or whether they reach the defined levels (for binary ones).

And the e-CF, analysed in detail in the InLOC documentation, is a good example where you can trace the structure down in two ways: either by topic, and then by levels; or by level, and then by the levelled (binary) topic definitions that are part of those levels.

Your aim is to document all the relationships between LOC definitions that are relevant to your application, and wrap those up with other related information in a LOC structure.
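Putting the pieces together, a LOC structure might be sketched like this. It is just an illustration in Python: InLOC itself defines the normative element names and serialisations, and every identifier and title here is hypothetical.

```python
# A LOC structure wraps the definitions together with the relationships
# between them.

loc_structure = {
    "id": "http://example.org/loc/structures/communication",
    "title": "Communication skills framework (illustrative)",
    "definitions": [
        {"id": "ex:communication",     "kind": "rankable", "title": "Communication skills"},
        {"id": "ex:listening",         "kind": "rankable", "title": "Listening"},
        {"id": "ex:listening-level-2", "kind": "binary",   "title": "Listening, level 2"},
    ],
    "relationships": [
        ("ex:communication", "hasLOCpart",      "ex:listening"),
        ("ex:listening",     "hasDefinedLevel", "ex:listening-level-2"),
    ],
}
```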

What you will have gained

The task of creating an InLOC structure is more than simply creating a file that can potentially be transmitted between web applications, and related to, or referred to by, other structures that you are dealing with. It is also an exercise that can reveal more about the structure of the framework than was explicitly written into it. Often one finds oneself making explicit the relationships that are documented only implicitly, in terms of page and table layout. Often one fills in LOC definitions that have been left out. Whichever way you do it, you will be left with firmer, more principled structures on which to build your web applications.

We expect that sooner or later InLOC will be adopted as at least the basis of a model underlying interoperable and portable representations of frameworks of learning outcomes, skills, competences, abilities, and related knowledge structures. Much of the work has been done, but it may need revising in the light of future developments.

What is there to learn about standardization?

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular where there is a body of knowledge and experience of many kinds of standardization organisations and processes, as well as approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

But how can we share and pass on that expertise? This question has arisen from time to time during the 12 years I’ve been associated with Cetis, including the last six working from our base in the IEC in Bolton. While Jisc were employing us to run Special Interest Groups, meetings, and conferences, and to support their project work, that at least gave us some scope for sharing. The SIGs are sadly long gone, but what about other ways of sharing? What about running some kind of courses? To run courses, we have to address the question of what people might want to learn in our areas of expertise. A related question: how can we assemble a structured summary even of what we ourselves have learned about this rich and challenging area?

These are my own views about what I sense I have learned and could pass on; but also about the topics where I would think it worthwhile to know more. All of these views are in the context of open standards in learning technology and related areas.

How are standards developed?

A formal answer for formal standards is straightforward enough. But this is only part of the picture. Standards can start life in many ways, from the work of one individual inventing a good way of doing something, through to a large corporation wanting to impose its practice on the rest of the world. It is perhaps more significant to ask …

How do people come up with good and useful standards?

The more one is involved in standardization, the richer and more subtle one’s answer to this becomes. There isn’t one “most effective” process, nor one formula for developing a good standard. But in Cetis, we have developed a keen sense of what is more likely to result in something that is useful. It includes the close involvement of the people who are going to implement the standard – perhaps software developers. Often it is a good idea to develop the specification for a standard hand in hand with its implementation. But there are many other subtleties which could be brought out here. This also raises a question …

What makes a good and useful standard?

What one comes to recognise with time and experience is that the most effective standards are relatively simple and focused. The more complex a standard is, the less flexible it tends to be. It might be well suited to the precise conditions under which it was developed, but those conditions often change.

There is much research to do on this question, and people in Cetis would provide an excellent knowledge base for this, in the learning technology domain.

What characteristics of people are useful for developing good standards?

Most likely anyone who has been involved in standardization processes will be aware of some people whose contribution is really helpful, and others who seem not to help so much. Standardization works effectively as a consensus process, not as a kind of battle for dominance. So the personal characteristics of people who are effective at standardization are similar to those of people who are good at consensus processes more widely. Obviously, the group of people involved must have a good technical knowledge of their domain, but deep technical knowledge is not always allied to an attitude that is consistent with consensus process.

Can we train, or otherwise develop, these useful characteristics?

One question that really interests me is, to what extent can consensus-friendly attitudes be trained or developed in people? It would be regrettable if part of the answer to good standardization process were simply to exclude unhelpful people. But if this is not to happen, those people would need to be open to changing their attitudes, and we would have to find ways of helping them develop. We might best see this as a kind of “enculturation”, and use sociological knowledge to help understand how it can be done.

After answering that question, we would move on to the more challenging “how can these characteristics be developed?”

How can standardization be most effectively managed?

We don’t have all the answers here. But we do have much experience of the different organisations and processes that have brought out interoperability standards and specifications. Some formal standardization bodies adopt processes that are not open, and we find this quite unhelpful to the management of standardization in our area. Bodies vary in how much they insist that implementation goes hand in hand with specification development.

The people who can give most to a standardization process are often highly valued and short of time. Conversely, those who hinder it most, including the most opinionated, often seem to have plenty of time to spare. To manage the standardization process effectively, this variety of people needs to be allowed for. Ideally, this would involve the training in consensus working, as imagined above, but until then, sensitive handling of those people needs considerable skill. A supplementary question would be, how does one train people to handle others well?

If people are competent at consensus working, the governance of standardization is less important. Before then, the exact mechanisms for decision making and influence, formal and informal, are significant. This means that the governance of standards organisations is on the agenda for what there is to learn. There is still much to learn here, through suitable research, about how different governance structures affect the standardization process and its outcomes.

Once developed, how are standards best managed?

Many of us have seen the development of a specification or standard, only for it never really to take hold. Other standards are overtaken by events, and lose ground. This is not always a bad thing, of course – it is quite proper for one standard to be displaced by a better one. But sometimes people are not aware of a useful standard at the right time. So, standards not only need keeping up to date, but they may also need to be continually promoted.

As well as promotion, there is the more straightforward maintenance and development. Web sites with information about the standard need maintaining, and there is often the possibility of small enhancements to a standard, such as reframing it in terms of a new technology – for instance, a newly popular language.

And talking of languages, there is also dissemination through translation. That’s one thing that working in a European context keeps high in one’s mind.

I’ve written before about management of learning technology standardization in Europe and about developments in TC353, the committee responsible for ICT in learning, education and training.

And how could a relevant qualification and course be developed?

There are several other questions whose answers would be relevant to motivating or setting up a course. Maybe some of my colleagues or readers have answers. If so, please comment!

  • As a motivation for development, how can we measure the economic value of standards, to companies and to the wider economy? There must be existing research on this question, but I am not familiar with it.
  • What might be the market for such courses? Which individuals would be motivated enough to devote their time, and what organisations (including governmental) would have an incentive to finance such courses?
  • Where might such courses fit? Perhaps as part of a technology MSc/MBA in a leading HE institution or business school?
  • How would we develop a curriculum, including practical experience?
  • How could we write good intended learning outcomes?
  • How would teaching and learning be arranged?
  • Who would be our target learners?
  • How would the course outcomes be assessed?
  • Would people with such a qualification be of value to standards developing organisations, or elsewhere?

I would welcome approaches to collaboration in developing any learning opportunity in this space.

And more widely

Looking again at these questions, I wonder whether there is something more general to grasp. Try reading over, substituting, for “standard”, other terms such as “agreement”, “law”, “norm” (which already has a dual meaning), “code of conduct”, “code of practice”, “policy”. Many considerations about standards seem to touch these other concepts as well. All of them could perhaps be seen as formulations or expressions, guiding or governing interaction between people.

And if there is much common ground between the development of all of these kinds of formulation, then learning about standardization might well be adapted to develop knowledge, skills, competence, attitudes and values that are useful in many walks of life, but particularly in the emerging economy of open co-operation and collaboration on the commons.

Why, when and how should we use frameworks of skill and competence?

(25th in my logic of competence series.)

When we understand how frameworks could be used for badges, it becomes clearer that we need to distinguish between different kinds of ability, and that we need tools to manage and manipulate such open frameworks of abilities. InLOC gives a model, and formats, on which such tools can be based.

I’ll be presenting this material at the Crossover Edinburgh conference, 2014-06-05, though my conference presentation will be much more interactive and open, and without much of this detail below.

What are these frameworks?

Frameworks of skill or competence (under whatever name) are not as unfamiliar as they might sound to some people at first. Most of us have some experience or awareness of them. Large numbers of people have completed vocational qualifications — e.g. NVQs in England — which for a long time were each based on a syllabus taken from what are called National Occupational Standards (NOSs). Each NOS is a statement of what a person has to be able to do, and what they have to know to support that ability, in a stated vocational role, or job, or function. The scope of NOSs is very wide — to list the areas would take far too much space — so the reader is asked to take a look at the national database of current NOSs, which is hosted by the UKCES on their dedicated web site.

Several professions also have good reason to set out standards of competence for active members of that profession. One of the most advanced in this development, perhaps because of the consequences of their competence on life and death, is the medical profession. Good Medical Practice, published by the General Medical Council, starts by addressing doctors:

Patients must be able to trust doctors with their lives and health. To justify that trust you must show respect for human life and make sure your practice meets the standards expected of you in four domains.

and then goes on to detail those domains:

  • Knowledge, skills and performance
  • Safety and quality
  • Communication, partnership and teamwork
  • Maintaining trust

The GMC also publishes the related Tomorrow’s Doctors, in which it

sets the knowledge, skills and behaviours that medical students learn at UK medical schools: these are the outcomes that new UK graduates must be able to demonstrate.

These are the kinds of “framework” that we are discussing here. The constituent parts of these frameworks are sometimes called “competencies”, a term intended to cover knowledge, skills, behaviours, attitudes, etc.; but as that word is a little unfriendly, and bearing in mind that practical knowledge is shown through the ability to put that knowledge into practice, I’ll use “ability” as a catch-all term in this context.

Many larger employers have good reasons to know just what the abilities of their employees are. Often, people being recruited into a job are asked in person, and employers have to go through the process of weighing up the evidence of a person’s abilities. A well managed HR department might go beyond this to maintaining ongoing records of employees’ abilities, so that all kinds of planning can be done, skills gaps identified, people suggested for new roles, and training and development managed. And this is just an outsider’s view!

Some employers use their own frameworks, and others use common industry frameworks. One industry where common frameworks are widely used is information and communications technology. SFIA, the Skills Framework for the Information Age, sets out all kinds of skills, at various levels, that are combined together to define what a person needs to be able to do in a particular role. Similar to SFIA, but simpler, is the European e-Competence Framework, which has the advantage of being fully and openly available without charge or restriction.

Some frameworks are intended for wider use than just employment. A good example is Mozilla’s Web Literacy Map, which is “a map of competencies and skills that Mozilla and our community of stakeholders believe are important to pay attention to when getting better at reading, writing and participating on the web.” They say “map”, but the structure is the same as other frameworks. Their background page sets out well the case for their common framework. Doug Belshaw suggests that you could use the Web Literacy Map for “alignment” of the kind of Open Badges that are also promoted by Mozilla.

Links to badges

You can imagine having badges for keeping track of people’s abilities, where the abilities are part of frameworks. To help people move between different roles, from education and training to work, and back again, having their abilities recognised and not having to retrain in abilities they have already mastered, those frameworks would have to be openly published, and able to be referenced in all the various contexts. It is open frameworks that are of particular interest to us here.

Badges are typically issued by organisations to individuals. Different organisations relate to abilities differently. Some organisations, doing business or providing a service, just use employees’ abilities to deliver products and services. Other organisations, focusing around education and training, just help people develop abilities, which will be used elsewhere. Perhaps most organisations, in practice, are somewhere on the spectrum between these two, where abilities are both used and developed, in varied proportions. Looking at the same thing from an individual point of view, in some roles people are just using their abilities to perform useful activities; in other roles they are developing their abilities to use in a different role. Perhaps there are many roles where, again, there is a mixture between these two positions. The value of using the common, open frameworks for badges is that the badges could (in principle) be valued across different kinds of organisation, and different kinds of role. This would then help people keep account of their abilities while moving between organisations and roles, and have those abilities more easily recognised.

The differing nature of different abilities

However, maybe we need to be more careful than simply to take every open framework and turn it into badges. If all the abilities that were used in all roles and organisations had separate badges, vast numbers of badges would exist, and we could imagine the horrendous complexity of maintaining and managing them. So it might make sense to select the most appropriate abilities for badging, as follows.

  • Some abilities are plentiful, and don’t need special training or rewarding — maybe organisations should just take them for granted, perhaps checking that what is expected is there.
  • Some abilities are hard, or impossible, to develop: you have them or you don’t. In this case, using badges would risk being discriminatory. Badges for e.g. how high a person can reach, or how long they can be in the sun without burning, would be unnecessary as well as seriously problematic, while one can think of many other personal characteristics, potentially framed as abilities, which might be less visible on the surface, but potentially lead to discrimination, as people can’t just change them.
  • Some abilities might only be able to be learned within a specific role. There is little point in creating badges for these abilities, if they do not transfer from role to role.
  • Some abilities can be developed, are not abundant, and can be transferred substantially from one role to another. These are the ones that deserve to be tracked, and for which badges are perhaps most worth developing. This still leaves open the question of the granularity of the badges.

Practical considerations governing the creation and use of frameworks

It’s hard to create a good, generally accepted common skills or competence framework. In order to do so, one has to put together several factors.

  • The abilities have to be sufficiently common to a number of different roles, between which people may want to move.
  • The abilities have to be described in a way that makes sense to all collaborating parties.
  • It must be practical to incorporate the framework into other tools.
  • The framework needs to be kept up to date, to reflect changing abilities needed for actual roles.
  • In particular, as the requirements for particular jobs vary, the components of a framework need to be presented in such a way that they can be selected, or combined with components of other frameworks, to serve the variety of roles that will naturally occur in a creative economy.
  • Thus, the descriptions of the abilities, and the way in which they are put together, need all to be compatible.

Let’s look at some of this in more detail. What is needed for several purposes is the ability to create a tailored set of abilities. This would be clearly useful in describing both job opportunities, and actual personal abilities. It is of course possible to do all of this in a paper-like way, simply cutting and pasting between documents. But realistically, we need tools to help. As soon as we introduce ICT tools, we have the requirement for standard formats which these tools can work with. We need portability of the frameworks, and interoperability of the tools.

For instance, it would be very useful to have a tool or set of tools which could take frameworks, either ones that are published, or ones that are handed over privately, and manipulate them, perhaps with a graphical interface, to create new, bespoke structures.
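To give a feel for what such a tool might do behind its graphical interface, here is a very small sketch, in Python, of selecting components from two published frameworks and combining them into a bespoke profile. Both frameworks and the role are invented purely for illustration.

```python
# Two hypothetical published frameworks, reduced to id -> title maps.
technical_framework = {
    "tf:programming-l3": "Programming, level 3",
    "tf:testing-l2": "Testing, level 2",
}
soft_skills_framework = {
    "ss:communication": "Communication skills",
    "ss:teamwork": "Team working",
}

# A bespoke structure for one role, drawing on components of both frameworks.
role_profile = {
    "role": "Junior developer (illustrative)",
    "requires": ["tf:programming-l3", "tf:testing-l2", "ss:communication"],
}
```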

Contrast with the actual position now. Current frameworks rarely attempt to use any standard format, as there are no very widely accepted standards for such a format. Within NOSs, there are some standards; the UK government has a list of their relevant documents including “NOS Quality Criteria” and a “NOS Guide for Developers” (by Geoff Carroll and Trevor Boutall). But outside this area practice varies widely. In the area of education and training, the scene is generally even less developed. People have started to take on the idea of specifying the “learning outcomes” that are intended to be achieved as a result of completing courses of learning, education or training, but practice is patchy, and there is very little progress towards common frameworks of learning outcomes.

We need, therefore, a uniform “model”, not for skills themselves, which are always likely to vary, but for the way of representing skills, and for the way in which they are combined into frameworks.

The InLOC format

Between 2011 and 2013 I led a team developing a specification for just this kind of model and format. The project was called “Integrating Learning Outcomes and Competences”, or InLOC for short. We developed CEN Workshop Agreement CWA 16655 in three parts, available from CEN in PDF format by ftp:

  1. Information Model for Learning Outcomes and Competences
  2. Guidelines including the integration of Learning Outcomes and Competences into existing specifications
  3. Application Profile of Europass Curriculum Vitae and Language Passport for Integrating Learning Outcomes and Competences

The same content and much extra background material is available on the InLOC project web site. This post is not the place to explain InLOC in detail, but anyone interested is welcome to contact me directly for assistance.

What can people do in the meanwhile?

I’ve proposed elsewhere often enough that we need to develop tools and open frameworks together, to achieve a critical mass where there are enough frameworks published to make it worthwhile for tool developers, and sufficiently developed tools to make it worthwhile to make the extra effort to format frameworks in the common way (hopefully InLOC) that will work with the tools.

There will be a point at which growth and development in this area will become self-sustaining. But we don’t have to wait for that point. This is what I think we could usefully be doing in the meanwhile, if we are in a position to do so.

1. Build your own frameworks
It’s a challenge if you haven’t been involved in skill or competence frameworks before, but the principles are not too hard to grasp. Start out by asking what roles, and what functions, there are in your organisation, and try to work out what abilities, and what supporting knowledge, are needed for each role and for each function. You really need to do this, if you are to get started in this area. Or, if you are a microbusiness that really doesn’t need a framework, perhaps you can build one for a larger organisation.
2. Use parts of frameworks that are there already, where suitable
It may not be as difficult as you thought at first. There are many resources out there, such as NOSs, and the other frameworks mentioned above. Search, study, see if you can borrow or reuse. Not all frameworks allow it, but many do. So, some of your work may already be done for you.
3. Publish your frameworks, and their constituent abilities, each with a URL
This is the next vital step towards preparing your frameworks for open use and reuse. The constituent abilities (and levels, see the InLOC documentation) really need their own identifiers, as well as the overall frameworks, whether you call those identifiers URLs, URIs or IRIs.
4. Use the frameworks consistently throughout the organisation
To get the frameworks to stick, and to provide the motivation for maintaining them, you will have to use them in your organisation. I’m not an expert on this side of practice, but I would have thought that the principles are reasonably obvious. The more you have a uniform framework in use across your organisation, the more people will be able to see possibilities for transfer of skills, flexible working, moving across roles, job rotation, and other similar initiatives that can help satisfy employees.
5. Use InLOC if possible
It really does provide a good, general purpose model of how to represent a framework, so that it can be ready for use by ICT systems. Just ask if you need help on this!
6. Consider integrating open badges
It makes sense to consider your badge strategy and your framework strategy together. You may also find this old post of mine helpful.
7. Watch for future development of tools, or develop some yourself!
If you see any, try to help them towards being really useful, by giving constructive feedback. I’d be happy to help any tool developers “get” InLOC.

I hope these ideas offer people some pointers on a way forward for skill and competence frameworks. See my other posts for related ideas. Comments or other feedback would be most welcome!

InLOC and OpenBadges: a reprise

(23rd in my logic of competence series.)

InLOC is well designed to provide the conceptual “glue” or “thread” for holding together structures and planned pathways of achievement, which can be represented by Mozilla OpenBadges.

Since my last post — the last of the previous academic year, also about OpenBadges and InLOC — I have been invited to talk at OBSEG – the Open Badges in Scottish Education Group. This is a great opportunity, because it involves engaging with a community with real aspirations for using Open Badges. One of the things that interests people in OBSEG is setting up combinations of lesser badges, or pathways for several lesser badges to build up to greater badges. I imagine that if badges are set up in this way, the lesser badges are likely to become the stepping stones along the pathway, while it is the greater badge that is likely to be of direct interest to, e.g., employers.

All this is right in the main stream of what InLOC addresses. Remember that, using InLOC, one can set out and publish a structure or framework of learning outcomes, competenc(i)es, etc., (called “LOC definitions”) each one with its own URL (or IRI, to be technically correct), with all the relationships between them set out clearly (as part of the “LOC structure”).

The way in which these Scottish colleagues have been thinking of their badges brings home another key point to put the use of InLOC into perspective. As with so many certificates, awards, qualifications etc., part of the achievement is completion in compliance with the constraints or conditions set out. These are likely not to be learning outcomes or competences in their own right.

The simplest of these non-learning-outcome criteria could be attendance. Attendance, you might say, stands in for some kind of competence; but the kind of basic timekeeping and personal organisation ability that is evidenced by attendance is very common in many activities, so is unlikely to be significant in the context of a Badge awarded for something else. Other such criteria could be grouped together under “ability to follow instructions” or something similar. A different kind of criterion could be the kinds of character “traits” that are not expected to be learned. A person could be expected to be cheerful; respectful; tall; good-looking; or a host of other things not directly under their control, and either difficult or impossible to learn. These non-learning-outcome aspects of criteria are not what InLOC is principally designed for.

Also, over the summer, Mozilla’s Web Literacy Standard (“WebLitStd”) has been progressing towards version 1.0, to be featured in the upcoming MozFest in London. I have been tracking this with the help of Doug Belshaw, who after great success as an Open Badges evangelist has been focusing on the WebLitStd as its main protagonist. I’m hoping soon (hopefully by MozFest time) to have a version of the WebLitStd in InLOC, and this brings to the fore another very pragmatic question about using InLOC as a representation.

Many posts ago, I was drawing out the distinction between LOC (that is, Learning Outcome or Competence) definitions that are, on the one hand, “binary”, and on the other hand, “rankable”. This is written up in the InLOC documentation. “Binary” ones are the ones for which you can say, without further ado, that someone has achieved this learning outcome, or not yet achieved it. “Rankable” ones are ones where you can put people in order of their ability or competence, but there is no single set of criteria distinguishing two categories that one could call “achieved” and “not yet achieved”.

In the WebLitStd, it is probably fair to say that none of the “competencies” are binary in these terms. One could perhaps characterise them as rankable, though perhaps not fully, in that there may be two people with different configurations of that competency, as a result perhaps of different experiences, each of whom was better in some ways than the other, and conversely less good in other ways. It may well be similar in some of the Scottish work, or indeed in many other Badge criteria. So what to do for InLOC?

If we recognise a situation where the idea is to issue a badge for an achievement that is clearly not a binary learning outcome, we can outline a few stages of development of the relevant frameworks, which would result in a progressively tighter match to an InLOC structure or InLOC definitions. I’ll take the WebLitStd as illustrative material here.

First, someone may develop a badge for something that is not yet well-defined anywhere — it could have been conceived without reference to any existing standards. To illustrate this case, an example of a title could be “using Web sites”. There is no one component of the WebLitStd that covers “using the web”, and yet “using” it doesn’t really cover Web literacy as a whole. In this case, the Badge criteria would need to be detailed by the Badge awarder, specifically for that badge. What can still be done within OpenBadges is that there could be alignment information; however it is not always entirely clear what the relationship is meant to be between a badge and a standard it is “aligned” to. The simplest possibility is that the alignment is to some kind of educational level. Beyond this it gets trickier.

A second possibility for a single badge would be to refer to an existing “rankable” definition. For example, consider the WebLitStd skill, “co-creating web resources”, which is part of the “sharing & collaborating” competency of the “Connecting” strand. To think in detail about how this kind of thing could be badged, we need to understand what would count (in the eye of the badge issuer) as “co-creating web resources”. There are very many possible examples that readily come to mind, from talking about what a web page could have on it, to playing a vital part in a team building a sophisticated web service. One may well ask, “what experiences do you have of co-creating web resources?” and, depending on the answer, one could roughly rank people in some kind of order of amount and depth of experience in this area. To create a meaningful badge, a more clearly cut line needs to be drawn. Just talking about what could be on a web page is probably not going to be very significant for anyone, as it is an extremely common experience. So what counts as significant? It depends on the badge issuer, of course, and to make a meaningful badge, the badge issuer will need to define what the criteria are for the badges to be issued.

A third and final stage, ideal for InLOC, would be if a badge is awarded with clearly binary criteria. In this case there is nothing standing in the way of having the criteria property of the Badge holding a URL for a concept directly represented as a binary InLOC LOCdefinition. There are some WebLitStd skills that could fairly easily be seen as binary. Take “distinguishing between open and closed licensing” as an example. You show people some licenses; either they correctly identify the open ones or they don’t. That’s (reasonably) clear cut. Or take “understanding and labeling the Web stack”. Given a clear definition of what the “Web stack” is, this appears to be a fairly clear-cut matter of understanding and memory.
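To illustrate the shape of this third stage, here is a sketch of badge metadata whose criteria URL points straight at a binary InLOC LOC definition, with an alignment to a broader rankable definition alongside it. The field names follow the general shape of an Open Badges badge class, but the Open Badges specification should be consulted for the normative property names; all URLs here are hypothetical.

```python
# Illustrative only: badge metadata for the "third stage", where the
# criteria URL is itself a binary InLOC LOC definition. All URLs invented.

badge_class = {
    "name": "Web stack labelling",
    "description": "Awarded for understanding and labelling the Web stack.",
    # Binary criterion: a URL that resolves to an InLOC LOC definition.
    "criteria": "http://example.org/loc/weblit/label-the-web-stack",
    # At the earlier stages, a badge might instead only carry alignment
    # to a broader, rankable definition, with criteria written in prose.
    "alignment": [
        {
            "name": "Web mechanics (illustrative alignment)",
            "url": "http://example.org/loc/weblit/web-mechanics",
        }
    ],
}
```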

Working back again, we can see that in the third stage, a Badge can have criteria (not just alignments) which refer directly to InLOC information. At the second and first stage, badge criteria need something more than is clearly set out in InLOC information already published elsewhere. So the options appear to be:

  1. describing what the criteria are in plain text, with reference to InLOC information only through alignment; and
  2. defining an InLOC structure specifically for the badge, detailing the criteria.

The first of these options has its own challenges. It will be vital to coherence to ensure that the alignments are consistent with each other. This will be possible, for example, if the aspects of competence covered are separate (independent; orthogonal even). So, if one alignment is to a level, and the second to a topic area, that might work. But it is much less promising if more specific definitions are referred to.

(I’d like to write an example at this point, but can’t decide on a topic area — I need someone to give me their example and we can discuss it and maybe put it here.)

From the point of view of InLOC, the second option is much more attractive. In principle, any badge criteria could be analysed in sufficient detail to draw out the components which can realistically be thought of as learning outcomes — properties of the learners — that may be knowledge, skill, competence, etc. No matter how unusual or complex these are, they can in principle be expressed in InLOC form, and that will clarify what is really “aligned” with what.

I’ll say again, I would really like to have some well-worked-out examples here. So please, if you’re interested, get in touch and let’s talk through some that are of interest to you. I hope to be starting that in Glasgow this week.

The pragmatics of InLOC competence logic

(21st in my logic of competence series.)

Putting together a good interoperability specification is hard, and especially so for competence. I’ve tried to work into InLOC as many of the considerations in this Logic of Competence series as I could, but these are all limited by the scope of a pragmatically plausible goal. My hypothesis is that it’s not possible to have a spec that is at the same time both technically simple and flexible, and intuitively understandable to domain practitioners.

Here I’ll write about why I believe that, and later follow up to finalise the pragmatics of the logic of competence as represented by InLOC.

Doing a specification like InLOC gives one an opportunity to attract all kinds of criticism from people, much of it constructive. No attempts to do such a spec in the past have been great successes, and one wonders why that is. Some of the criticism I have heard has helped me to formulate the hypothesis above, and I’ll try to explain my reasoning here.

Turn the hypothesis on its head. What would make it possible to have a spec that is technically simple, and at the same time intuitively understandable to domain practitioners? Fairly obviously, there would have to be a close correspondence between the objects of the domain of expertise, and the constructs of the specification.

For each reader, there may appear to be a simple solution. Skills, competences, learning outcomes, etc., have this structure — don’t they? — and so one just has to reproduce that structure in the information model to get a workable interoperability spec that is intuitively understandable to people — well, like me. Well, “not”, as people now say as a one-word sentence.

Actually, there is great diversity in the ways people conceive of and structure learning outcomes, competences and the like. Some structures have different levels of the same competence, others do not. Some competences are defined in a binary fashion, that allows one to say “yes” or “no” to whether people have that competence; other competences are defined in a way that allows people to be ranked in order of that competence. Some competence structures are quite vague, with what look like a few labels that give an indication of the kinds of quality that someone is looking for, without defining what exactly those labels mean. Some structures — particularly level frameworks like the EQF — are deliberately defined in generic terms that can apply across a wide range of areas of knowledge and skill. And so on.

This should really be no surprise, because it is clear from many people’s work (e.g. my PhD thesis) that different people simplify complex structures in their own different ways, to suit their own purposes, and in line with their own backgrounds and assumptions. There is, simply, no way in which all these different approaches to defining and structuring competence can be represented in a way that will make intuitive sense to everyone.

What one can do is to provide a relatively simple abstract representation that can cover all kinds of existing structures. This is just what InLOC is aiming to do, but up to now we haven’t been quite clear enough about that. To get to something that is intuitive for domain practitioners, one needs to rely on tools being built that reflect, in the user interface, the language and assumptions of that particular group of practitioners. The focus for the “direct” use of the spec then clearly shifts onto developers. What, I suggest, developers need is a specification adapted to their needs — to build those interfaces for domain practitioners. The main requirements of this seem to me to be that the spec:

  1. gives enough structure so that developers can map any competence structure into that format;
  2. does not have any unnecessary complexity;
  3. gives a readily readable format, debuggable by developers (not domain practitioners).

So when you look at the draft InLOC CWAs, or even better if you come to the InLOC dissemination event in Brussels on 16th April, you know what to expect, and you know the aims against which to evaluate InLOC. InLOC offers no magic wand to bring together incompatible views of diverse learning outcome and competence structures. But it does offer a relatively simple technical solution, that allows developers who have little understanding of competence domains to develop tools that really do match the intuitions of various domain practitioners.

Three InLOC drafts for CEN Workshop Agreements are currently out for public comment — links from the InLOC home page — please do comment if you possibly can, and please consider coming to our dissemination event in Brussels, April 16th.

What is my work?

Is there a good term for my specialist area of work for CETIS? I’ve been trying out “technology for learner support”, but that doesn’t fully seem to fit the bill. If I try to explain, reflecting on 10 years (as of this month) involvement with CETIS, might readers be able to help me?

Back in 2002, CETIS (through the CRA) had a small team working with “LIPSIG”, the CETIS special interest group involved with Learner Information (the “LI” of “LIPSIG”). Except that “learner information” wasn’t a particularly good title. It was also about the technology (soon to be labelled “e-portfolio”) that gathered and managed certain kinds of information related to learners, including their learning, their skills – abilities – competence, their development, and their plans. It was therefore also about PDP — Personal Development Planning — and PDP was known even then by its published definition “a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development”.

There’s that root word, support (appearing as “supported”), and PDP is clearly about an “individual” in the learner role. Portfolio tools were, and still are, thought of as supporting people: in their learning; with the knowledge and skills they may attain, and the evidence of these through their performance; and in their development as people, including their learning and work roles.

If you search the web now for “learner support”, you may get many results about funding — OK, that is financial support. Narrowing the search down to “technology for learner support”, the JISC RSC site mentions enabling “learners to be supported with their own particular learning issues”, and this doesn’t obviously imply support for everyone, but rather for those people with “issues”.

As web search is not much help, let’s take a step back, and try to see this area in a wider perspective. Over my 10 years involvement with CETIS, I have gradually come to see CETIS work as being in three overlapping areas. I see educational (or learning) technology, and related interoperability standards, as being aimed at:

  • institutions, to help them manage teaching, learning, and other processes;
  • providers of learning resources, to help those resources be stored, indexed, and found when appropriate;
  • individual learners;
  • perhaps there should be a branch aimed at employers, but that doesn’t seem to have been salient in CETIS work up to now.

Relatively speaking, there have always seemed to be plenty of resources to back up CETIS work in the first two areas, perhaps because we are dealing with powerful organisations and large amounts of money. But, rather than get involved in those two areas, I have always been drawn to the third — to the learner — and I don’t think it’s difficult to understand why. When I was a teacher for a short while, I was interested not in educational administration or writing textbooks, but in helping individuals learn, grow and develop. Similar themes pervade my long term interests in psychology, psychotherapy, counselling; my PhD was about cognitive science; my university teaching was about human-computer interaction — all to do with understanding and supporting individuals, and much of it involving the use of technology.

The question is, what does CETIS do — what can anyone do — for individual learners, either with the technology, or with the interoperability standards that allow ICT systems to work together?

The CETIS starting point may have been about “learner information”, but who benefits from this information? Instead of focusing on learners’ needs, it is all too easy for institutions to understand “learner information” as information that enables institutions to manage and control the learners. Happily though, the group of e-portfolio systems developers frequenting what became the “Portfolio” SIG (including Pebble, CIEPD and others) were keen to emphasise control by learners, and when they came together over the initiative that became Leap2A, nearly six years ago, the focus on supporting learners and learning was clear.

So at least then CETIS had a clear line of work in the area of e-portfolio tools and related interoperability standards. That technology is aimed at supporting personal, and increasingly professional, development. Partly, this can be by supporting learners in taking responsibility for tracking the outcomes of their own learning. Several generic skills or competences support their development as people, as well as in their roles as professionals or learners. But also, the fact that learners enter information about their own learning and development on the portfolio (or whatever) system means that the information can easily be made available to mentors, peers, or whoever else may want to support them. This means that support from people is easier to arrange, and better informed, and thus likely to be more effective. In this way, the technology supports learners and learning indirectly, as well as directly.

That’s one thing that the phrase “technology for learner support” may miss — support for the processes of other people supporting the learner.

Picking up my personal path … building on my involvement in PDP and portfolio technology, it became clear that current representations of information about skills and competence were not as effective as they could be in supporting, for instance, the transition from education to work. So it was that I found myself involved in the area that is currently the main focus of my work, both for CETIS and on my own account, through the InLOC project. This relates to learners rather indirectly: InLOC is enabling the communication and reuse of definitions and descriptions of learning outcomes and competence information, and particularly structures of sets of such definitions — which have up to now escaped an effective and well-adopted standard representation. Providing this will mean that it is much easier for educators and employers to refer to the same definitions, and that should make a big positive difference to learners being able to prepare themselves effectively for the demands of their chosen work; or perhaps enable them to choose courses that will lead to the kind of work they want. Easier, clearer and more accurate descriptions of abilities must surely support all the processes by which people acquire and evidence abilities, and make use of the related evidence towards their jobs, their well-being, and maybe the well-being of others.
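
To make “structures of sets of such definitions” slightly more concrete, here is a minimal sketch, in Python, of the kind of information involved. It is illustrative only, and does not use the actual InLOC vocabulary or schema; every class name, field and identifier in it is hypothetical.

```python
# Illustrative only: not the InLOC schema, just the general shape of a
# structure of learning outcome / competence definitions that educators
# and employers could both refer to by identifier.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class CompetenceDefinition:
    id: str                       # a stable identifier that others can cite
    title: str                    # short, human-readable label
    description: str = ""         # fuller statement of the ability
    level: Optional[str] = None   # a level on some named scale, if one is defined


@dataclass
class CompetenceStructure:
    id: str
    title: str
    definitions: List[CompetenceDefinition] = field(default_factory=list)
    # Relations between definitions, expressed by identifier so that the
    # same definition can be reused in more than one structure.
    relations: List[Tuple[str, str, str]] = field(default_factory=list)  # (from_id, to_id, relation_type)


# A tiny example structure (hypothetical identifiers throughout).
presenting = CompetenceDefinition(
    id="ex:present-10min",
    title="Can give a clear ten-minute presentation to a small audience",
)
framework = CompetenceStructure(
    id="ex:presentation-skills",
    title="Presentation skills (illustrative)",
    definitions=[presenting],
    relations=[("ex:presentation-skills", "ex:present-10min", "hasPart")],
)
```

The detail matters much less than the principle: both the definitions and the relations between them carry stable identifiers, so that educators and employers can point at exactly the same thing.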

My most recent interests are evidenced in my last two blog posts — Critical friendship pointer and Follower guidance: concept and rationale — where I have been starting to grapple with yet more complex issues. People benefit from appropriate guidance, but it is unlikely there will ever be the resources to provide this guidance from “experts” to everyone — if that is even what we really want.

I see these issues also as part of the broad concern with helping people learn, grow and develop. To provide full support without information technology only looks possible in a society that is stable — where roles are fixed and everyone knows their place, and the place of others they relate to. In such a traditionalist society, anyone and everyone can play their part maintaining the “social order” — but, sadly, such a fixed social order does not allow people to strike out in their own new ways. In any case, that is not our modern (and “modernist”) society.

I’ve just been reading Hermann Hesse’s “Journey to the East” — a short, allegorical work. (It has been reproduced online.) Interestingly, it describes symbolically the kind of processes that people might have to go through in the course of their journey to personal enlightenment. The description is in no way realistic. Any “League” such as Hesse described, dedicated to supporting people on their journey, or quest, would in practice be able to support only a very few. Hesse had no personal information technology.

Robert K. Greenleaf was inspired by Hesse’s book to develop his ideas on “Servant Leadership”. His book of that name was put together in 1977, still before the widespread use of personal information technology, and the recognition of its potential. This idea of servant leadership is also very clearly about supporting people on their journey; supporting their development, personally and professionally. What information would be relevant to this?

Providing technology to support peer-to-peer human processes seems a very promising approach to allowing everyone to find their own, unique and personal way. What I wrote about follower guidance is related to this end: to describe ways by which we can offer each other helpful mutual support to guide our personal journeys, in work as well as learning and potentially other areas of life. Is there a short name for this? How can technology support it?

My involvement with Unlike Minds reminds me that there is a more important, wider concept than personal learning, which needs supporting. We should be aspiring even more to support personal well-being. And one way of doing this is through supporting individuals with information relevant to the decisions they make that affect their personal well-being. This can easily be seen to include: what options there are; ideas on how to make decisions; and what the consequences of those decisions may be. It is an area which has been more than touched on under the heading “Information, Advice and Guidance”.

I mentioned the developmental models of William G Perry and Robert Kegan back in my post earlier this year on academic humility. An understanding of these aspects of personal development is an essential part of what I have come to see as needed. How can we support people’s movement through Perry’s “positions”, or Kegan’s “orders of consciousness”? Recognising where people are in this developmental dimension is vital to informing effective support in so many ways.

My professional interest, where I have a very particular contribution, is around the representation of the information connected with all these areas. That’s what we try to deal with for interoperability and standardisation. So what do we have here? A quick attempt at a round-up, with a rough sketch after the list of how the pieces might fit together…

  • Information about people (learners).
  • Information about what they have learned (learning outcomes, knowledge, skill, competence).
  • Information that learners find useful for their learning and development.
  • Information about many subtler aspects of personal development.
  • Information relevant to people’s well-being, including
    • information about possible choices and their likely outcomes
    • information about individual decision-making styles and capabilities
    • and, as this is highly context-dependent, information about contexts as well.
  • Information about other people who could help them
    • information supporting how to find and relate to those people
    • information supporting those relationships and the support processes
    • and in particular, the kind of information that would promote a trusting and trusted relationship — to do with personal values.
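
As a very rough sketch of what the sum of all this information might look like if it were gathered in one place under the individual’s control, here is an illustrative Python fragment. It corresponds to no existing specification; every name and field in it is hypothetical.

```python
# Illustrative only: one way the categories above might hang together as a
# single, learner-controlled record. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LearnerRecord:
    person_id: str
    learning_outcomes: List[str] = field(default_factory=list)    # outcomes, knowledge, skill, competence (by identifier)
    useful_resources: List[str] = field(default_factory=list)     # information the learner finds useful for learning and development
    development_notes: List[str] = field(default_factory=list)    # subtler aspects of personal development
    wellbeing_info: Dict[str, str] = field(default_factory=dict)  # choices, likely outcomes, decision-making style, context
    supporters: List[str] = field(default_factory=list)           # people who could help, and how to find and relate to them
    access_grants: Dict[str, List[str]] = field(default_factory=dict)  # which supporter may see which parts

    def grant_access(self, supporter_id: str, sections: List[str]) -> None:
        """The individual, not an institution, decides who sees what."""
        self.access_grants.setdefault(supporter_id, []).extend(sections)
```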

I have the strong sense that this all should be related. But the field as a whole doesn’t seem to have a name. I am clear that it is not just the same as the other two areas (in my mind at least) of CETIS work:

  • information of direct relevance to institutions
  • information of direct relevance to content providers.

Of course my own area of interest is also relevant to those other players. Personal well-being is vital to the “student experience”, and thus to student retention, as well as to success in learning. That is of great interest to institutions. Knowing about individuals is of great value to those wanting to sell all kinds of services to them, but particularly services to do with learning and resources supporting learning.

But now I ask people to think: where there is an overlap between information that the learner has an interest in, and information about learners of interest to institutions and content providers, surely the information should be under the control of the individual, not of those organisations?

What is the sum of this information?

Can we name that information and reclaim it?

Again, can people help me name this field, so my area of work can be better understood and recognised?

If you can, you earn 10 years worth of thanks…

Follower guidance: concept and rationale

The idea that I am calling “follower guidance” is about how to relate with chosen others to promote good work, well-being, personal growth and development, in an essentially peer-to-peer manner — it’s an alternative to “mentoring”.

Detailing this vision will prepare the ground for thinking about technology to support the relationships and the learning that results from them, which will fill the space left when traditional control hierarchies no longer work well.

The motivation for the idea

Where do people get their direction from? What or who guides someone, and how? How do people find their way, in life, in education, in a work career, etc.? How do people find a way to live a good and worthwhile life, with satisfying, fulfilling work and relationships? All big questions, addressed, as circumstances allow, by others involved in those people’s education, in their personal and professional development, in advice and guidance, coaching and mentoring; as well as by their family and friends.

In my previous post I set out some related challenges. Since then, I was reminded of these kinds of question by a blog post I saw via Venessa Miemis.

To put possible answers in context: in traditionalist societies I would expect people’s life paths to have relatively few options, and the task of orientation and navigation therefore to be relatively straightforward. People know their allotted place in society, and if they are happy with that, fine. But the appropriate place for this attitude is progressively shrinking back into the childhood years, as the world has ever more variety — and ever less certainty — available to adults. Experts often have more options to hand than their own internal decision making can easily process. Perhaps I can illustrate this from my own situation.

Take CETIS, where I currently have a 0.6 FTE contract. It’s a brilliant place to work, within the University of Bolton’s IEC, with so many people who seem somehow to combine expertise and generosity with passion for their own interesting areas of work. It has never felt like a hierarchical workplace, and staff there are expected to be largely self-determining as well as self-motivated. While some CETIS people work closely together, I do so less, because other staff at Bolton are not so interested in the learner-centred side of learning technology and interoperability. Working largely by myself, it is not so easy to decide on priorities for my own effort, and it would be hard for anyone else to give an informed opinion on where I would best devote my time. Happily, the norm is for things to work out, with what I sense as priorities being accepted by others as worthwhile. But what if … ? It’s not the norm in CETIS culture for anyone to be told that they must stop doing what they think is most worthwhile and instead do something less appealing.

Or take Unlike Minds (“UM”), with whom I am currently investigating collaboration, both for myself and for CETIS. UM is a “capability network” — essentially a non-hierarchical grouping of people with fascinatingly rich and diverse backgrounds and approaches, but similarities of situation and motivation. Here, the starting point is that everyone is assumed to be independent and professional (though some, like me, have some employment). It is a challenge to arrange for very busy independent associates to spend significant amounts of their own time “following” the work of other UMs. But if they did so, they might well be able to contribute to filling any orientation deficit of others, as they would in turn be helped if they wanted. I would expect that the more colleagues know about each other’s work, the more they can help focus motivation; the richer will be the collective UM culture; and the more effective UM will become as a capability network.

I mention just these two because I have personal knowledge of them, but surely this must apply to so many new-style organisations and networks that shun being governed ultimately by the necessity to maximise profit. Often no one is in a position to direct work from “the top”, either because the management simply don’t have the deep specialist knowledge to work out what people should be doing, or because there is no governance that provides a “top” at all. The risk in all of these cases is a lack of coordination and coherence. There is also a risk that individuals perform below their potential, because they are not getting enough informed and trusted feedback on their current activities. How many independent workers these days, no matter how supposedly expert, really have the knowledge to ensure even their own optimum decisions? Very few indeed, I guess, if for no other reason than that there is too much relevant available knowledge to be on top of it all.

Then there is the danger of over-independent experts falling into the trap of false guru-hood. Without proper feedback, where followers gather largely in admiration, a talented person may have the illusion of being more correct than he or she really is. Conversely, without dedicated and trusted feedback, the highly talented who lack confidence can easily undervalue what they have to offer. The starting point of my previous post was the observation that people are not reliable judges of their own abilities or personality, and the mistakes can be made in either direction.

That is my broad-brush picture of the motivation, the rationale, or the requirement. So how can we address these needs?

The essence of follower guidance

I will refer to the person who is followed, and who receives the guidance, as the “mover”; the other person I will call the “follower guide”. Here are some suggestions about how such a system could work, and they all seem to me to fit together; a rough sketch of the information involved follows the list.

  • Follower guidance is not hierarchical. The norm is for everyone to play both roles: mover and guide. Otherwise the numbers don’t add up.
  • Each mover has more than one follower guide. In my own experience, it is much more persuasive to have two or three people tell you something than one alone. The optimal number for a balance between effort and quality (in each situation and for each person) may vary, but I think three might be about right in many cases. This reliance on several guides is one way in which follower guidance differs from co-counselling.
  • The mentor role is different. There is a role for someone like a mentor, but in a follower guidance culture they would not be delivering the guidance, but rather trying to arrange the best matching of movers with follower guides.
  • Arrangements are by mutual agreement. It is essential that the mover and follower guide both want to play their roles with each other. Reluctant participants are unlikely to work. Good matches may be helped through mentoring.
  • Follower guides start by following. Central to the idea is that follower guides know the movers well, at least in the area which they are following. Guidance suggestions will then be well-informed and more likely to be well received, growing trust.
  • Follower guides may select areas to follow. The mover needs to spell out the areas of work or life that may be followed; but follower guides cannot be expected to be interested in all of someone’s life and work — nor can a mover be expected to trust people equally in different areas.
  • Follower guides offer questions, suggestions and feedback naturally. Dialogue may be invited through questions or personal suggestions, whenever it seems best. Movers may or may not accept suggestions or address questions; but they are more likely to respond to ideas that come from more than one follower guide.
  • The medium of dialogue needs to be chosen. Positive reinforcement is naturally given openly, e.g. as a comment on a blog post, or a tweet. The media for questions and critical feedback need to be judged more carefully, to maintain trust. This is one way in which follower guidance may differ from simple following.
  • Follower guides are committed. Movers should be able to rely on their follower guides for feedback and opinion when they need it. That means the follower guides have to stay up to date with the mover’s actions or outputs. This is only likely if they have a genuine interest in the area of the mover’s work they are following. This also will help build trust.
  • Time spent should not be burdensome. If following comes from genuine interest, the time spent should be a natural part of the follower guide’s work. In any case, one can follow quite a lot in, say, half an hour a week. If guidance is natural, spontaneous and gentle, it may be delivered very briefly.
  • Follower guides should not all be older or wiser. This may be appropriate for mentors, but there is value in ideas from all quarters, as recognised in the idea of 360° feedback. Anyway the numbers would not work out.
  • The fit of values needs care. Trust will be more easily established the better the values fit. The more secure and confident a mover is, the more they may be able to benefit from feedback from follower guides outside their value set.
  • Trust needs to be built up over time and maintained. Mentoring may help people to trust and to be trustworthy. If trust is nevertheless lost, it is unlikely that a follower guidance relationship would continue.
  • The follower guidance practices should be followed and guided. How could this best be done? Perhaps a question for the cyberneticians?
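
To begin thinking about the technology that might support such practices, here is a rough, purely illustrative sketch of the information a supporting tool might hold. The class and field names are my own invention, not those of any existing system.

```python
# Illustrative only: the kind of information a follower guidance tool
# might manage. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class GuidanceLink:
    mover: str                         # the person being followed
    follower_guide: str                # the person following and offering guidance
    areas_followed: List[str]          # only the areas the mover has opened up
    mutually_agreed: bool = True       # both parties want to play their roles
    feedback_channel: str = "private"  # questions and critical feedback kept to a trusted medium


@dataclass
class FollowerGuidanceNetwork:
    links: List[GuidanceLink] = field(default_factory=list)

    def guides_for(self, mover: str) -> List[str]:
        """Each mover would ideally have a few (say, three) follower guides."""
        return [link.follower_guide for link in self.links if link.mover == mover]

    def movers_for(self, guide: str) -> List[str]:
        """Everyone plays both roles, otherwise the numbers don't add up."""
        return [link.mover for link in self.links if link.follower_guide == guide]
```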

What do you think about the importance of each one of these points? I’d like to know. And could you imagine practising either side of this kind of relationship? Who with? What would come easily, what would you enjoy, and what would be challenging?

Where does this take us?

This concept is too large to be easily digested at one sitting. I hope I have given enough motivation and outline of the general idea that readers get the sense of what I am trying to get at. I’ve outlined above the way I could see it working, but there is so much more detail to work out. Depending on the response to this post, I will take the ideas forward here or elsewhere.

I do think that this kind of envisioning plays a useful part in the life of CETIS and the IEC. Colleagues are most welcome to criticise the ideas, and link them up to other research. If there already is related practice somewhere, that would be good to know. If people see what I am getting at, they can offer alternative solutions to the challenges addressed. Then, we might think about the kinds of (learning or educational) technology that might support such practices, and the information that might be managed and communicated. We might be able to see links with existing technologies and practices.

In the terms of Robert Kegan, I’m pointing towards a challenge of “modern” life, not, as Kegan focuses more on, in the transition between traditionalist and modern, but rather a challenge inherent in the individualistic nature of current modernism. As Brian said (in Monty Python’s “Life of Brian”), “You’ve all got to work it out for yourselves.” “Don’t let anyone tell you what to do!” This advice can help people grow to a maturity of individualism, but can also hold people back from further growth, through what Kegan calls “deconstructive postmodernism”, towards “reconstructive postmodernism”.

Most significant to me would be the attempt to implement a system such as this that I could participate in myself. This would include my trusted follower guides coming back to me with comments on this post, of course … At the time of writing, thanks to Neil and Alan for commenting on the preceding post, and I very much appreciate those kinds of comment.