What is there to learn about standardization?

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular where there is a body of knowledge and experience of many kinds of standardization organisations and processes, as well as approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

But how can we share and pass on that expertise? This question has arisen from time to time during the 12 years I’ve been associated with Cetis, including the last six working from our base in the IEC in Bolton. While Jisc were employing us to run Special Interest Groups, meetings, and conferences, and to support their project work, we at least had some scope for sharing. The SIGs are sadly long gone, but what about other ways of sharing? What about running some kind of course? To run courses, we have to address the question of what people might want to learn in our areas of expertise. On a related question, how can we assemble a structured summary even of what we ourselves have learned about this rich and challenging area?

These are my own views about what I sense I have learned and could pass on; but also about the topics where I would think it worthwhile to know more. All of these views are in the context of open standards in learning technology and related areas.

How are standards developed?

A formal answer for formal standards is straightforward enough. But this is only part of the picture. Standards can start life in many ways, from the work of one individual inventing a good way of doing something, through to a large corporation wanting to impose its practice on the rest of the world. It is perhaps more significant to ask …

How do people come up with good and useful standards?

The more one is involved in standardization, the richer and more subtle one’s answer to this becomes. There isn’t one “most effective” process, nor one formula for developing a good standard. But in Cetis, we have developed a keen sense of what is more likely to result in something that is useful. It includes the close involvement of the people who are going to implement the standard – perhaps software developers. Often it is a good idea to develop the specification for a standard hand in hand with its implementation. But there are many other subtleties which could be brought out here. This also raises a further question …

What makes a good and useful standard?

What one comes to recognise with time and experience is that the most effective standards are relatively simple and focused. The more complex a standard is, the less flexible it tends to be. It might be well suited to the precise conditions under which it was developed, but those conditions often change.

There is much research to do on this question, and people in Cetis would provide an excellent knowledge base for this, in the learning technology domain.

What characteristics of people are useful for developing good standards?

Most likely anyone who has been involved in standardization processes will be aware of some people whose contribution is really helpful, and others who seem not to help so much. Standardization works effectively as a consensus process, not as a kind of battle for dominance. So the personal characteristics of people who are effective at standardization are similar to those of people who are good at consensus processes more widely. Obviously, the group of people involved must have a good technical knowledge of their domain, but deep technical knowledge is not always allied to an attitude that is consistent with consensus process.

Can we train, or otherwise develop, these useful characteristics?

One question that really interests me is, to what extent can consensus-friendly attitudes be trained or developed in people? It would be regrettable if part of the answer to good standardization process were simply to exclude unhelpful people. But if this is not to happen, those people would need to be open to changing their attitudes, and we would have to find ways of helping them develop. We might best see this as a kind of “enculturation”, and use sociological knowledge to help understand how it can be done.

After answering that question, we would move on to the more challenging “how can these characteristics be developed?”

How can standardization be most effectively managed?

We don’t have all the answers here. But we do have much experience of the different organisations and processes that have brought out interoperability standards and specifications. Some formal standardization bodies adopt processes that are not open, and we find this quite unhelpful to the management of standardization in our area. Bodies vary in how much they insist that implementation goes hand in hand with specification development.

The people who can give most to a standardization process are often highly valued and short of time. Conversely, those who hinder it most, including the most opinionated, often seem to have plenty of time to spare. To manage the standardization process effectively, this variety of people needs to be allowed for. Ideally, this would involve training in consensus working, as imagined above, but until then, sensitive handling of those people needs considerable skill. A supplementary question would be, how does one train people to handle others well?

If people are competent at consensus working, the governance of standardization is less important. Before then, the exact mechanisms for decision making and influence, formal and informal, are significant. This means that the governance of standards organisations is on the agenda for what there is to learn. There is still much to learn here, through suitable research, about how different governance structures affect the standardization process and its outcomes.

Once developed, how are standards best managed?

Many of us have seen the development of a specification or standard, only for it never really to take hold. Other standards are overtaken by events, and lose ground. This is not always a bad thing, of course – it is quite proper for one standard to be displaced by a better one. But sometimes people are not aware of a useful standard at the right time. So, standards not only need keeping up to date, but they may also need to be continually promoted.

As well as promotion, there is the more straightforward maintenance and development. Web sites with information about the standard need maintaining, and there is often the possibility of small enhancements to a standard, such as reframing it in terms of a new technology – for instance, a newly popular language.

And talking of languages, there is also dissemination through translation. That’s one thing that working in a European context keeps high in one’s mind.

I’ve written before about management of learning technology standardization in Europe and about developments in TC353, the committee responsible for ICT in learning, education and training.

And how could a relevant qualification and course be developed?

There are several other questions whose answers would be relevant to motivating or setting up a course. Maybe some of my colleagues or readers have answers. If so, please comment!

  • As a motivation for development, how can we measure the economic value of standards, to companies and to the wider economy? There must be existing research on this question, but I am not familiar with it.
  • What might be the market for such courses? Which individuals would be motivated enough to devote their time, and what organisations (including governmental) would have an incentive to finance such courses?
  • Where might such courses fit? Perhaps as part of a technology MSc/MBA in a leading HE institution or business school?
  • How would we develop a curriculum, including practical experience?
  • How could we write good intended learning outcomes?
  • How would teaching and learning be arranged?
  • Who would be our target learners?
  • How would the course outcomes be assessed?
  • Would people with such a qualification be of value to standards developing organisations, or elsewhere?

I would welcome approaches to collaboration in developing any learning opportunity in this space.

And more widely

Looking again at these questions, I wonder whether there is something more general to grasp. Try reading it over again, substituting for “standard” other terms such as “agreement”, “law”, “norm” (which already has a dual meaning), “code of conduct”, “code of practice”, or “policy”. Many considerations about standards seem to touch these other concepts as well. All of them could perhaps be seen as formulations or expressions, guiding or governing interaction between people.

And if there is much common ground between the development of all of these kinds of formulation, then learning about standardization might well be adapted to learn knowledge, skills, competence, attitudes and values that are useful in many walks of life, but particularly in the emerging economy of open co-operation and collaboration on the commons.

ICOPER and outcomes

The other European project I’m involved in for CETIS is called ICOPER. Over the last couple of weeks I’ve been doing some work improving the deliverable D2.2, mainly working with Jad Najjar. I flag it here because it uses some of the conceptual modelling work I’ve been involved in. My main direct contribution is Section 2. This starts with part of an adaptation of my diagram in a recent post here. It is adapted by removing the part on the right, for recognition, as that is of relatively minor importance to ICOPER. As ICOPER is focused on outcomes, the “desired pattern” is relabelled as “intended learning outcome or other objective”. I thought this time it would be clearer without the groupings of learning opportunity or assessment. And ICOPER is not really concerned with reflection by individuals, so that is omitted as well.

In explaining the diagram, I explain what the different colours represent. I’m still waiting for critique (or reasoned support, for that matter) of the types of thing I find so helpful in conceptual modelling (again, see previous post on this).

As I find so often, detailed thinking for any particular purpose has clarified one part of the diagram. I have introduced (and will bring back into the mainstream of my modelling) an “assessment result pattern”. I recognise that logically you cannot specify actual results as pre-requisites for opportunities, but rather patterns, such as “pass” or “at least 80%” for particular assessments. It takes a selection process (which I haven’t represented explicitly anywhere yet) to compare actual results with the required result pattern.
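The “assessment result pattern” and the selection process that compares it with actual results can be sketched in code. This is a minimal illustration only: the names (`AssessmentResult`, `ResultPattern`, `selection`) and the matching rules are my own assumptions, not drawn from the D2.2 deliverable.

```python
# Minimal sketch of the "assessment result pattern" idea. All names here
# are illustrative assumptions, not taken from any specification.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssessmentResult:
    """An actual result achieved by a learner in a named assessment."""
    assessment: str
    score: float  # e.g. a percentage
    grade: str    # e.g. "pass" or "fail"

@dataclass
class ResultPattern:
    """A required pattern such as 'at least 80%' or 'pass' for an assessment."""
    assessment: str
    min_score: Optional[float] = None
    required_grade: Optional[str] = None

def selection(result: AssessmentResult, pattern: ResultPattern) -> bool:
    """The selection process: compare an actual result with the required pattern."""
    if result.assessment != pattern.assessment:
        return False
    if pattern.min_score is not None and result.score < pattern.min_score:
        return False
    if pattern.required_grade is not None and result.grade != pattern.required_grade:
        return False
    return True

result = AssessmentResult("Maths 101", score=85.0, grade="pass")
print(selection(result, ResultPattern("Maths 101", min_score=80.0)))  # True
```

The point is simply that a pre-requisite names a pattern, not a result; only the selection process brings the two together.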

Overall, this section 2 of the deliverable explains quite a lot about a part of the overall conceptual model intended to be at least approximately from the point of view of ICOPER. The title of this deliverable, “Model for describing learning needs and learning opportunities taking context ontology modelling into account” was perhaps not what would have been chosen at the time of writing, but we needed to write to satisfy that title. Here, “learning needs” is understood as intended learning outcomes, which is not difficult to cover as it is central to ICOPER.

The deliverable as a whole continues with a review of MLO, the prospective European Standard on Metadata for Learning Opportunities (Advertising), to get in the “learning opportunities” aspect. Then it goes on to suggest an information model for “Learning Outcome Definitions”. This is a tricky one, as one cannot really avoid IMS RDCEO and IEEE RCD. As I’ve argued in the past, I don’t think these are really substantially more helpful than just using Dublin Core, and in a way the ICOPER work here implicitly recognises this, in that even though they still doff a cap to those two specs, most of RDCEO is “profiled” away, and instead a “knowledge / skill / competence” category is added, to square with the concepts as described in the EQF.

Perhaps the other really interesting part of the deliverable was one into which we put quite a lot of joint thinking. Jad came up with the title “Personal Achieved Learning Outcomes” (PALO), which is fine for what is intended to be covered here. What we have come up with (provisionally, it must be emphasised) is a very interesting mixture of bits that correspond to the overall conceptual model, with the addition of the kind of detail needed to turn a conceptual model into an information or data model. Again, not surprisingly, this raises some interesting questions for the overall conceptual model. How does the concept of achievement (in this deliverable) relate to the overall model’s “personal claim expression”? This “PALO” model is a good effort towards something that I haven’t personally written much about – how do you represent context in a helpful way for intended learning outcomes or competences? If you’re interested, see what you think. For most skills and competences, one can imagine several aspects of context that are really meaningful, and without which describing things would definitely lose something. Can you do it better?

I hope I’ve written enough to stimulate a few people at least to skim through that deliverable D2.2.

Development of a conceptual model 5

This conceptual model now includes basic ideas about what goes on in the individual, plus some of the most important concepts for PDP and e-portfolio use, as well as the generalised formalisable concepts and processes surrounding individual action. It has come a long way since the last time I wrote about it.

The minimised version is here, first… (recommended to view the images below separately, perhaps with a right-click)

[image: eurolmcm25-min3]

and that is complex enough, with so many relationship links looking like a bizarre and distorted spider’s web. Now for the full version, which is quite scarily complex now…

[image: eurolmcm25]

Perhaps that is the inevitable way things happen. One thinks some more. One talks to some more people. The model grows, develops, expands. The parts connected to “placement processes” were stimulated by Luk Vervenne’s contribution to the workshop in Berlin of my previous blog entry. But — and I find it hard to escape from this — much of the development is based on internal logic, and just looking at it from different points of view.

It still makes sense to me, of course, because I’ve been with it through its growth and development. But is there any point in putting such a complex structure up on my blog? I do not know. It’s reached the stage where perhaps it needs turning into a paper-length exposition, particularly including all the explanatory notes that you can see if you use CmapTools, and breaking it down into more digestible, manageable parts. I’ve put the CXL file and a PDF version up on my own concept maps page. I can only hope that some people will find this interesting enough to look carefully at some of the detail, and comment… (please!) If you’re really interested, get in touch to talk things over with me. But the thinking will in any case surface in other places. And I’ll link from here later if I do a version with comments that is easier to get at.

Development of a conceptual model 4

This version of the conceptual model (of learning opportunity provision + assessment + award of credit or qualification) uses the CmapTools facility for grouping nodes; and it further extends the use of my own “top ontology” (introduced in my book).

There are now two diagrams: a contracted and an expanded version. When you use CmapTools, you can click on the << or >> symbols, and the attached box will expand to reveal the detail, or contract to hide it. This grouping was suggested by several people in discussion, particularly Christian Stracke. Let’s look at the two diagrams first, then go on to draw out the other points.

[image: eurolmcm13-contracted1]

You can’t fail to notice that this is remarkably simpler than the previous version. What is important is to note the terms chosen for the groupings. It is vital to the communicative effectiveness of the pair of diagrams that the term for the grouping represents the things contained by the grouping, and in the top case — “learning opportunity provision” — it was Cleo Sgouropoulou who helped find that term. Most of the links seem to work OK with these groupings, though some are inevitably less than fully clear. So, on to the full, expanded diagram…

[image: eurolmcm13-expanded1]

I was favourably impressed with the way in which CmapTools allows grouping to be done, and how the tools work.

Mainly the same things are there as in the previous version. The only change is that, instead of having one blob for qualification, and one for credit value, both have been split into two. This followed on from being uncomfortable with the previous position of “qualification”, where it appeared that the same thing was wanted or led to, and awarded. It is, I suggest, much clearer to distinguish the repeatable pattern — that is, the form of the qualification, represented by its title and generic properties — and the particular qualification awarded to a particular learner on a particular date. I originally came to this clear distinction, between patterns and expressions, in my book, when trying to build a firmer basis for the typology of information represented in e-portfolio systems. But in any case, I am now working on a separate web page to try to explain it more clearly. When done, I’ll post that here on my blog.

A pattern, like a concept, can apply to many different things, at least in principle. Most of the documentation surrounding courses, assessment, and the definitions about qualifications and credit, are essentially repeatable patterns. But in contrast, an assessment result, like a qualification or credit awarded, is in effect an expression, relating one of those patterns to a particular individual learner at a particular time. They are quite different kinds of thing, and much confusion may be caused by failing to distinguish which one is talking about, particularly when discussing things like qualifications.
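The pattern/expression distinction can be made concrete in code. As a hedged sketch (the class names `QualificationPattern` and `QualificationAwarded` are mine, invented for illustration, not from any specification):

```python
# Illustrative only: my own names for the pattern/expression distinction.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class QualificationPattern:
    """The repeatable pattern: a title and generic properties, shared by many awards."""
    title: str
    level: str

@dataclass(frozen=True)
class QualificationAwarded:
    """An expression: relates one pattern to one learner at one particular time."""
    pattern: QualificationPattern
    learner: str
    awarded_on: date

bsc = QualificationPattern("BSc (Hons) Computing", level="6")
# One repeatable pattern, two distinct awards (expressions):
a1 = QualificationAwarded(bsc, "Alice", date(2009, 7, 1))
a2 = QualificationAwarded(bsc, "Bob", date(2009, 7, 1))
assert a1.pattern is a2.pattern and a1 != a2
```

The design point is that the pattern carries no reference to any learner or date; only the expression does, which is exactly the confusion the diagram tries to avoid.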

These distinctions between types of thing at the most generic level are what I am trying to represent with the colour and shape scheme in these diagrams. You could call it my “top ontology” if you like, and I hope it is useful.

CmapTools is available free. It has been a great tool for me, as I don’t often get round to diagrams, but CmapTools makes it easy to draw the kinds of models I want to draw. If you have it, you might like to try finding and downloading the actual maps, which you can then play with. Of course, there is only one, not two; but I have put it in both forms on the ICOPER Cmap server, and also directly in CXL form on my own site. If you do, you will see all the explanatory comments I have made on the nodes. Please feel free to send me back any elaborations you create.

Development of a conceptual model 3

I spent 3 days in Lyon this week, in meetings with European project colleagues and learning technology standardization people. This model had a good airing, and there was lots of discussion and feedback. So it has developed quite a lot over the three days from the previous version.
[image: eurolmcm12]

So, let’s start at the top left. The French contingent wanted to add some kind of definition of structure to the MLO (Metadata for Learning Opportunities) draft CWA (CEN Workshop Agreement) and it seemed like a good idea to put this in somewhere. I’ve added it as “combination rule set”. As yet we haven’t agreed its inclusion, let alone its structure, but if it is represented as a literal text field just detailing what combinations of learning opportunities are allowed by a particular provider, that seems harmless enough. A formal structure can await future discussion.

Still referring to MLO, the previous “assessment strategy” really only related to MLO and nothing else. As it was unclear from the diagram what it was, I’ve taken it out. There is usually some designed relationship between a course and a related assessment, but though perhaps ideally the relationship should be through intended learning outcomes (as shown), it may not be so — in fact it might involve those combination rules — so I’ve put in a dotted relationship “linked to”. The dotted relationships are meant to indicate some caution: in this case its nature is unclear; while the “results in” relationship is really through a chain of other ones. I’ve also made dotted the relationship between a learning opportunity specification and a qualification. Yes, perhaps the learning opportunity is intended to lead to the award of a qualification, but that is principally the intention of the learning opportunity provider, and may vary with other points of view.

Talking about the learning opportunity provider, discussion at the meetings, particularly with Mark Stubbs, suggested that the important relationships between a provider and a learning opportunity specification are those of validation and advertising. And the simple terms “runs” and “run by” seem to express reasonably well how a provider relates to an instance. I am suggesting that these terms might replace the confusingly ambiguous “offer” terminology in MLO.

Over on the right of the diagram, I’ve tidied up the arrows a bit. The Educational Credit Information Model CWA (now approved) has value, level and scheme on a par, so I thought it would be best to reflect that in the diagram with just one blob. Credit transfer and accumulation schemes may or may not be tied to wider qualifications frameworks with levels. I’ve left that open, but represented levels in frameworks separately from credit.

I’ve also added a few more common-sense relationships with the learner, who is and should be central to this whole diagram. Learners aspire to vague things like intended learning outcomes as well as specific results and qualifications. They get qualifications. And how do learners relate to learning opportunity specifications? One would hope that they would be useful for searching, for investigation, as part of the process of a learner deciding to enrol on a course.

I’ve added a key in the top right. It’s not quite adequate, I think, but I’m increasingly convinced that this kind of distinction is very helpful and important for discussing and agreeing conceptual models. I’m hoping to revisit the distinctions I made in my book, and to refine the key so that it is even clearer what kind of concept each one is.

Development of a conceptual model 2

As promised, the model is gently evolving from the initial one posted.

[image: eurolmcm111]

Starting from the left, I’ve added a “creates” relationship between the assessing body and the assessment specification, to mirror the one for learning. Then, I’ve reversed the arrows and amended the relationship captions accordingly, for some of the middle part of the diagram. This is to make it easier to read off scenarios from the diagram. Of course, each arrow could be drawn in either direction in principle, just by substituting an inverse relationship, but often one direction makes more sense than the other. I’ve also amended some other captions for clarity.

An obvious scenario to read off would be this: “The learner enrols on a course, which involves doing some activities (like listening, writing, practical work, tests, etc.) These activities result in records (e.g. submitted coursework) which are assessed in a process specified by the assessing body, designed to evaluate the intended learning outcomes that are the objectives of the course. As a result of this summative assessment, the awarding body awards the learner a qualification.” I hope that one sounds plausible.
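That scenario chain can itself be read off as a small data sketch. Everything here (the class names, and especially the trivial stand-in assessment rule) is invented for illustration, not part of the model itself:

```python
# Sketch of the scenario chain: course -> activities -> records -> assessment -> award.
# All names and the toy assessment rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Course:
    title: str
    intended_outcomes: List[str]  # the objectives of the course

@dataclass
class Record:
    """A record resulting from an activity, e.g. submitted coursework."""
    activity: str
    content: str

@dataclass
class Learner:
    name: str
    records: List[Record] = field(default_factory=list)

def assess(records: List[Record], outcomes: List[str]) -> bool:
    """The assessing body's process: evaluate records against intended outcomes.
    Toy stand-in rule: at least one record per intended outcome."""
    return len(records) >= len(outcomes)

def award(learner: Learner, course: Course) -> Optional[str]:
    """The awarding body: award a qualification if the summative assessment passes."""
    if assess(learner.records, course.intended_outcomes):
        return f"{course.title} awarded to {learner.name}"
    return None

course = Course("Intro to Standards", ["explain consensus process"])
alice = Learner("Alice", [Record("coursework", "essay")])
print(award(alice, course))  # Intro to Standards awarded to Alice
```

Reversing the arrows in the diagram corresponds to choosing which object holds the reference here; either direction is expressible, but one usually reads more naturally.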

The right hand side of the diagram hadn’t had much attention recently. To simplify things a little, I decided that level and framework are so tightly joined that there is no need to separate them in this model. Then, mirroring the idea that a learner can aspire to an assessment outcome, it’s natural also to say that a learner may want a qualification. And what happens to credits after they have been awarded? They are normally counted towards a qualification — but this has to be processed, it is not automatic, so I’ve included that in the awarding process.

I’m still reasonably happy with the colour and shape scheme, in which yellow ovals are processes or activities (you can ask, “when did this happen?”), green things are parts of the real world, things that have concrete existence; and blue things are information.

Development of a conceptual model

Reflecting on the challenging field of conceptual models, I thought of the idea of exposing my evolving conceptual model that extends across the areas of learner mobility, learning, evaluation/assessment, credit, qualifications and awards, and intended learning outcomes — which could easily be detailed to cover knowledge, skill and competence.

[image: eurolmcm10]

This is more or less the whole thing as it is at present. It will evolve, and I would like that to illustrate how a model can evolve as a result of taking into account other ideas. It also wants a great deal of explanation. I invite questions as comments (or directly) so that I can judge what explanation is helpful. I also warmly welcome views that might be contrasting, to help my conceptual model to grow and develop.

It originates in work with the European Learner Mobility team specifying a model for European Learner Mobility documents — that currently include the Diploma Supplement (DS) and Certificate Supplement. This in turn is based on the European draft standard Metadata for Learning Opportunities (MLO), which is quite similar to the UK’s (and CETIS’s) XCRI. (Note: some terminology has been modified from MLO.) Alongside the DS, the model is intended to cover the UK’s HEAR — Higher Education Achievement Report. And the main advance from previous models of these things, including transcripts of course results, is that it aims to cover intended learning outcomes in a coherent way.

This work is evolving already with valued input from colleagues in the various groups I talk to, but I wanted to publish it here so that anyone can contribute, and anyone in any of those groups can refer to it and pass it round — even if as a “straw man”.

It would have been better to start from the beginning, so that I could explain the origin of each part. However that is not feasible, so I will have to be content with starting from where I am, and hoping that the reasoning supporting each feature will become clear in time, as there is an interest. Of course, at any time, the reasoning may not adequately support the feature, and on realising that I will want to change the model.

Please comment if there are discrepancies between this model and your model of the same things, and we can explore the language expressing the divergence of opinion, and the possibility for unification.

Obviously this relates to the SC36 model I discussed yesterday.

See also the next version.