Open : data : co-op

A very interesting event in Manchester on Monday (2014-10-20) called “Open : Data : Cooperation” focused on the idea of “building a data cooperative”: the cooperative management of personal information.

Related ideas have been going round for a long time. In 1999 I first came across a formulation of the idea of managing personal information in the book “Net Worth”. Ten years ago I started talking about personal information brokerage with John Harrison, who has devoted years to this cause. In 2008, Michel Bauwens was writing about “The business case for a User Data Commons”.

A simple background story emerges from following the money. People spend money, whether their own or other people’s, and influence others in their spending of money. Knowing what people are ready to spend money on is valuable, because businesses with something to sell can present their offerings at an opportune moment. Thus, information which might be relevant to anyone buying anything is valuable, and can be sold. Naturally, the more money is at stake, the higher the price of information relevant to that purchase. Some information about a person can be used in this way over and over again.

Given this, it should be possible for people themselves to profit from giving information about themselves. And in small ways, they already do: store cards give a little return for the information about your purchases. But once the information is gathered by someone else, it is open for sale to others. One worry is that, maybe in the future if not right away, the information might enable the “wrong” people to know what you are doing, when you don’t want them to know.

Can an individual manage all that information about themselves better, both to keep it out of the wrong hands, and to get a better price for it from those to whom it is entrusted? Maybe; but it looks like a daunting task. As individuals, we generally don’t bother. We give away information that looks trivial, perhaps, for very small benefits, and we lose control of it.

It’s a small step from these reflections to the idea of people grouping together, the better to control data about themselves. What they can’t practically do separately, there is a chance of doing collectively, with enough efficiencies of scale to make it worthwhile, financially as well as in terms of peace of mind. You could call such a grouping a “personal data cooperative” or a “personal information mutual”, or any of a range of similar names.

Compared with gathering and holding data that is in the public domain, managing personal information is much more challenging. There are the minefields of privacy law, such as the Data Protection Act in the UK.

In Manchester on Monday we had some interesting “lightning” talks (I gave one myself – here are the slides on Slideshare), people wrote sticky notes on relevant topics they were concerned about, and six areas were highlighted for discussion:

  • security
  • governance
  • participation & inclusivity
  • technical
  • business model
  • legislative

I joined the participation and the technical group discussions. Both fascinated me, in different ways.

The participation discussion led to thoughts about why people would join a cooperative to manage their personal data. They need specific motivation, which could come from the kind of close-knit networks that deal with particular interests. There are many examples of closely knit on-line groups around social or political campaigns, about specific medical issues, or other matters of shared personal concern. Groups of these kinds may well generate enough trust for people to share their personal information, but they are generally not large enough to have much commercial impact, so they might struggle to be sustainable as personal data co-ops. What if, somehow, a whole lot of these minority groups could get together in an umbrella organisation?

Curiously, this has much in common with my personal living situation in a cohousing project. Despite many people’s yearnings (if not cravings) for secure acceptance of their minority positions, to me it looks like our cohousing project is too large and diverse a group for any one “cause” to be a key part of the vision for everyone. What we realistically have is a kind of umbrella in which all these good and worthy causes may thrive. Low carbon footprints; local, organic food; veganism; renewable energy; they’re all here. All these interest groups live within a co-operative kind of structure, where the governance is as far as possible by consensus.

So, my current living situation has resonances with this “participation” – and my current work is highly relevant to the “technical” discussion. But the technical discussion proved to be hard!

If you take just one area of person-related information, and manage to create a business model using that information, the technicalities start to be conceivable.

For instance, Cetis (particularly my colleague Scott Wilson) has been involved in the HEAR (Higher Education Achievement Report) for quite some time. Various large companies are interested in using the HEAR for recruiting graduates. Sure, that’s not a cooperative scenario, but it does illustrate a genuine business case for using personal data gathered from education. Then one can think about how that information is structured; how it is represented in some transferable format; how the APIs for fetching such information should work. There is definite progress in this direction for HEAR information in the UK – I was closely involved in the less established but wider European initiative around representing the Diploma Supplement, and more can be found under the heading European Learner Mobility.

While the HEAR is progressing towards viability, the “ecosystem” around learner information more widely is not very mature, so there are still questions about how effective our current technical formats are. I’ve been centrally involved in two efforts towards standardization: Leap2A and InLOC. Both have included discussion about the conceptual models, which has never been fully resolved.

More mature areas are more likely to have stable technical solutions. Less mature areas may not have any generally agreed conceptual, structural models for the data; there may be no established business models for generating revenues or profits; and there may be no standards specifically designed for the convenient representation of that kind of data. Generic standards like RDF can cover any linked data, but they are not necessarily convenient or elegant, and may or may not lead to workable practical applications.
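To illustrate that last point: any of this data can be expressed in a generic linked-data form, but the result is not necessarily a convenient working format for applications. A minimal sketch using the rdflib library, with an entirely invented vocabulary:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Entirely invented vocabulary, purely to illustrate generic linked data.
EX = Namespace("https://example.org/terms/")
g = Graph()

award = URIRef("https://example.org/awards/1234")
g.add((award, RDF.type, EX.QualificationAward))
g.add((award, EX.awardedTo, URIRef("https://example.org/people/jo")))
g.add((award, EX.title, Literal("BSc (Hons) Computing")))

# The triples are valid RDF, but nothing here tells a consuming application
# how to interpret "QualificationAward" or "awardedTo" consistently.
print(g.serialize(format="turtle"))
```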

Data sources mentioned at this meeting included:

  • quantified self data – all about your physiological data, and possibly related information
  • energy (or other utility) usage data – coming from smart meters in the home
  • purchasing data – from store cards and online shops
  • communication data – perhaps from your mobile device
  • learner information – in conjunction with learning technology, as I introduced

I’m not clear how mature any of these particular areas are, but they all could play a part in a personal data co-op. And because of the diversity of this data, as well as its immaturity, there is little one can say in general about technical solutions.

What we could do is set out a strategy for working towards technical solutions. It might go something like this (an illustrative sketch follows the list).

  1. Agree the scope of the data to be held.
  2. Work out a viable business model with that data.
  3. Devise models of the data that are, as far as possible, intuitively understandable to the various stakeholders.
  4. Consider feasible technical architectures within which this data would be used.
  5. Start considering APIs for services.
  6. Look at existing standards, including generic ones, to see whether any existing standard might suffice. If so, try using it, rather than inventing a new one.
  7. If there really isn’t anything else that works, get together a good, representative selection of stakeholders, with experience or skill in consensus standardization, and create your new standard.
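As a purely illustrative sketch of steps 3 to 5 for one narrow scope (say, home energy usage), here is the kind of data model and service interface a co-op might start from. Every name here is a hypothetical assumption, not a proposal.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Step 3: a minimal, intuitive model for one narrow data scope (hypothetical).
@dataclass
class EnergyReading:
    member_id: str          # pseudonymous identifier held by the co-op
    meter_id: str
    taken_at: datetime
    kilowatt_hours: float

# Step 5: the kind of API a co-op service might expose (hypothetical names).
class EnergyDataService:
    def __init__(self) -> None:
        self._store: List[EnergyReading] = []

    def deposit(self, reading: EnergyReading) -> None:
        """A member (or their smart meter) deposits a reading with the co-op."""
        self._store.append(reading)

    def aggregate_for_buyer(self, month: int, year: int) -> float:
        """Only aggregated, non-identifying totals are released to buyers."""
        return sum(r.kilowatt_hours for r in self._store
                   if r.taken_at.month == month and r.taken_at.year == year)
```

The point of the sketch is only that, once the scope and business model are fixed, the data model and the service boundary tend to follow fairly naturally.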

It’s all a considerable challenge. We can’t ignore the technical issues, because ignoring them is likely to lead just to good ideas that don’t work in practice. On the other hand, solving the technical issues is far from the only challenge in personal data co-ops. Long experience with Cetis suggests that the technical issues are relatively easy, compared to the challenges of culture and habit.

Give up, then? No, to me the concept remains very attractive and worth working on. Collaboratively, of course!

Badges for singers

We had a badges session last week at the CETIS conference (here are some summary slides and my slides on some requirements). I’d like to reflect on that, not directly (as Phil has done), but by looking ahead at how a badge system for a leisure activity might be put together.

In the discussion part of our conference session, we looked at two areas of potential application of badges. First, for formative assessment in high-stakes fields (such as medicine); second, for communities of practice such as the ones CETIS facilitates, still in the general area of work. What we didn’t look at was badges for leisure or recreation. The Mozilla Open Badges working paper makes no distinction between badges for skills that are explicitly about work and badges for skills that are not obviously about work, so looking at leisure applications complements the conference discussion nicely, while providing an example to think through many of the main issues with badges.

The worked example that follows builds on my personal knowledge of one hobby area, but is meant to be illustrative of many. Please think of your own favourite leisure activities.

Motivation

On returning from the conference, on the very same day as the badges session, it happened to be a rehearsal evening for the small choir I currently sing with. So what could be more natural for me to think about than a badge system for singing? The sense of a need for such a system has continued to grow on me. Many people can sing well enough to participate in a choir. While learners of classical instruments have “grade” examinations and certificates indicating the stages of mastery of an instrument, there is no commonly used equivalent for choral singing. Singing is quite a diverse activity, with many genres as well as many levels of ability. The process of established groups and singers getting together tends to be slow and subject to chance, but worse, forming a new group is quite difficult unless there is some existing larger body (school, college, etc.) that all the singers belong to.

Badges for singers might possibly help in two different ways. First, badges can mark attainment. A system of attainment badges could help singers find groups and other singers of the right standard for them to enjoy singing with. It may be worthy, but not terribly exciting, to sing with a group at a lower level, and one can feel out of one’s depth or embarrassed singing with people a lot more accomplished. So, when a group looks for a new member, it could specify the levels of any particular skills that were expected, as well as the type of music sung. This wouldn’t necessarily remove the need for an audition, but it would help the right kind of singer to consider the choir. Compared with current approaches, including singers hearing a choir performing and then asking about vacancies, or learning of openings through friends, a badge system could well speed things up. But perhaps the greatest benefit would be to singers trying to start new groups or choirs, where there is no existing group to hear about or go to listen to. Here, a badge system could make all the difference between getting a new group together and failing to.

Second, the badges could provide a structured set of goals that would help motivate singers to broaden and improve their abilities. This idea of motivating steps in a pathway is a strong theme in the Open Badges documentation. There must be singers at several levels who would enjoy and gain satisfaction from moving on, up a level maybe. In conjunction with groups setting out their badge requirements, badges in the various aspects of choral singing would at the very least provide a framework within which people could more clearly see what they needed to gain experience of and practise, in order to join the kind of group they really want.

By the way, I recognise that not all singing groups are called “choirs”. Barbershop groups tend to be “choruses”, while very small groups are more often called “ensembles”; but for simplicity here I use the term “choir” to refer to any singing group.

Teachers / coaches

Structured goals lead on to the next area. If there were a clear set of badged achievements to aim for, then the agenda for coaches, tutors, and the like would be more transparent. This might not produce a great increase in demand for paid tuition (and therefore “economic” activity), but it might well be helpful for amateur coaching. Either way, a clear set of smaller, more limited goals on tried and tested pathways would provide a time-honoured approach to achieving greater goals, with whatever amount of help from others is needed.

Badge content

I’ve never been in charge of a choir for more than a few songs, but I do have enough experience to make a reasonable guess at what choirmasters and other singers want from people who want to join them. First, there are the core singing skills, which might be broken down, for example, like this:

  • vocal range and volume (most easily classified as soprano / alto / tenor / bass)
  • clarity and diction
  • voice quality and expressiveness (easy to hear in others, but hard to measure)
  • ability to sing printed music at first sight (“sight-singing”)
  • attentiveness to and blend with other singers
  • ability to sing a part by oneself
  • speed at learning one’s part if necessary
  • responsiveness to direction during rehearsal and performance
  • specialist skills

It wouldn’t be too difficult to design a set of badges that expressed something like these abilities, but this is not the time to do that job, as such a structure needs to reflect a reasonable consensus involving key stakeholders.
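To make that concrete without pre-empting any consensus, here is a rough sketch of how two of the core skills might be expressed as badge definitions, loosely in the spirit of an Open Badges badge class; the names, levels, criteria and issuer URL are my own illustrative assumptions.

```python
# A rough, illustrative sketch of badge definitions for two of the core skills.
# Field names loosely follow the Open Badges idea of a badge class; the levels
# and criteria are assumptions, not an agreed structure.
SINGING_BADGES = [
    {
        "name": "Sight-singing, level 2",
        "description": "Can sing a simple printed part at first sight.",
        "criteria": "Sings an unseen part of modest difficulty with few errors.",
        "issuer": "https://example.org/singing-coop",   # hypothetical issuer
    },
    {
        "name": "Part independence, level 1",
        "description": "Can hold a part alone against one other part.",
        "criteria": "Sings own part unaccompanied while another part is sung.",
        "issuer": "https://example.org/singing-coop",
    },
]
```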

Then there are other personal attributes, not directly related to singing, that are desirable in choir members, e.g.:

  • reliability of attendance at rehearsals and particularly performances
  • helpfulness to other choir members
  • diligence in preparation

Badges for these could look a little childish, but as a badge system for singing would be entirely voluntary, perhaps there would be no harm in defining them, for the benefit of junior choirs at least.

Does this cover everything? It may or may not cover most of what can be improved — those things that can be better or not so good — but there is one other area that is vital for the mutual satisfaction of choir and singer. Singers have tastes in music; choirs have repertoires or styles they focus on. To get an effective matching system, style and taste would have to be represented.

Assessing and awarding badges

So who would assess and award these badges? The examinations for playing musical instruments (indeed including solo singing) are managed by bodies such as the ABRSM. These exams have a very long history, and are often recognised, e.g. for entry to musical education institutions. But choral singers usually want to enjoy themselves, not gain qualifications so they can become professional musicians. They are unlikely to want to pay for exams for what is just a hobby. That leaves three obvious options: choirmasters, fellow singers, and oneself.

In any case, the ABRSM and similar bodies already have their own certificates and records. A badge system for them would probably be just a new presentation of what currently exists. The really interesting challenge is to consider how badges can work effectively without an official regulating body.

On deeper consideration, there really isn’t much to choose between choirmasters and fellow singers as the people who award choral singing badges. There is nothing to stop any singer being a choirmaster, anyway. There is not much incentive for people to misrepresent their choral singing skills: as noted before, it’s not much fun being in a choir of the wrong standard, nor singing music one doesn’t much like. So, effectively, a badge system would have the job of making personal and choir standards clear.

There is an analogy here with language skills, which are closely related in any case. The Europass Language Passport is designed to be self-assessed, with people judging their own ability against a set of criteria that were originally defined by the Council of Europe. The levels — A1 to C2 — all have reasonably clear descriptors, and one sees people describing their language skills using these level labels increasingly often.

This is all very well if people can do this self-assessment accurately. The difficulty is that some of the vital aspects of choral singing are quite hard to assess by oneself. Listening critically to one’s own voice is not particularly easy when singing in a group. It might be easier if recording were more common, but again, most people are unfamiliar with the sound of their own voice, and may be uncomfortable listening to it.

On the other hand, we don’t want people, in an attempt to be “kind”, to award each other badges over-generously. We could hope that dividing up the skills into enough separate badges would mean that there would be some badges for everyone, and no one need be embarrassed by being “below average” in some ways. Everyone in a choir can have a choir membership badge, which says something about their acceptance and performance within the choir as a whole. Then perhaps all other choir members can vote anonymously about the levels which others have reached. Some algorithm could be agreed for when to award badges based on peer votes.
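As a toy illustration of the kind of algorithm that might be agreed, one possibility is to require a minimum number of peer votes and award the level that a clear majority supports; the quorum and threshold below are arbitrary assumptions.

```python
from collections import Counter
from typing import List, Optional

def award_level_from_peer_votes(votes: List[int],
                                quorum: int = 5,
                                threshold: float = 0.6) -> Optional[int]:
    """Toy peer-voting rule: award the most-voted level if enough peers
    voted and a clear majority agrees; otherwise award nothing yet.
    The quorum and threshold values are arbitrary assumptions."""
    if len(votes) < quorum:
        return None
    level, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= threshold:
        return level
    return None

# Example: seven fellow singers vote on a member's sight-singing level.
print(award_level_from_peer_votes([2, 2, 3, 2, 2, 2, 1]))  # -> 2
```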

The next obvious thing would be to give badges to the choir as a whole. Choirs have reputations, and saying that one has sung in a particular choir may mean something. This could be done in several ways, all involving some external input. Individual singers (and listeners) could compare the qualities of different choirs in similar genres. Choral competitions are another obvious source of expert judgement.

Setting up a badge system

The more detailed questions come to a head in the setting up of an actual badge system. The problem would be not only the ICT architecture (for which Mozilla Open Badges is a working prototype) but also the organisational arrangements for creating the systems around badges for singers. Now, perhaps, we can see more clearly that the ICT side is relatively easy. This is something we are very familiar with in CETIS. The technology is hardly ever the limiting factor — it is the human systems.

So here are some questions or issues (among possibly many more) that would need to be solved, not necessarily in this order.

  • Who would take on responsibility for this project as a whole? Setting up a badge system is naturally a project that needs to be managed.
  • Who hosts the information?
  • How is the decision made about what each badge will be, and how it is designed?
  • How would singers and choirs be motivated to sign up in the first place?
  • If a rule is set for how badges are to be awarded, how is this enforced, or at least checked?
  • Is Mozilla Open Badges sufficient technical infrastructure, and if not, who decides what is?
  • Could this system be set up on top of other existing systems? (Which, or what kind?)

Please comment with more issues that need to be solved. I’ll add them if they fit!

Business case

And how does such a system run, financially? The beneficiaries would primarily be choirs and singers, and perhaps indirectly people who enjoy listening to live choral music. Finding people or organisations in whose financial interests this would be seems difficult. So it would probably be useful for the system to run with minimal resources.

One option might be to offer this as a service to one or more membership organisations that collect fees from members, or alternatively as an added service that has synergy with an existing paid-for service. However, the obvious approach of limiting the service to paid members would work against its viability in terms of numbers. In that case, the service would in effect be advertising promoting the organisation. Following the advertising theme, it might be seen as reasonable for users who do not already pay membership to receive adverts from sellers of music or organisers of musical events, which could provide an adequate income stream. The nice thing is that the kind of information that it makes sense for individuals to enter, to improve the effectiveness of the system, could well be used to target adverts more effectively.

Would this be enough to make a business case? I hope so, as I would like to use this system!

Reflection

I hope that this example illustrates some of the many practical and possibly thorny issues that stand in the way of implementing a real working badge system, and that these issues are not primarily technical. What would be really useful would be to have a working technical infrastructure available, so that at least some of the technical issues are dealt with in advance. As I wrote in comments on a previous post, I’m not convinced that Mozilla Open Badges does the job properly, but at least it is a signpost in the right direction.

Badges – another take

Badges can be seen as recognisable tokens of status or achievement. But tokens don’t work in a vacuum: they depend on other things to make them work. Perhaps looking at these may help us understand how they might be used, both for portfolios and elsewhere.

Rowin wrote a useful post a few weeks ago, and the topic has retained a buzz. Taking this forward, I’d like to discuss specifically the aspects of badges — and indeed any other certificate — relevant both to portfolio tools and to competence definitions. Because the focus here is on badges, I’ll use the term “badge” occasionally to include what is normally thought of as a certificate.

A badge, by being worn, expresses a claim to something. Some real badges may express the proposition that the wearer is a member of some organisation or club. Anyone can wear an “old school tie”, but how does one judge the truth of the claim to belong to a particular alumni group? Much upset can be caused by the misleading wearing of medals, in the same way as badges.

Badges could often do with a clarification of what is being claimed. (That would be a “better than reality” feature.) Is my wearing a medal a statement that I have been awarded it, or is it just in honour of the dead relative who earned it? Did I earn this badge on my own, was I helped towards it, or am I just wearing it because it looks “cool”? An electronic badge, e.g. on a profile or e-portfolio, can easily link to an explicit claim page including a statement of who was awarded the badge, and when, beyond information about what the badge is awarded for. These days, a physical badge could carry a QR code so that people can scan it and be taken to the same claim page.

If the claim is, for example, simply to “be” a particular way, or to adhere to some opinion, or perhaps to support some team (in each case where the natural evidence is just what the wearer says), then probably no more is needed. But most badges, at least those worn with pride, represent something more than that the wearer self-certifies something. Usually, they represent something like a status awarded by some other authority than the wearer, and to be worth wearing, they show something that the wearer has, but might not have had, which is of some significance to the intended observers.

If a badge represents a valued status, then clearly badges may be worn misleadingly. To counter that, there will need to be some system of verification, through which an observer can check on the validity of the implied claim to that status. Fortunately, this is much easier to arrange with an electronic badge than a physical one. Physical badges really need some kind of regulatory social system around them, often largely informal, that deters people from wearing misleading badges. If there is no such social system, we are less in the territory of badges, and more of certificates, where the issues are relatively well known.

When do you wear physical badges? When I do, it is usually a conference, visitor or staff badge. Smart badges can be “swiped” in some way, and that could, for instance, lead to a web page on the authority’s web site with a photo of the person. That would be a pretty good quick check that would be difficult to fake effectively. “Swiping” can these days be magnetic, RFID, or via QR code.

My suggestion for electronic badges is that the token badge links directly to a claim page. The claim page ideally holds the relevant information in a form that is both machine processable and human readable. But, as a portfolio is typically under the control of the individual, mere portfolio pages cannot easily provide any official confirmation. The way to do this within a user-controlled portfolio would be with some kind of electronic signature. But probably much more effective in the long term is for the portfolio claim page to refer to other information held by the awarding authority. That page can be either public or restricted, and could hold varying amounts of information about the person as well as the badge claim.

Here are some first ideas of information that could relate to a badge (or indeed any certificate):

  • what is claimed (competence, membership, permission, values, etc.);
  • identity of the person claiming;
  • what authority is responsible for validating the claim and awarding;
  • when and on what grounds the award was made;
  • how and when any assessment process was done;
  • assurance that the qualifying performance was not by someone else.

But that’s only a quick attempt. A much slower attempt would be helpful.
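As a quick, purely illustrative sketch, the machine-processable side of a claim page might carry the information listed above in a structure something like this; all field names and URLs are hypothetical.

```python
# A rough sketch of the information a badge claim page might hold,
# following the list above. All names and URLs are hypothetical.
badge_claim = {
    "claim": {
        "type": "competence",                      # or membership, permission, values...
        "description": "Choral sight-singing, level 2",
    },
    "claimant": "https://example.org/people/jo",   # identity of the person claiming
    "awarding_authority": "https://example.org/singing-coop",
    "awarded_on": "2012-05-14",
    "grounds": "Peer assessment by fellow choir members",
    "assessment": {"method": "peer vote", "date": "2012-05-01"},
    "identity_assurance": "Assessed in person at rehearsal",
}
```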

It’s important to be able to separate out these components. The “what is claimed” part is very closely related to learning outcome and competence definitions, the subject of the InLOC work. All the assessment and validation information is separable, and the information models (along with any interoperability specifications) should be created separately.

Competence and values can be defined independently of any organisation — they attach just to an individual. This is different from membership, permission, and the like, that are essentially tied to systems and organisations, and not as such transferable.

Portfolios need verifiability

Having verified information included in learner-owned portfolios looks attractive to employers and others, but perhaps it would be better to think in terms of verifiable information, and processes that can arrange verification on demand.

Along with Scott Wilson and others, I was recently at a meeting with a JISC-funded project about doing electronic certificates, somewhat differently from the way that Digitary do them. Now, the best approach to certifying portfolio information is far from obvious. But Higher Education is interested in providing information to various people about the activities and results of those who have attended their institutions, and employers and others are keen to know what can be officially certified. When people start by imagining an electronic transcript in terms of their understanding of a paper transcript, the question of how to make it “secure” inevitably comes up, echoing questions of how to prevent forgery of paper certificates.

Lately, I have been giving people my opinion that portfolio information and institutionally (“primary source”) verified information are different, and don’t need to interact too closely. Portfolio holders may write what they like, just as they do in CVs, and if certificates or verification are needed, perhaps the unverified portfolio information can provide a link to a verified electronic certificate of achievement information (like the HEAR, the UK Higher Education Achievement Report, under development). This meeting moved my understanding forward from this fairly simple view, but there are still substantial gaps, so I’ll try to set out what I do understand, and ask readers for kind suggestions about what I don’t.

As Scott could tell you much more ably than me, there are plenty of problems with providing digitally signed certificates for graduates to keep in their own storage. I won’t go into those, except to say that the problem is a little like banknotes: you can introduce a new clever technology that is harder to forge, but sooner or later the crooks will catch up with you, and you have to move on to ever more complex and sophisticated techniques. So, in what perhaps I may call “our” view, it seems normally preferable to keep original verified information at source, or at some trusted service provider delegated by the primary source. There are then several ways in which this information can be shown to the people who wish to rely on its verified authority. In principle, these are set out in Scott’s page on possible architectures for the HEAR. But at the meeting I realised a detail I hadn’t figured out before.

We have already proposed in outline a way in which each component part of an achievement document could have its own URI, so that links could be made to particular parts, and differential permissions given to each part. (See e.g. the CEN EuroLMAI PDF.) If each part of an achievement document is separately referenceable, the person to whom the document refers (let’s call this person the holder again) could allow different people to view different parts, for different times, etc., providing that achievement information servers can store that permission information alongside the structured achievement information itself.
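A minimal sketch of what that might mean in practice, assuming each part of the achievement document has its own URI and the server stores permissions alongside it; the structure and names are illustrative only, not the proposed specification.

```python
from datetime import date

# Illustrative only: each part of an achievement document has its own URI,
# and the holder grants different viewers access to different parts.
achievement_parts = {
    "https://issuer.example.org/hear/1234/award": "BSc (Hons) Computing, 2:1",
    "https://issuer.example.org/hear/1234/modules": "Full module transcript ...",
}

permissions = [
    {   # the holder lets one employer see only the award, for a limited time
        "part": "https://issuer.example.org/hear/1234/award",
        "viewer": "recruiter@employer.example.com",
        "expires": date(2013, 1, 31),
    },
]

def may_view(viewer: str, part_uri: str, on: date) -> bool:
    """Check whether a viewer currently has permission to see a given part."""
    return any(p["viewer"] == viewer and p["part"] == part_uri and on <= p["expires"]
               for p in permissions)
```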

Another interesting technical approach, possible at least in PebblePad (Shane Sutherland was helpfully contributing to the meeting), is transparently to include information from other servers, to view and manage in your portfolio tool. The portfolio holder would directly see what he or she was making available for others to view. The portfolio system itself might have general permission to access any information on the achievement information server, with the onward permissions managed by the portfolio system. Two potential issues might arise.

  1. What does giving general permission to an e-portfolio system mean for security? Would this be too much like leaving an open door into the achievement information server?
  2. As the information is presented by the portfolio server, how would the viewer know that the information really comes from the issuer’s server, and is thus validated? A simple mark may not be convincing.

A potential solution to the second point might start with the generation of a permission token on the issuer’s server whenever a new view is put together on the portfolio system. Then the viewer could request a certificate that combined just the information that was presented in that view. But, surely, there must be other more general solutions?

The approach outlined above might be satisfactory just for one achievement information server, but if the verified information covering a portfolio is distributed across several such servers, the process might be rather cumbersome, confusing even, as several part certificates would have to be shown. Better to deal with such certificates only as part of a one-off verification process, perhaps as part of induction to a new opportunity. Instead, if the holder were able to point from a piece of information to the one or more parts of the primary records that backed it up, and then to set permissions within the portfolio system for the viewer to be able to follow that link, the viewer could be given the permission to see the verified information behind any particular piece of information.

Stepping back a little, it might look like this. Each piece of information in a portfolio presentation or system is part of a web of evidence. Some of that evidence is provided by other items in the portfolio, but some refers to primary trustable sources. The method of verification can be provided, at the discretion of the portfolio holder, for permitted viewers to follow, for each piece of information.

One last sidestep: the nice thing about electronic information is that it is very easy to duplicate exactly. If there is a piece of information on a trusted server, belonging to a portfolio holder, it is in principle easy for the holder to reproduce that piece of information in the holder’s own personal portfolio system. Given this one-to-one correspondence, for that piece of information there is exactly one primary source of verification, which is the achievement information server’s version of just that piece of information. The information in the portfolio can be marked as “verifiable”, and associated with its means of verification. A big advantage of this is that one can query a trusted server in the least revealing way possible: simply to say, does this person have this information associated with them? The answer would be, “yes”, or “no”, or “not telling” (if the viewer is not permitted to see that information).
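Here is a minimal sketch of that least-revealing verification query, under the assumption that the trusted server only ever confirms or denies an exact piece of information, or declines to answer; the data and viewer list are invented.

```python
from typing import Literal

Answer = Literal["yes", "no", "not telling"]

# Hypothetical contents of a trusted achievement information server.
VERIFIED = {("jo", "BSc (Hons) Computing, 2:1")}
PERMITTED_VIEWERS = {"recruiter@employer.example.com"}

def verify(viewer: str, person: str, claim: str) -> Answer:
    """Least-revealing check: confirm or deny an exact claim, or decline."""
    if viewer not in PERMITTED_VIEWERS:
        return "not telling"
    return "yes" if (person, claim) in VERIFIED else "no"

# A permitted viewer checks the exact text that appears in the portfolio.
print(verify("recruiter@employer.example.com", "jo", "BSc (Hons) Computing, 2:1"))  # yes
```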

Stepping back again, we no longer need any emphasis on representing “verified” information within a portfolio itself; instead the emphasis is on representing “verifiable” information. The task of looking after this information then becomes one of making sure that the verification queries succeed just when they should. What does this entail? How do we use and transform personal information while retaining its verifiability? What is required to maintain that verifiability? These are the main things I am unclear about in this vision, and I would be grateful for suggestions.

Development of a conceptual model 5

This conceptual model now includes basic ideas about what goes on in the individual, plus some of the most important concepts for PDP and e-portfolio use, as well as the generalised, formalisable concepts and processes surrounding individual action. It has come a long way since the last time I wrote about it.

The minimised version is here, first… (it is recommended to view the images below separately, perhaps with a right-click)

[Image: eurolmcm25-min3]

and that is complex enough, with so many relationship links looking like a bizarre and distorted spider’s web. Now for the full version, which is quite scarily complex…

[Image: eurolmcm25]

Perhaps that is the inevitable way things happen. One thinks some more. One talks to some more people. The model grows, develops, expands. The parts connected to “placement processes” were stimulated by Luk Vervenne’s contribution to the workshop in Berlin of my previous blog entry. But — and I find it hard to escape from this — much of the development is based on internal logic, and just looking at it from different points of view.

It still makes sense to me, of course, because I’ve been with it through its growth and development. But is there any point in putting such a complex structure up on my blog? I do not know. It’s reached the stage where perhaps it needs turning into a paper-length exposition, particularly including all the explanatory notes that you can see if you use CmapTools, and breaking it down into more digestible, manageable parts. I’ve put the CXL file and a PDF version up on my own concept maps page. I can only hope that some people will find this interesting enough to look carefully at some of the detail, and comment… (please!) If you’re really interested, get in touch to talk things over with me. But the thinking will in any case surface in other places. And I’ll link from here later if I do a version with comments that is easier to get at.

Development of a conceptual model 4

This version of the conceptual model (of learning opportunity provision + assessment + award of credit or qualification) uses the CmapTools facility for grouping nodes; and it further extends the use of my own “top ontology” (introduced in my book).

There are now two diagrams: a contracted and an expanded version. When you use CmapTools, you can click on the << or >> symbols, and the attached box will expand to reveal the detail, or contract to hide it. This grouping was suggested by several people in discussion, particularly Christian Stracke. Let’s look at the two diagrams first, then go on to draw out the other points.

[Image: eurolmcm13-contracted1]

You can’t fail to notice that this is remarkably simpler than the previous version. What is important is to note the terms chosen for the groupings. It is vital to the communicative effectiveness of the pair of diagrams that the term for the grouping represents the things contained by the grouping, and in the top case — “learning opportunity provision” — it was Cleo Sgouropoulou who helped find that term. Most of the links seem to work OK with these groupings, though some are inevitably less than fully clear. So, on to the full, expanded diagram…

[Image: eurolmcm13-expanded1]

I was favourably impressed with the way in which CmapTools allows grouping to be done, and how the tools work.

Mostly the same things are there as in the previous version. The only change is that, instead of having one blob for qualification and one for credit value, both have been split into two. This followed on from being uncomfortable with the previous position of “qualification”, where it appeared that the same thing was both wanted or led to, and awarded. It is, I suggest, much clearer to distinguish the repeatable pattern — that is, the form of the qualification, represented by its title and generic properties — from the particular qualification awarded to a particular learner on a particular date. I originally came to this clear distinction, between patterns and expressions, in my book, when trying to build a firmer basis for the typology of information represented in e-portfolio systems. But in any case, I am now working on a separate web page to try to explain it more clearly. When done, I’ll post that here on my blog.

A pattern, like a concept, can apply to many different things, at least in principle. Most of the documentation surrounding courses, assessment, and the definitions of qualifications and credit is essentially a matter of repeatable patterns. But in contrast, an assessment result, like a qualification or credit awarded, is in effect an expression, relating one of those patterns to a particular individual learner at a particular time. They are quite different kinds of thing, and much confusion may be caused by failing to distinguish which one is talking about, particularly when discussing things like qualifications.

These distinctions between types of thing at the most generic level are what I am trying to represent with the colour and shape scheme in these diagrams. You could call it my “top ontology” if you like, and I hope it is useful.

CmapTools is available free. It has been a great tool for me, as I don’t often get round to diagrams, but CmapTools makes it easy to draw the kinds of models I want to draw. If you have it, you might like to try finding and downloading the actual maps, which you can then play with. Of course, there is only one, not two; but I have put it in both forms on the ICOPER Cmap server, and also directly in CXL form on my own site. If you do, you will see all the explanatory comments I have made on the nodes. Please feel free to send me back any elaborations you create.

Development of a conceptual model 3

I spent 3 days in Lyon this week, in meetings with European project colleagues and learning technology standardization people. This model had a good airing, and there was lots of discussion and feedback. So it has developed quite a lot over the three days from the previous version.
[Image: eurolmcm12]

So, let’s start at the top left. The French contingent wanted to add some kind of definition of structure to the MLO (Metadata for Learning Opportunities) draft CWA (CEN Workshop Agreement) and it seemed like a good idea to put this in somewhere. I’ve added it as “combination rule set”. As yet we haven’t agreed its inclusion, let alone its structure, but if it is represented as a literal text field just detailing what combinations of learning opportunities are allowed by a particular provider, that seems harmless enough. A formal structure can await future discussion.

Still referring to MLO, the previous “assessment strategy” really only related to MLO and nothing else. As it was unclear from the diagram what it was, I’ve taken it out. There is usually some designed relationship between a course and a related assessment, and though perhaps ideally the relationship should be through intended learning outcomes (as shown), it may not be so — in fact it might involve those combination rules — so I’ve put in a dotted relationship, “linked to”. The dotted relationships are meant to indicate some caution: in this case the nature of the relationship is unclear, while the “results in” relationship really works through a chain of other ones. I’ve also made dotted the relationship between a learning opportunity specification and a qualification. Yes, perhaps the learning opportunity is intended to lead to the award of a qualification, but that is principally the intention of the learning opportunity provider, and may vary with other points of view.

Talking about the learning opportunity provider: discussion at the meetings, particularly with Mark Stubbs, suggested that the important relationships between a provider and a learning opportunity specification are those of validation and advertising. And the simple terms “runs” and “run by” seem to express reasonably well how a provider relates to an instance. I am suggesting that these terms might replace the confusingly ambiguous “offer” terminology in MLO.

Over on the right of the diagram, I’ve tidied up the arrows a bit. The Educational Credit Information Model CWA (now approved) has value, level and scheme on a par, so I thought it would be best to reflect that in the diagram with just one blob. Credit transfer and accumulation schemes may or may not be tied to wider qualifications frameworks with levels. I’ve left that open, but represented levels in frameworks separately from credit.

I’ve also added a few more common-sense relationships with the learner, who is and should be central to this whole diagram. Learners aspire to vague things like intended learning outcomes as well as specific results and qualifications. They get qualifications. And how do learners relate to learning opportunity specifications? One would hope that they would be useful for searching, for investigation, as part of the process of a learner deciding to enrol on a course.

I’ve added a key in the top right. It’s not quite adequate, I think, but I’m increasingly convinced that this kind of distinction is very helpful and important for discussing and agreeing conceptual models. I’m hoping to revisit the distinctions I made in my book, and to refine the key so that it is even clearer what kind of concept each one is.

Development of a conceptual model 2

As promised, the model is gently evolving from the initial one posted.

[Image: eurolmcm111]

Starting from the left, I’ve added a “creates” relationship between the assessing body and the assessment specification, to mirror the one for learning. Then, I’ve reversed the arrows and amended the relationship captions accordingly, for some of the middle part of the diagram. This is to make it easier to read off scenarios from the diagram. Of course, each arrow could be drawn in either direction in principle, just by substituting an inverse relationship, but often one direction makes more sense than the other. I’ve also amended some other captions for clarity.

An obvious scenario to read off would be this: “The learner enrols on a course, which involves doing some activities (like listening, writing, practical work, tests, etc.). These activities result in records (e.g. submitted coursework), which are assessed in a process specified by the assessing body, designed to evaluate the intended learning outcomes that are the objectives of the course. As a result of this summative assessment, the awarding body awards the learner a qualification.” I hope that one sounds plausible.

The right hand side of the diagram hadn’t had much attention recently. To simplify things a little, I decided that level and framework are so tightly joined that there is no need to separate them in this model. Then, mirroring the idea that a learner can aspire to an assessment outcome, it’s natural also to say that a learner may want a qualification. And what happens to credits after they have been awarded? They are normally counted towards a qualification — but this has to be processed; it is not automatic, so I’ve included that in the awarding process.

I’m still reasonably happy with the colour and shape scheme, in which yellow ovals are processes or activities (you can ask, “when did this happen?”), green things are parts of the real world, things that have concrete existence; and blue things are information.

Development of a conceptual model

Reflecting on the challenging field of conceptual models, I thought I would expose my evolving conceptual model, which extends across the areas of learner mobility, learning, evaluation/assessment, credit, qualifications and awards, and intended learning outcomes (which could easily be detailed to cover knowledge, skill and competence).

[Image: eurolmcm10]

This is more or less the whole thing as it is at present. It will evolve, and I would like that to illustrate how a model can evolve as a result of taking into account other ideas. It also wants a great deal of explanation. I invite questions as comments (or directly) so that I can judge what explanation is helpful. I also warmly welcome views that might be contrasting, to help my conceptual model to grow and develop.

It originates in work with the European Learner Mobility team specifying a model for European Learner Mobility documents — that currently include the Diploma Supplement (DS) and Certificate Supplement. This in turn is based on the European draft standard Metadata for Learning Opportunities (MLO), which is quite similar to the UK’s (and CETIS’s) XCRI. (Note: some terminology has been modified from MLO.) Alongside the DS, the model is intended to cover the UK’s HEAR — Higher Education Achievement Report. And the main advance from previous models of these things, including transcripts of course results, is that it aims to cover intended learning outcomes in a coherent way.

This work is already evolving with valued input from colleagues in the several groups I talk to, but I wanted to publish it here so that anyone can contribute, and anyone in any of these groups can refer to it and pass it round — even if as a “straw man”.

It would have been better to start from the beginning, so that I could explain the origin of each part. However, that is not feasible, so I will have to be content with starting from where I am, and hoping that the reasoning supporting each feature will become clear in time, as interest arises. Of course, at any time the reasoning may not adequately support the feature, and on realising that I will want to change the model.

Please comment if there are discrepancies between this model and your model of the same things, and we can explore the language expressing the divergence of opinion, and the possibility for unification.

Obviously this relates to the SC36 model I discussed yesterday.

See also the next version.