Badges for singers

We had a badges session last week at the CETIS conference (here are some summary slides and my slides on some requirements). I’d like to reflect on that, not directly (as Phil has done) but instead by looking forward on how a badge system for leisure activity might be put together.

In the discussion part of our conference session, we looked at two areas of potential application of badges. First, formative assessment in high-stakes fields (such as medicine); second, communities of practice such as the ones CETIS facilitates, still in the general area of work. What we didn’t look at was badges for leisure or recreation. The Mozilla Open Badges working paper makes no distinction between badges for skills that are explicitly about work and badges for skills that are not obviously about work, so looking at leisure applications complements the conference discussion nicely, while providing an example to think through many of the main issues with badges.

The worked example that follows builds on my personal knowledge of one hobby area, but is meant to be illustrative of many. Please think of your own favourite leisure activities.


On returning from the conference, on the very same day as the badges session, it happened to be a rehearsal evening for the small choir I currently sing with. So what could be more natural for me to think about than a badge system for singing? The sense of a need for such a system has continued to grow on me. Many people can sing well enough to participate in a choir, but while learners of classical instruments have “grade” examinations and certificates indicating the stages of mastery of an instrument, there is no commonly used equivalent for choral singing. Singing is quite a diverse activity, with many genres as well as many levels of ability. The process of established groups and singers getting together tends to be slow and subject to chance; worse, forming a new group is quite difficult, unless there is some existing larger body (school, college, etc.) that all the singers belong to.

Badges for singers might help in two different ways. First, badges can mark attainment. A system of attainment badges could help singers find groups and other singers of the right standard for them to enjoy singing with. It may be worthy, but not terribly exciting, to sing with a group at a lower level, and one can feel out of one’s depth or embarrassed singing with people a lot more accomplished. So, when a group looks for a new member, it could specify the expected levels of particular skills, as well as the type of music sung. This wouldn’t necessarily remove the need for an audition, but it would help the right kind of singer to consider the choir. Compared with current approaches, such as singers hearing a choir perform and then asking about vacancies, or learning of openings through friends, a badge system could well speed things up. But perhaps the greatest benefit would be to singers trying to start new groups or choirs, where there is no existing group to hear or go to listen to. Here, a badge system could make all the difference between success and failure in getting a new group together.

Second, the badges could provide a structured set of goals that would help motivate singers to broaden and improve their abilities. This idea of motivating steps in a pathway is a strong theme in the Open Badges documentation. There must be singers at several levels who would enjoy and gain satisfaction from moving on, perhaps up a level. In conjunction with groups setting out their badge requirements, badges in the various aspects of choral singing would at the very least provide a framework within which people could see more clearly what they needed to gain experience of and practise in order to join the kind of group they really want.

By the way, I recognise that not all singing groups are called “choirs”. Barbershop groups tend to be “choruses”, while very small groups are more often called “ensembles”; but for simplicity here I use the term “choir” to refer to any singing group.

Teachers / coaches

Structured goals lead on to the next area. If there were a clear set of badged achievements to aim for, then the agenda for coaches, tutors and the like would be more transparent. This might not produce a great increase in demand for paid tuition (and therefore “economic” activity), but it might well be helpful for amateur coaching. Either way, a clear set of smaller, more limited goals on tried and tested pathways is a time-honoured approach to achieving greater goals, with whatever amount of help from others is needed.

Badge content

I’ve never been in charge of a choir for more than a few songs, but I do have enough experience to have a reasonable guess at what choirmasters and other singers want from people who want to join them. First, there are the core singing skills, and these might be broken down for example like this:

  • vocal range and volume (most easily classified as soprano / alto / tenor / bass)
  • clarity and diction
  • voice quality and expressiveness (easy to hear in others, but hard to measure)
  • ability to sing printed music at first sight (“sight-singing”)
  • attentiveness to and blend with other singers
  • ability to sing a part by oneself
  • speed at learning one’s part if necessary
  • responsiveness to direction during rehearsal and performance
  • specialist skills

It wouldn’t be too difficult to design a set of badges that expressed something like these abilities, but this is not the time to do that job, as such a structure needs to reflect a reasonable consensus involving key stakeholders.
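To make this a little more concrete, here is a rough sketch (in Python, purely for illustration) of how one such badge might be represented as data. All the field names and the level scale are my own placeholders, not part of any agreed specification, though the general shape is loosely inspired by the Open Badges idea of a badge with criteria and an issuer:

```python
# Illustrative sketch only: field names and the 1-5 level scale are
# hypothetical placeholders, not taken from any agreed specification.
sight_singing_badge = {
    "name": "Sight-singing, level 2",
    "skill": "ability to sing printed music at first sight",
    "criteria": "sing an unfamiliar 8-bar melody correctly at first attempt",
    "level": 2,  # hypothetical 1-5 scale
    "issuer": "self-or-peer-assessed",
}

def describe(badge):
    """Render a one-line, human-readable summary of a badge."""
    return f'{badge["name"]}: {badge["criteria"]} (level {badge["level"]})'
```

The point is only that each skill in the list above could be carried by a small, self-describing record, which a choir could then refer to when stating what it expects of new members.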

Then there are other personal attributes, not directly related to singing, that are desirable in choir members, e.g.:

  • reliability of attendance at rehearsals and particularly performances
  • helpfulness to other choir members
  • diligence in preparation

Badges for these could look a little childish, but as a badge system for singing would be entirely voluntary, perhaps there would be no harm in defining them, for the benefit of junior choirs at least.

Does this cover everything? It may cover most of what can be improved, but there is one other area that is vital for the mutual satisfaction of choir and singer. Singers have tastes in music; choirs have repertoires or styles they focus on. To get an effective matching system, style and taste would have to be represented as well.
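As a very crude illustration of such matching, taste and repertoire could each be represented as a set of genre tags, with a simple overlap score used for ranking. The tags and the scoring rule here are entirely hypothetical; a real system would need an agreed tag vocabulary:

```python
def style_match(singer_tastes, choir_repertoire):
    """Score the overlap between a singer's taste tags and a choir's
    repertoire tags as Jaccard similarity (0.0 to 1.0).
    Purely illustrative: the tag vocabulary is hypothetical."""
    a, b = set(singer_tastes), set(choir_repertoire)
    if not (a or b):
        return 0.0  # nothing declared on either side
    return len(a & b) / len(a | b)
```

So a singer tagged “folk, renaissance” would score higher against a folk choir than against a barbershop chorus, and choirs could be ranked for a singer (or vice versa) by this score.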

Assessing and awarding badges

So who would assess and award these badges? The examinations for playing musical instruments (indeed including solo singing) are managed by bodies such as the ABRSM. These exams have a very long history, and are often recognised, e.g. for entry to musical education institutions. But choral singers usually want to enjoy themselves, not gain qualifications on the way to becoming professional musicians. They are unlikely to want to pay for exams for what is just a hobby. That leaves three obvious options: choirmasters, fellow singers, and oneself.

In any case, the ABRSM and similar bodies already have their own certificates and records. A badge system for them would probably be just a new presentation of what currently exists. The really interesting challenge is to consider how badges can work effectively without an official regulating body.

On deeper consideration, there really isn’t much to choose between choirmasters and fellow singers as the people who award choral singing badges; there is nothing to stop any singer becoming a choirmaster, anyway. There is not much incentive for people to misrepresent their choral singing skills: as noted before, it’s not much fun being in a choir of the wrong standard, nor singing music one doesn’t like much. So, effectively, a badge system would have the job of making personal and choir standards clear.

There is an analogy here with language skills, which are closely related in any case. The Europass Language Passport is designed to be self-assessed, with people judging their own ability against a set of criteria that were originally defined by the Council of Europe. The levels — A1 to C2 — all have reasonably clear descriptors, and one sees people describing their language skills using these level labels increasingly often.

This is all very well if people can do this self-assessment accurately. The difficulty is that some of the vital aspects of choral singing are quite hard to assess by oneself. Listening critically to one’s own voice is not particularly easy when singing in a group. It might be easier if recording were more common, but again, most people are unfamiliar with the sound of their own voice, and may be uncomfortable listening to it.

On the other hand, we don’t want people, in an attempt to be “kind”, to award each other badges over-generously. We could hope that dividing up the skills into enough separate badges would mean that there would be some badges for everyone, and no one need be embarrassed by being “below average” in some ways. Everyone in a choir can have a choir membership badge, which says something about their acceptance and performance within the choir as a whole. Then perhaps all other choir members can vote anonymously about the levels which others have reached. Some algorithm could be agreed for when to award badges based on peer votes.
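For instance, one such algorithm (a sketch only; the quorum and agreement threshold are arbitrary placeholder values) might award the highest level that a clear majority of voting members support:

```python
def award_level(votes, quorum=5, agreement=0.6):
    """Decide which level badge to award from anonymous peer votes
    (each vote an integer level): the highest level that at least
    `agreement` of voters place the singer at or above, provided
    at least `quorum` members have voted. Returns None otherwise.
    The quorum and threshold values are arbitrary placeholders."""
    if len(votes) < quorum:
        return None  # too few votes to be meaningful yet
    for level in sorted(set(votes), reverse=True):
        # proportion of voters placing the singer at this level or above
        support = sum(1 for v in votes if v >= level) / len(votes)
        if support >= agreement:
            return level
    return None
```

With votes of [3, 3, 2, 4, 3, 2], four of the six voters place the singer at level 3 or above, so level 3 would be awarded; requiring broad agreement rather than an average also blunts the effect of one or two over-generous “kind” votes.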

The next obvious step would be to give badges to choirs as a whole. Choirs have reputations, and saying that one has sung in a particular choir may mean something. This could be done in several ways, all involving some external input. Individual singers (and listeners) could compare the qualities of different choirs in similar genres. Choral competitions are another obvious source of expert judgement.

Setting up a badge system

The more detailed questions come to a head in the setting up of an actual badge system. The problem would be not only the ICT architecture (for which Mozilla Open Badges is a working prototype) but also the organisational arrangements for creating the systems around badges for singers. Now, perhaps, we can see more clearly that the ICT side is relatively easy. This is something we are very familiar with in CETIS: the technology is hardly ever the limiting factor; it is the human systems that are.

So here are some questions or issues (among possibly many more) that would need to be solved, not necessarily in this order.

  • Who would take on responsibility for this project as a whole? Setting up a badge system is naturally a project that needs to be managed.
  • Who hosts the information?
  • How is the decision made about what each badge will be, and how it is designed?
  • How would singers and choirs be motivated to sign up in the first place?
  • If a rule is set for how badges are to be awarded, how is this enforced, or at least checked?
  • Is Mozilla Open Badges sufficient technical infrastructure, and if not, who decides what is?
  • Could this system be set up on top of other existing systems? (Which, or what kind?)

Please comment with more issues that need to be solved. I’ll add them if they fit!

Business case

And how would such a system run, financially? The beneficiaries would primarily be choirs and singers, and perhaps indirectly people who enjoy listening to live choral music. Finding people or organisations in whose financial interest this would be seems difficult, so it would probably be best for the system to run with minimal resources.

One option might be to offer this as a service to one or more membership organisations that collect fees from members, or alternatively as an added service that has synergy with an existing paid-for service. However, the obvious approach of limiting the service to paying members would work against its viability in terms of numbers. In this case, the service would in effect be advertising promoting the organisation. Following the advertising theme, it might be seen as reasonable for users who do not already pay for membership to receive adverts from sellers of music or organisers of musical events, which could provide an adequate income stream. The nice thing is that the kind of information that it makes sense for individuals to enter, to improve the effectiveness of the system, could well be used to target adverts more effectively.

Would this be enough to make a business case? I hope so, as I would like to use this system!


I hope that this example illustrates some of the many practical and possibly thorny issues that must be dealt with before a real working badge system can be implemented, and that these issues are not primarily technical. What would be really useful would be to have a working technical infrastructure available, so that at least some of the technical issues are dealt with in advance. As I wrote in comments on a previous post, I’m not convinced that Mozilla Open Badges does the job properly, but at least it is a signpost in the right direction.

Development of a conceptual model 3

I spent 3 days in Lyon this week, in meetings with European project colleagues and learning technology standardization people. This model had a good airing, and there was lots of discussion and feedback. So it has developed quite a lot over the three days from the previous version.

So, let’s start at the top left. The French contingent wanted to add some kind of definition of structure to the MLO (Metadata for Learning Opportunities) draft CWA (CEN Workshop Agreement) and it seemed like a good idea to put this in somewhere. I’ve added it as “combination rule set”. As yet we haven’t agreed its inclusion, let alone its structure, but if it is represented as a literal text field just detailing what combinations of learning opportunities are allowed by a particular provider, that seems harmless enough. A formal structure can await future discussion.

Still referring to MLO, the previous “assessment strategy” really only related to MLO and nothing else. As it was unclear from the diagram what it was, I’ve taken it out. There is usually some designed relationship between a course and a related assessment, but though perhaps ideally the relationship should be through intended learning outcomes (as shown), it may not be so — in fact it might involve those combination rules — so I’ve put in a dotted relationship “linked to”. The dotted relationships are meant to indicate some caution: in this case its nature is unclear; while the “results in” relationship is really through a chain of other ones. I’ve also made dotted the relationship between a learning opportunity specification and a qualification. Yes, perhaps the learning opportunity is intended to lead to the award of a qualification, but that is principally the intention of the learning opportunity provider, and may vary with other points of view.

Talking about the learning opportunity provider, discussion at the meetings, particularly with Mark Stubbs, suggested that the important relationships between a provider and a learning opportunity specification are those of validation and advertising. And the simple terms “runs” and “run by” seem to express reasonably well how a provider relates to an instance. I am suggesting that these terms might replace the confusingly ambiguous “offer” terminology in MLO.

Over on the right of the diagram, I’ve tidied up the arrows a bit. The Educational Credit Information Model CWA (now approved) has value, level and scheme on a par, so I thought it would be best to reflect that in the diagram with just one blob. Credit transfer and accumulation schemes may or may not be tied to wider qualifications frameworks with levels. I’ve left that open, but represented levels in frameworks separately from credit.

I’ve also added a few more common-sense relationships with the learner, who is and should be central to this whole diagram. Learners aspire to vague things like intended learning outcomes as well as specific results and qualifications. They get qualifications. And how do learners relate to learning opportunity specifications? One would hope that they would be useful for searching, for investigation, as part of the process of a learner deciding to enrol on a course.

I’ve added a key in the top right. It’s not quite adequate, I think, but I’m increasingly convinced that this kind of distinction is very helpful and important for discussing and agreeing conceptual models. I’m hoping to revisit the distinctions I made in my book, and to refine the key so that it is even clearer what kind of concept each one is.

Development of a conceptual model 2

As promised, the model is gently evolving from the initial one posted.


Starting from the left, I’ve added a “creates” relationship between the assessing body and the assessment specification, to mirror the one for learning. Then, I’ve reversed the arrows and amended the relationship captions accordingly, for some of the middle part of the diagram. This is to make it easier to read off scenarios from the diagram. Of course, each arrow could be drawn in either direction in principle, just by substituting an inverse relationship, but often one direction makes more sense than the other. I’ve also amended some other captions for clarity.

An obvious scenario to read off would be this: “The learner enrols on a course, which involves doing some activities (like listening, writing, practical work, tests, etc.). These activities result in records (e.g. submitted coursework), which are assessed in a process specified by the assessing body, designed to evaluate the intended learning outcomes that are the objectives of the course. As a result of this summative assessment, the awarding body awards the learner a qualification.” I hope that sounds plausible.

The right-hand side of the diagram hadn’t had much attention recently. To simplify things a little, I decided that level and framework are so tightly joined that there is no need to separate them in this model. Then, mirroring the idea that a learner can aspire to an assessment outcome, it is natural also to say that a learner may want a qualification. And what happens to credits after they have been awarded? They are normally counted towards a qualification, but this has to be processed; it is not automatic, so I’ve included that in the awarding process.

I’m still reasonably happy with the colour and shape scheme, in which yellow ovals are processes or activities (you can ask, “when did this happen?”), green things are parts of the real world, things that have concrete existence; and blue things are information.

Development of a conceptual model

Reflecting on the challenging field of conceptual models, I thought I would expose my evolving conceptual model, which extends across the areas of learner mobility, learning, evaluation/assessment, credit, qualifications and awards, and intended learning outcomes — and which could easily be detailed to cover knowledge, skill and competence.


This is more or less the whole thing as it is at present. It will evolve, and I would like that to illustrate how a model can evolve as a result of taking into account other ideas. It also wants a great deal of explanation. I invite questions as comments (or directly) so that I can judge what explanation is helpful. I also warmly welcome views that might be contrasting, to help my conceptual model to grow and develop.

It originates in work with the European Learner Mobility team specifying a model for European Learner Mobility documents — that currently include the Diploma Supplement (DS) and Certificate Supplement. This in turn is based on the European draft standard Metadata for Learning Opportunities (MLO), which is quite similar to the UK’s (and CETIS’s) XCRI. (Note: some terminology has been modified from MLO.) Alongside the DS, the model is intended to cover the UK’s HEAR — Higher Education Achievement Report. And the main advance from previous models of these things, including transcripts of course results, is that it aims to cover intended learning outcomes in a coherent way.

This work is evolving already with valued input from colleagues I talk to in

but I wanted to publish it here so that anyone can contribute, and anyone in any of these groups can refer to it and pass it round — even if as a “straw man”.

It would have been better to start from the beginning, so that I could explain the origin of each part. However that is not feasible, so I will have to be content with starting from where I am, and hoping that the reasoning supporting each feature will become clear in time, as there is an interest. Of course, at any time, the reasoning may not adequately support the feature, and on realising that I will want to change the model.

Please comment if there are discrepancies between this model and your model of the same things, and we can explore the language expressing the divergence of opinion, and the possibility for unification.

Obviously this relates to the SC36 model I discussed yesterday.

See also the next version.