Development of a conceptual model 2

As promised, the model is gently evolving from the one initially posted.

[Diagram: the evolved conceptual model (eurolmcm111)]

Starting from the left, I’ve added a “creates” relationship between the assessing body and the assessment specification, to mirror the one for learning. Then, I’ve reversed the arrows and amended the relationship captions accordingly, for some of the middle part of the diagram. This is to make it easier to read off scenarios from the diagram. Of course, each arrow could be drawn in either direction in principle, just by substituting an inverse relationship, but often one direction makes more sense than the other. I’ve also amended some other captions for clarity.

An obvious scenario to read off would be this: “The learner enrols on a course, which involves doing some activities (like listening, writing, practical work, tests, etc.). These activities result in records (e.g. submitted coursework), which are assessed in a process specified by the assessing body, designed to evaluate the intended learning outcomes that are the objectives of the course. As a result of this summative assessment, the awarding body awards the learner a qualification.” I hope that one sounds plausible.

The right-hand side of the diagram hadn’t had much attention recently. To simplify things a little, I decided that level and framework are so tightly joined that there is no need to separate them in this model. Then, mirroring the idea that a learner can aspire to an assessment outcome, it’s natural also to say that a learner may want a qualification. And what happens to credits after they have been awarded? They are normally counted towards a qualification, but this has to be processed; it is not automatic, so I’ve included it in the awarding process.

I’m still reasonably happy with the colour and shape scheme, in which yellow ovals are processes or activities (you can ask, “when did this happen?”); green things are parts of the real world, things that have concrete existence; and blue things are information.
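For anyone who would like to play with the model computationally, here is a minimal sketch in Python of how the scenario above can be read off as subject-relationship-object triples, with each node typed according to the colour scheme. The names (and the kind assigned to each node) are my own illustrative choices, not a normative vocabulary.

```python
# A toy rendering of the diagram, with nodes typed by the colour scheme
# (yellow oval = process, green = real world, blue = information) and the
# relationships given as (subject, relationship, object) triples.
# All names here are my own illustrative choices, not a normative vocabulary.

node_kind = {
    "learner": "real world",
    "course": "process",
    "activity": "process",
    "record": "information",
    "assessment": "process",
    "assessment specification": "information",
    "assessing body": "real world",
    "intended learning outcome": "information",
    "awarding body": "real world",
    "qualification": "information",
}

triples = [
    ("learner", "enrols on", "course"),
    ("course", "involves doing", "activity"),
    ("activity", "results in", "record"),
    ("record", "is assessed in", "assessment"),
    ("assessing body", "creates", "assessment specification"),
    ("assessment specification", "specifies", "assessment"),
    ("assessment", "evaluates", "intended learning outcome"),
    ("intended learning outcome", "is objective of", "course"),
    ("awarding body", "awards", "qualification"),
    ("qualification", "is awarded to", "learner"),
]

# Reading off a scenario is just following chains of triples:
for subject, relation, obj in triples:
    print(f"{subject} ({node_kind[subject]}) -- {relation} --> {obj} ({node_kind[obj]})")
```

Reading scenarios off the diagram then amounts to following chains of triples from node to node, in whichever direction the inverse relationships allow.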

Development of a conceptual model

Reflecting on the challenging field of conceptual models, I thought I would expose my evolving conceptual model, which extends across the areas of learner mobility, learning, evaluation/assessment, credit, qualifications and awards, and intended learning outcomes — and which could easily be detailed to cover knowledge, skill and competence.

[Diagram: the whole conceptual model as it currently stands (eurolmcm10)]

This is more or less the whole thing as it is at present. It will evolve, and I would like that to illustrate how a model can evolve as a result of taking other ideas into account. It also needs a great deal of explanation. I invite questions as comments (or directly) so that I can judge what explanation is helpful. I also warmly welcome contrasting views, to help my conceptual model grow and develop.

It originates in work with the European Learner Mobility team specifying a model for European Learner Mobility documents, which currently include the Diploma Supplement (DS) and the Certificate Supplement. This in turn is based on the European draft standard Metadata for Learning Opportunities (MLO), which is quite similar to the UK’s (and CETIS’s) XCRI. (Note: some terminology has been modified from MLO.) Alongside the DS, the model is intended to cover the UK’s HEAR — the Higher Education Achievement Report. And the main advance over previous models of these things, including transcripts of course results, is that it aims to cover intended learning outcomes in a coherent way.

This work is already evolving with valued input from colleagues in the several groups I talk to, but I wanted to publish it here so that anyone can contribute, and anyone in any of those groups can refer to it and pass it round — even if as a “straw man”.

It would have been better to start from the beginning, so that I could explain the origin of each part. However, that is not feasible, so I will have to be content with starting from where I am, and hoping that the reasoning supporting each feature will become clear in time, as interest is shown. Of course, at any time the reasoning may not adequately support the feature, and on realising that, I will want to change the model.

Please comment if there are discrepancies between this model and your model of the same things, and we can explore the language expressing the divergence of opinion, and the possibility for unification.

Obviously this relates to the SC36 model I discussed yesterday.

See also the next version.

Skills frameworks, interoperability, portfolios, etc.

Last Thursday (2009-04-16) I went to a very interesting meeting in Leeds, specially arranged, at the Leeds Institute of Medical Education, between various interested parties, about their needs and ideas for interoperability with e-portfolio tools – but also about skills frameworks.

It was interesting particularly because it showed more evidence of a groundswell of willingness to work towards e-portfolio interoperability, and this has two aspects for the people gathered (6 including me). On the one hand, the ALPS CETL is working with MyKnowledgeMap (MKM) – a small commercial learning technology vendor based in York – on a project involving health and social care students in their 5 HEIs around Leeds. They are using the MKM portfolio tool, Multi-Port, but are aware of a need to have records which are portable between their system and others. It looks like being a fairly straightforward case of a vendor with a portfolio tool being drawn into the LEAP2A fold on the back of the success we have had so far – without the need for extra funding. The outcome should be a classic interoperability win-win: learners will be able to export their records to PebblePad, Mahara, etc., and the MKM tool users will be able to import their records from the LEAP2A-implementing systems to kick-start their portfolio records there with the ALPS CETL or other MKM sites.

MKM tools, as the MKM name suggests, do cover the representation of skills frameworks, and this forms a bridge between the two threads of this meeting: first, the ALPS CETL work, and second, the more challenging area of medical education, where frameworks – of knowledge, skill or competence – abound, and are pretty important for medical students and in the professional development of medical practitioners and health professionals more generally.

In this more challenging side of the meeting, we discussed some of the issues surrounding skills frameworks in medical education – including the transfer of students at undergraduate level; the transfer between a medical school like Leeds and a teaching hospital, where the doctors may well soon be using the NHS Foundation Year e-portfolio tools in conjunction with their further training and development; and then on to professional life.

The development of LEAP2A has probably been helped greatly by not trying to do too much all at once. We haven’t yet fully dealt with how to integrate skills frameworks into e-portfolio information. At one very simple level we have covered it – if each skill definition has a URI, that URI can be referred to by an “ability” item in the LEAP2A. But at another level it is a great challenge. Here in medical education we have not one, but several real-life scenarios calling for interoperable skills frameworks for use with portfolio tools. So how are we actually going to advise the people who want to create skills frameworks on how to do this in a useful way? Their users, using their portfolio tools, want to carry forward the learning (against learning outcomes) and evidence (of competence) to another setting. They want the information to be ready to use, to save them repetition – which is potentially wasteful to the institution as well as the learner.
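To make that “very simple level” concrete, here is a rough sketch in Python rather than the actual LEAP2A Atom syntax. The field names, URIs and skill title are all purely illustrative; the essential point is only that the skill definition lives at a stable URI, so any two systems agree on what is being claimed exactly when they cite the same URI.

```python
# A toy "ability" item referring to an externally defined skill by its URI.
# Field names are illustrative, not the actual LEAP2A element names; the
# essential idea is only that the skill definition lives at a stable URI
# which both the exporting and the importing system can recognise.

ability_item = {
    "type": "ability",
    "title": "Takes an accurate patient history",
    # Hypothetical URI of one skill definition in a published framework:
    "definition_uri": "http://example.org/frameworks/medical-ug/skills/17",
    # Links to portfolio entries offered as evidence (also hypothetical):
    "evidence": ["http://example.org/portfolio/entries/204"],
}

def same_skill(item_a: dict, item_b: dict) -> bool:
    """Two ability items from different tools refer to the same skill
    exactly when they cite the same definition URI."""
    return item_a["definition_uri"] == item_b["definition_uri"]
```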

The answer necessarily goes beyond portfolio technology, and needs to tackle the issues which several people are currently working on: European projects like TENCompetence and ICOPER, where I have given presentations or written papers; older JISC project work I have been involved with (ioNW2, SPWS); and now the recently set up CETIS team on competences.

Happily, it seems we are all pushing at an open door. I am happy to be able to respond in my role as Learning Technology Advisor for e-portfolio technology, and point MKM towards the documentation on – and those with experience of implementing – LEAP2A. And the new competence team has been looking for a good prompt to hold an initial meeting. I imagine we might hold a meeting, perhaps around the beginning of July, focused on frameworks of skills, competence and knowledge, and their use together with curriculum learning outcomes, with assessment criteria, and with portfolio evidence. The Leeds people would be very willing to contribute. Then, perhaps, JISC might offer a little extra funding (on the same lines as previous PIOP and XCRI projects) to get together a group of medical educators to implement LEAP2A and related skills frameworks together – in whatever way we all agree is good for taking the skills framework developments forward.

Representing, defining and using ability, competency and similar concepts

I’ve been telling people, for quite a while, that I will write up suggestions for how to deal with abilities (competence, competencies, knowledge, etc. etc.) for many reasons, including particularly e-portfolio related uses. Well, here are my current ideas for the New Year.

They are expressed in a set of linked pages, each dealing with a facet of the issues. The pages are very much draft ideas, but serve the purpose of getting out the basic ideas and inviting other ideas for improvement. If you have an interest, please do at least leave a comment here, or e-mail me with ideas and suggestions.

ePortfolio 2008

Since going to the annual European Eifel “ePortfolio” conferences is a firm habit of mine (in fact, I have been to every single one so far), it seems like a good time to take stock of the e-portfolio world. All credit to Serge and Maureen and their team: they have kept the event the best “finger on the pulse” in this field. This year was, as last, in Maastricht. It extended to just 3 rather than 4 days, and there were apparently some hundred fewer people overall. Nevertheless, others as well as I felt that there was an even better overall feel this year. At the excellent social dinner boat trip, I was reflecting: where else can one move so quickly from discussing deeply human issues like personal development, with people who care very insightfully about people, to talking technically about the relative merits of the languages and representations used for implementing tools and systems, with people who are highly technically competent? It makes sense for this account to take both of those tracks.

Taking the easy one first, then… We didn’t have a “plugfest” this year, which was in some ways odd: for the last three years (since Cambridge, 2005) we have had some attempt at interoperability trials, even though no one was really ready for them. (People did remarkably well, considering.) But this year, when with the PIOP work on LEAP2A we really have started something that is going to work, there were no trials, just presentations. Actually I think that it is much better for being less “hyped”. By next year we should have something really solid to present and demonstrate. I presented our work at two sessions, and in both it was well received.

Not everyone likes XML schema specifications – Sampo Kellomäki enlightened me about some of the gross failings around XML – but luckily, those who aren’t so keen on XML or Atom seemed to appreciate the other side of LEAP 2.0 – the side of RDF and the Semantic Web connections, and the RDFa ideas I first understood in my work for ioNW2. It was good to have something for everyone with a technical interest.

What was disappointing was to understand more closely just what has been happening in the Netherlands. Someone must have made the decision a couple of years ago to follow “the international standard” of IMS ePortfolio, not taking account of the fact that it had not been properly tested in use. That’s how IMS used to work (though no longer): get a spec out there quickly, get someone to implement it, and then improve towards something workable based on feedback. But though there were “implementations” of IMS eP, there was no real test of interoperability or portability. Various people we know and work with had tried it, even up to last year’s conference, so we knew many of the problems. Anyway, in the Netherlands, they have been struggling to adapt and profile that difficult spec, and despite the large amount of public funding put into the project (too much?), most of the couple of dozen national partners have implemented only a subset even of their own limited profile. And IMS eP is not being used as an internal representation by anyone.

Fortunately, Synergetics, who have been involved in the Dutch work (despite being Belgian), have also joined our forthcoming round of PIOP work, and talk towards the end of the conference was that LEAP2A will be added to the Dutch interoperability framework. I do hope this goes through – we will support it as much as we are able. Synergetics also play a leading role in the impressive TAS3 project, so we can expect that, as time goes on, pathways will emerge to add security architecture to our interoperability approach. But now on to the much more humanly interesting discussions.

I had the good luck to bump into Darren Cambridge (as usual, a keynote speaker) on the evening before the conference, and we talked over some of the ideas I’ve been developing, which at the moment I label as “personal integrity reflection across contexts”. Now that needs writing about separately, but in essence it involves a way of thinking about how to promote real growth, development and change in people’s lives. We also talked about this with Samantha Slade of Percolab – Darren analysed Samantha’s e-portfolio for his forthcoming book (which will be more erudite and better written than mine!).

These discussions were the peak, but elsewhere throughout the conference I got the feeling that the time is now perhaps right to move forward more publicly with discussing values in relation to e-portfolios. Parts of my vision were expressed in Anna’s and my paper two years ago at the Oxford conference – “Ethical portfolios: supporting identities and values.” In essence, it goes like this: portfolio practice can help to develop people’s values, and their understanding of their own values; with that understanding, they can choose occupations which lead to satisfaction and fulfilment; and representing those values in machine-readable form may lead to much more potent matching within the labour market – another tool towards “flexicurity” (a term introduced to me 10 minutes ago by Theo Mensen). The new expression of insight is that the development of personal values, and understanding of them, is supported by some kinds of reflection, and not others. The term I am trying out to point towards the most useful and powerful kind of reflection is “personal integrity reflection across contexts”. I hope the ideas can be taken forward and presented in more depth next year.

At the conference there was also a focus on “Learning Regions” (the subject of Theo’s call), which I wasn’t able to attend much of. My view of regional initiatives has been somewhat jaded by peripheral involvement years ago with regional development agencies that seemed to have just one agenda item: inward investment. But the vision at the conference was much broader and more humane. My input is quite limited. Firstly, to get anything distinctive for a region going, there needs to be a common language for the distinctive concerns (and groups of concerns) of that region. If this is done machine-readably (e.g. in RDF) then there is hope for cross-linkage, not just in the labour market but beyond. Again, as in my ioNW2 work, this could well be based on clear and unambiguous URIs being set up for each concept, and possibly it could be extended to having some kind of ontology in the background. Then there is the question of two-way matching, already trialled in a small way by the Dutch public employment service (CWI).
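As a sketch of what I have in mind (with made-up URIs, and a plain Python data structure standing in for proper RDF): give each regional concern its own URI, and cross-linkage between regions, or out into the labour market, becomes a matter of asserting relationships between those URIs.

```python
# Illustrative only: each concept a region cares about gets its own stable
# URI, and cross-links between regions are simple assertions over those
# URIs. A real version would use RDF (e.g. SKOS-style "related" or
# "closeMatch" properties) rather than this plain stand-in.

region_a = {
    "http://example.org/region-a/concept/rural-healthcare": "Rural healthcare",
    "http://example.org/region-a/concept/textile-heritage": "Textile industry heritage",
}

region_b = {
    "http://example.org/region-b/concept/remote-care": "Remote and community care",
}

# Cross-linkage as (concept URI, relationship, concept URI) assertions:
links = [
    ("http://example.org/region-a/concept/rural-healthcare",
     "closeMatch",
     "http://example.org/region-b/concept/remote-care"),
]

def linked(uri):
    """All concepts asserted to correspond to the given concept URI."""
    return [obj for subj, rel, obj in links if subj == uri]

print(linked("http://example.org/region-a/concept/rural-healthcare"))
```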

This leads to an opportunity for me to round up. There is so much that e-portfolio practice and tools could contribute to; and the sense of this conference was that, indeed, things are set to move forward. But it still depends on matters which are not fully and generally understood. There is the issue of representing skills/competences/abilities, which will not go away until dealt with satisfactorily (beyond TENCompetence), and alongside that, the issue of assessing those in a way which makes sense to employers (and whose results can be machine processed). That “hard” assessment needs to be reconciled with the more humane e-portfolio based assessment, which I think everyone agrees is already very good for getting a feel for those last few short-listable candidates. Portfolio tools still have a way to go until they are relevant for search and automatic matching.

But my opinion is that progress here, and elsewhere, can definitely be made.

GMSA Advance

As I’ve been involved with GMSA in various ways including through the ioNW2 project, I went to their seminar on 14th May introducing GMSA Advance.  This is to do with providing bite-sized modules of Higher Education, mainly for people at work, and giving awards (including degrees) on that basis – picking up some of the “Leitch” agenda. As I suspected, it was of interest from a portfolio perspective among others.

I’ll start with the portfolio- and PDP-related issues.

The first issue is award coherence. If you put together an award from self-chosen small chunks of learning (“eclectic”, one could call it), there is always a question of whether that award represents anything coherent. Awarding bodies, including HEIs, may not think it right to give an award for what looks like a random collection of learning. Having awarding bodies themselves define what counts as coherent risks being too restrictive: an awarding body might insist on things which were not relevant to the learner’s workplace, or that had been covered outside the award framework. On the other hand, employers might not understand academic coherence at all. A possible solution that strikes me and others is to:

  • have the learner explain the coherence of modules chosen
  • assess that explanation as part of the award requirement.

This explanation of coherence needs to make sense to a variety of people as well as the learner, in particular, to academics and to employers. It invites a portfolio-style approach: the learner is supported through a process of constructing the explanation, and it is presented as a portfolio with links to further information and evidence. One could imagine, for example, a video interview with the learner’s employer as useful extra evidence.

A second issue is the currency and validity of “credit”. Now, I have a history of scepticism about credit frameworks and credit transfer, though the above idea of an assessed explanation of award coherence at last brings a ray of light into the gloom. My issue has always been that, to be meaningful, awards should be competence-based, not credit-based. And I still maintain that the abilities claimed by someone at the end of a course, suitably validated by the awarding body, should be a key part of the official records of assessment (indeed, part of the “Higher Education Achievement Report” of the Burgess Group – report downloadable as PDF).

One of the key questions for these “eclectic” awards is whether credit should have a limited lifetime. Whether credit should expire surely depends on what the credit is trying to represent. It is precisely the skills, abilities or competences whose validation needs to expire – this is increasingly being seen in the requirement for professional revalidation. And the expiry of validation itself needs to be based on evidence: bicycle riding and swimming tend to be skills that are learned once for ever; language skills fall off only slowly; but knowledge of the latest techniques in a leading-edge discipline may be lost very quickly.

This is a clear issue for portfolios that present skills. The people with those portfolios need to be aware of the perceived value of old evidence, and to be prepared to back up old formal evidence with more recent, if less formal, additional evidence of currency. We could potentially take that approach back into the GMSA Advance awards, though there would be many details to figure out, and the issues would overlap with accreditation of prior learning.

Other issues at the seminar were not to do with portfolios. There is the question of how to badge such awards. CPD? Several of those attending thought not – “CPD” is often associated with unvalidated personal learning, or even just attendance at events. As an alternative, I rather like the constructive ambiguity of the phrase “employed learning” – it would be both the learners and the learning that are employed – so that is my suggestion for inclusion in award titles.

Another big issue is funding. Current policy is that no government funding is given to people studying for awards of equal or lower level than one they have already achieved. The trouble is that if each module itself carries an award, then work-based learners couldn’t be funded for a whole series of bite-sized modules, but only for one. The issue is recognised, but not solved. A good idea suggested at the seminar was to change and clarify the meaning of credit, so that it takes on the role of measuring the public fundability of learning. Learners could have a lifetime learning credit allowance, which they could spend as they preferred. Actually, I think even better would be a kind of “sabbatical” system, where one’s study credit allowance continued to build, to allow for retraining. Maybe one year’s (part-time?) study credit would be fundable for each (say) 7 years of life – or maybe 7 years of tax-paying work?
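Just to make the arithmetic of that last, speculative thought concrete, here is a toy sketch in Python; the accrual ratio, and indeed everything else about it, is my own guess, not anything proposed at the seminar.

```python
def fundable_study_years(qualifying_years: float,
                         years_already_funded: float = 0.0,
                         accrual_ratio: float = 7.0) -> float:
    """Toy model of a 'sabbatical'-style study credit allowance: one year
    of fundable (perhaps part-time) study accrues for each accrual_ratio
    years of life or tax-paying work, minus what has already been spent.
    Every parameter here is speculative."""
    accrued = qualifying_years / accrual_ratio
    return max(0.0, accrued - years_already_funded)

# After 21 qualifying years, with 2 years of funded study already used:
print(fundable_study_years(21, years_already_funded=2.0))  # -> 1.0
```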

So, as you can see, it was a thought-provoking and stimulating seminar.

Assessment think tank, HEA, 2008-01-31

Assessment think tank, at The Higher Education Academy, York, 31st January to 1st February 2008

Several of these events appear to have been arranged; this one was with both the Subject Centre for History, Classics and Archaeology (HCA) and that for Philosophy and Religious Studies (PRS).

Around 20 or so delegates were present, mostly from the two subject areas, but also including someone from the JISC’s Plagiarism Advisory Service. Beforehand, I recognised only Sharon Waller from the HEA, and had talked with Costas Athanasopoulos (PRS Subject Centre) at the JISC CETIS conference: he was the one who invited me.

I won’t try to document the whole event, but will just pick out a few things which were highlights for me.

The discussion around plagiarism was inspiring. There was very little on the mechanics and technology of plagiarism detection (Turnitin is popular now) and plenty on good practice to avoid the motive for plagiarising in the first place. This overlaps greatly with other good practice – heartening, I felt. George MacDonald Ross gave us links to some of his useful resources.

Also from George MacDonald Ross, there was an interesting treatment of multiple-choice questions, for use preferably in formative self-assessment, avoiding factual questions, and focusing on different possible interpretations, in his example within philosophy.

As I’m interested in definitions of ability and competence, I brought up the issue of subject benchmarks, but there was little interest in that specifically. However, for archaeology fieldwork, Nick Thorpe (University of Winchester) uses an assessment scheme with several practical criteria, each with descriptors for 5 levels. This perhaps comes closest to practice in vocational education and training, though to me it doesn’t quite reach the clarity and openness of UK National Occupational Standards. Generally, people don’t yet seem ready to define clearly the characteristics of graduates of their courses, or they feel that attempts to do that have been poor. And yet, what can be done to provide an overall positive vision, acceptable to staff and students alike, without a clear, shared view of aims? Just as MCQs don’t have to test factual knowledge, learning outcomes don’t have to be at a prosaic, instrumental level. I’d be interested to see more attempts to define course outcomes at relatively abstract levels, as long as those are assessable, formally or informally, by learner, educator and potential employer alike.

One of the overarching questions of the day was: what assessment-related resources are wanted, and could be provided either through the HEA or JISC? In one of our group discussions, the group I was in raised the question of what a resource is, anyway. And at the end, the question came back. Given the wide range of the discussion throughout the day and a half, there was no clear answer. But one thing came through in any case. Teaching staff have a sense that much good, innovative practice around assessment is constrained by HEI (and sometimes wider) policies and regulations. Materials which could help overcome these constraints would be welcome. Perhaps these could be case studies documenting how innovative good practice was reconciled with prevailing policy and regulations. Good examples here, presented in a place that is easy to find, could disseminate quickly – virally, even. Elements of self-assessment, peer assessment, collaboration, relevance to life beyond the HEI, clarity of outcomes and assessment criteria, etc., if planted visibly in one establishment, could help others argue the case for better practice elsewhere.

TRACE project, Brussels, 2007-11-19

Monday 19th November: I was invited as an expert to the final meeting of the TRACE project, held in Brussels. TRACE stands for Transparent Competences in Europe. The project web site is meant to be at http://trace.education-observatories.net/. I hadn’t realised how many competence projects there are in Europe at the moment, as well as TENCompetence, which some CETIS people are involved with.

The meeting consisted of some presentations of the project work, followed by a general discussion which particularly involved the invited experts.

TRACE has created a prototype system to illustrate the competence transparency concept. In essence, it does employment matching based on inferences using domain knowledge embedded in an ontology, with job offers on the one side and CV-based personal competence profiles on the other. They didn’t try to do the full two-way matching as the Dutch Centre for Work and Income does. On the surface, the TRACE matching looks like a simpler version of what is done by the Belgian company Actonomy.

The meeting seemed to recognise that factors other than competences are also important in employment matching, but this has not been explored in the context of the TRACE project; nor has the idea that a system which can be used for competence-based matching in the employment domain could easily and advantageously be used for several other applications. It would be good to get a wider framework together, and this might go some way towards countering social exclusion worries.

Karsten Lundqvist, working at Reading with the project leader Prof. Keith Baker, was mainly responsible for the detailed ontology work, and he recognises that the relationships chosen for representation in the top-level ontology are central to what the ontology can support, and to what domain ontologies can represent. They have a small number of relationships in their ontology:

  • has part
  • part of
  • more specific
  • more general
  • synonym
  • antonym

While these are reasonable first guesses at useful relationships, some of my previous work (presented at a TENCompetence meeting) proposes slightly different ones. I made the point in this meeting that it would be a good idea to check the relevance, appropriateness and meaningfulness of the chosen relationships with people engaged in the domain itself. I’d say it is important for this kind of system to gain the trust of end users by being transparently understandable itself.
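To illustrate why the choice of relationships is so central: with just “more specific”, a matcher can already infer that an offered competence satisfies a more general requirement. Here is a toy sketch in Python, with invented competence names; it is not the TRACE ontology or its actual inference machinery.

```python
# A toy ontology using just one of the six relationships above, "more
# specific". Competence names are invented; this is not the TRACE ontology
# or its actual inference machinery.

more_specific = {
    # child -> parent: the child is a more specific competence
    "paediatric nursing": "nursing",
    "nursing": "healthcare",
}

def satisfies(offered, required):
    """An offered competence satisfies a requirement when it is the same
    competence, or more specific than it (following the chain upwards)."""
    while offered is not None:
        if offered == required:
            return True
        offered = more_specific.get(offered)
    return False

print(satisfies("paediatric nursing", "healthcare"))  # True
print(satisfies("healthcare", "nursing"))             # False: too general
```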

But further than this, comprehensible relationships, as well as terms, are vital if communities are to take responsibility for ontologies. People in the community must be able to discuss the ontology. And, if the ontology is worked into a structure to support communication, by being the basis of tags, people who work in the field will have plenty of motivation to understand it. Put the motivation to understand together with structures and concepts that are easily understandable, and there is nothing in the way of widespread use of ontologies by communities, for a variety of purposes.

Putting together the main points that occurred to me, most of which I was able to say at the meeting:

  • relationships chosen for a top-level ontology for competence are central, providing the building blocks for domain ontologies where the common knowledge of a community is represented;
  • we need further exploration about which relationships are most suitable and comprehensible for the community;
  • this will enable community development and maintenance of their own ontologies;
  • the UK already has some consensus-building communities, in the Sector Skills Councils;
  • SSCs produce National Occupational Standards, and it is worthwhile studying what is already produced and how, rather than reinventing the complete set of wheels (see my work for ioNW2);
  • to get practical success, we should acknowledge the human tendency for everyone to produce their own knowledge structures, including domain ontologies;
  • but we need to help people interrelate different domain ontologies, in particular by providing relationships suited to cross-linking nodes in different ontologies (see my previous work on this).

All in all, an interesting and stimulating meeting.