Simon Grant » Assessment | Cetis blog
http://blogs.cetis.org.uk/asimong

What is there to learn about standardization?
http://blogs.cetis.org.uk/asimong/2014/10/24/learning-about-standardization/
Fri, 24 Oct 2014

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular that holds a body of knowledge and experience of many kinds of standardization organisations and processes, as well as approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

But how can we share and pass on that expertise? This question has arisen from time to time during the 12 years I’ve been associated with Cetis, including the last six working from our base in the IEC in Bolton. When Jisc employed us to run Special Interest Groups, meetings, and conferences, and to support their project work, that at least gave us some scope for sharing. The SIGs are sadly long gone, but what about other ways of sharing? What about running some kind of courses? To run courses, we have to address the question of what people might want to learn in our areas of expertise. On a related note, how can we assemble a structured summary even of what we ourselves have learned about this rich and challenging area?

These are my own views about what I sense I have learned and could pass on; but also about the topics where I would think it worthwhile to know more. All of these views are in the context of open standards in learning technology and related areas.

How are standards developed?

A formal answer for formal standards is straightforward enough. But this is only part of the picture. Standards can start life in many ways, from the work of one individual inventing a good way of doing something, through to a large corporation wanting to impose its practice on the rest of the world. It is perhaps more significant to ask …

How do people come up with good and useful standards?

The more one is involved in standardization, the richer and more subtle one’s answer to this becomes. There isn’t one “most effective” process, nor one formula for developing a good standard. But in Cetis, we have developed a keen sense of what is more likely to result in something that is useful. It includes the close involvement of the people who are going to implement the standard – perhaps software developers. Often it is a good idea to develop the specification for a standard hand in hand with its implementation. But there are many other subtleties which could be brought out here. This also raises a further question …

What makes a good and useful standard?

What one comes to recognise with time and experience is that the most effective standards are relatively simple and focused. The more complex a standard is, the less flexible it tends to be. It might be well suited to the precise conditions under which it was developed, but those conditions often change.

There is much research to do on this question, and people in Cetis would provide an excellent knowledge base for this, in the learning technology domain.

What characteristics of people are useful for developing good standards?

Most likely anyone who has been involved in standardization processes will be aware of some people whose contribution is really helpful, and others who seem not to help so much. Standardization works effectively as a consensus process, not as a kind of battle for dominance. So the personal characteristics of people who are effective at standardization are similar to those of people who are good at consensus processes more widely. Obviously, the group of people involved must have a good technical knowledge of their domain, but deep technical knowledge is not always allied to an attitude that is consistent with consensus process.

Can we train, or otherwise develop, these useful characteristics?

One question that really interests me is: to what extent can consensus-friendly attitudes be trained or developed in people? It would be regrettable if part of the answer to good standardization process were simply to exclude unhelpful people. But if this is not to happen, those people would need to be open to changing their attitudes, and we would have to find ways of helping them develop. We might best see this as a kind of “enculturation”, and use sociological knowledge to help understand how it can be done.

After answering that question, we would move on to the more challenging “how can these characteristics be developed?”

How can standardization be most effectively managed?

We don’t have all the answers here. But we do have much experience of the different organisations and processes that have brought out interoperability standards and specifications. Some formal standardization bodies adopt processes that are not open, and we find this quite unhelpful to the management of standardization in our area. Bodies vary in how much they insist that implementation goes hand in hand with specification development.

The people who can give most to a standardization process are often highly valued and short of time. Conversely, those who hinder it most, including the most opinionated, often seem to have plenty of time to spare. To manage the standardization process effectively, this variety of people needs to be allowed for. Ideally, this would involve training in consensus working, as imagined above; until then, sensitive handling of those people requires considerable skill. A supplementary question would be: how does one train people to handle others well?

If people are competent at consensus working, the governance of standardization is less important. Before then, the exact mechanisms for decision making and influence, formal and informal, are significant. This means that the governance of standards organisations is on the agenda for what there is to learn. There is still much to learn here, through suitable research, about how different governance structures affect the standardization process and its outcomes.

Once developed, how are standards best managed?

Many of us have seen the development of a specification or standard, only for it never really to take hold. Other standards are overtaken by events, and lose ground. This is not always a bad thing, of course – it is quite proper for one standard to be displaced by a better one. But sometimes people are not aware of a useful standard at the right time. So, standards not only need keeping up to date, but they may also need to be continually promoted.

As well as promotion, there is the more straightforward maintenance and development. Web sites with information about the standard need maintaining, and there is often the possibility of small enhancements to a standard, such as reframing it in terms of a new technology – for instance, a newly popular language.

And talking of languages, there is also dissemination through translation. That’s one thing that working in a European context keeps high in one’s mind.

I’ve written before about management of learning technology standardization in Europe and about developments in TC353, the committee responsible for ICT in learning, education and training.

And how could a relevant qualification and course be developed?

There are several other questions whose answers would be relevant to motivating or setting up a course. Maybe some of my colleagues or readers have answers. If so, please comment!

  • As a motivation for development, how can we measure the economic value of standards, to companies and to the wider economy? There must be existing research on this question, but I am not familiar with it.
  • What might be the market for such courses? Which individuals would be motivated enough to devote their time, and what organisations (including governmental) would have an incentive to finance such courses?
  • Where might such courses fit? Perhaps as part of a technology MSc/MBA in a leading HE institution or business school?
  • How would we develop a curriculum, including practical experience?
  • How could we write good intended learning outcomes?
  • How would teaching and learning be arranged?
  • Who would be our target learners?
  • How would the course outcomes be assessed?
  • Would people with such a qualification be of value to standards developing organisations, or elsewhere?

I would welcome approaches to collaboration in developing any learning opportunity in this space.

And more widely

Looking again at these questions, I wonder whether there is something more general to grasp. Try reading them over, substituting for “standard” other terms such as “agreement”, “law”, “norm” (which already has a dual meaning), “code of conduct”, “code of practice”, or “policy”. Many considerations about standards seem to touch these other concepts as well. All of them could perhaps be seen as formulations or expressions, guiding or governing interaction between people.

And if there is much common ground between the development of all of these kinds of formulation, then learning about standardization might well be adapted to develop knowledge, skills, competence, attitudes and values that are useful in many walks of life, but particularly in the emerging economy of open co-operation and collaboration on the commons.

The logic of competence assessability
http://blogs.cetis.org.uk/asimong/2011/08/31/competence-assessability/
Wed, 31 Aug 2011

(17th in my logic of competence series)

The discussion of NOS in the previous post clearly implicated assessability. Actually, assessment has been on the agenda right from the start of this series: claims and requirements are for someone “good” for a job or role. How do we assess what is “good” as opposed to “poor”? The logic of competence partly relies on the logic of assessability, so the topic deserves a closer look.

“Assessability” isn’t a common word: by it I mean, as one might expect, the quality of being assessable. Here, this applies to competence concept definitions. Given a definition of a skill or competence, will people be able to use that definition to consistently assess the extent to which an individual has that skill or competence? If so, the definition is assessable. Particular assessment methods are usually designed to be consistent and repeatable, but in all the cases I can think of, a particular assessment procedure implies the existence of a quality that could potentially be assessed in other ways. So “assessability” doesn’t necessarily mean that one particular assessment method has been defined, but rather that reliable assessment methods can be envisaged.

The contrast between outcomes and behaviours / procedures

One of the key things I learned from discussion with Geoff Carroll was the importance to many people of seeing competence in terms of assessable outcomes. The NOS Guide mentioned in the previous post says, among other things, that “the Key Purpose statement must point clearly to an outcome” and “each Main Function should point to a clear outcome that is valued in employment.” This is contrasted with “behaviours” — some employers “feel it is important to describe the general ways in which individuals go about achieving the outcomes”.

How much emphasis is put on outcomes, and how much on what the NOS Guide calls behaviours, depends largely on the job, and should determine the nature of the “performance criteria” written in a related standard. Moreover, I think that this distinction between “outcomes” and “behaviours” is quite close to the very general distinction between “ends” and “means” that crops up as a general philosophical topic. To illustrate this, I’ll try giving two example jobs that differ greatly along this dimension: writing commercial pop songs; and flying commercial aeroplanes.

You could write outcome standards for a pop songwriter in terms of the song sales. It is very clear when a song reaches “the charts”, but how and why it gets there are much less clear. What is perhaps more clear is that the large majority of attempts to write pop songs result in — well — very limited success (i.e. failure). And although there are some websites that give e.g. Shortcuts to Hit Songwriting (126 Proven Techniques for Writing Songs That Sell), or How to Write a Song, other commentators e.g. in the Guardian are less optimistic: “So how do you write a classic hit? The only thing everyone agrees on is this: nobody has a bloody clue.”

The essence here is that the “hit” outcome is achieved, if it is achieved at all, through means that are highly individual. It seems unlikely that any standards setting organisation will write an NOS for writing hit pop songs. (On the other hand, some of the composition skills that underlie this could well be the subject of standards.)

Contrast this with flying commercial aeroplanes. The vast majority of flights are carried out successfully — indeed, flight safety is remarkable in many ways. Would you want your pilot to “do their own thing”, or try out different techniques for piloting your flight? A great deal of basic competence in flying is accuracy and reliability in following set procedures. (Surely set procedures are essentially the same kind of thing as behaviours?) There is a lot of compliance, checking and cross-checking, and little scope for creativity. Again it is interesting to note that there don’t seem to be any NOSs for airline pilots. (There are for ground and cabin staff, maintained by GoSkills. In the “National Occupational Standards For Aviation Operations on the Ground, Unit 42 – Maintain the separation of aircraft on or near the ground”, out of 20 performance requirements, no fewer than 11 start “Make sure that…”. Following procedures is explicitly a large part of other related NOSs.)

However, it is clear that there are better and worse pop songwriters, and better and worse pilots. One should be able to write some competence definitions in each case that are assessable, even if they might not be worth making into NOSs.

What about educational parallels for these, given that most school performance is assessed? Perhaps we could think of poetry writing and mathematics. Probably much of what is good in poetry writing is down to individual inspiration and creativity, tempered by some conventional rules. On the other hand, much of what is good in mathematics is the ability to remember and follow the appropriate procedures for the appropriate cases. Poetry, closely related to songwriting, is mainly to do with outcomes, and not procedures — ends, not means; mathematics, closer to airline piloting, is mainly to do with procedures, with the outcome pretty well assured as long as you follow the appropriate procedure correctly.

Both extremes of this “outcome” and “procedure” spectrum are assessable, but they are assessable in different ways, with different characteristics.

  1. Outcome-focused assessment (getting results, main effects, “ends”) allows variation in the component parts that are not standardised. What may be specified are the incidental constraints, or what to avoid.
  2. Assessment of procedures and conformance to constraints (how to do it properly, “means”, known procedures that minimise bad side effects) tends to allow little variability in the component procedural parts. As well as airline pilots, we may think of train drivers, power plant supervisors, and captains of ships. (A sketch contrasting the two styles follows this list.)
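To make the contrast concrete, here is a minimal sketch in Python; every level name, threshold and procedure step is invented for illustration. Outcome-focused assessment grades a measurable result against defined levels, while procedure-focused assessment is a binary check of conformance.

    # Hypothetical sketch of the two assessment styles; all thresholds,
    # level names and required steps are invented examples.

    def assess_outcome(songs_sold: int) -> str:
        """Outcome-focused: grade a quantitative result against levels."""
        levels = [(1_000_000, "very successful"),
                  (100_000, "fairly successful"),
                  (10_000, "marginally successful")]
        for threshold, label in levels:
            if songs_sold >= threshold:
                return label
        return "not yet successful"

    def assess_procedure(steps_followed: set) -> bool:
        """Procedure-focused: binary conformance to a prescribed checklist."""
        required = {"pre-flight checks", "cross-check instruments",
                    "maintain separation on the ground"}
        return required <= steps_followed  # all rules followed, or not

    print(assess_outcome(250_000))                  # fairly successful
    print(assess_procedure({"pre-flight checks"}))  # False

Note how the outcome function naturally yields levels, while the procedural one yields only “achieved” or “not yet achieved” — an asymmetry that the NOS puzzle below turns on.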

Of course, there is a spectrum between these extremes, with no clear boundary. Where the core is procedural conformance, handling unexpected problems may also feature (often trained through simulators). Coolness under pressure is vital, and could be assessed. We also have to face the philosophical point that someone’s ends may be another’s means, and vice versa. Only the most menial of means cannot be treated as an end, and only the greatest ends cannot be treated as a means to a greater end.

Outcomes are often quantitative in nature. The pop song example is clear — measures of songs sold (or downloaded, etc.) allow songwriters to be graded into some level scheme like “very successful”, “fairly successful”, “marginally successful” (or whatever levels you might want to establish). There is no obvious cut-off point for whether you are successful as a hit songwriter, and that invites people to define their own levels. On the other hand, conformance to defined procedures looks pretty rigid by comparison. Either you followed the rules or you didn’t. It’s all too clear when a passenger aeroplane crashes.

But here’s a puzzle for National Occupational Standards. According to the Guide, NOSs are meant to be to do with outcomes, and yet they admit no levels. If they acknowledged that they were about procedures, perhaps together with avoiding negative outcomes, then I could see how levels would be unimportant. And if they allowed levels, rather than being just “achieved” or “not yet achieved”, I could see how they would cover all sorts of outcomes nicely. What are we to do about outcomes that clearly do admit of levels, as do many of the more complex kinds of competence?

The apparent paradox is that NOSs deny the kind of level system that would allow them properly to express the kind of outcomes that they aspire to representing. But maybe it’s no paradox after all. It seems reasonable that NOSs actually just describe the known standards people need to reach to function effectively in certain kinds of roles. That standard is a level in itself. Under that reading, it would make little sense for a NOS to be subject to different levels, as it would imply that the level of competence for a particular role is unknown — and in that case it wouldn’t be a standard.

Assessing less assessable concepts

Having discussed assessable competence concepts from one extreme to the other, what about less assessable concepts? We are mostly familiar with the kinds of general headings for abilities that you get with PDP (personal/professional development planning) like teamwork, communication skills, numeracy, ICT skills, etc. You can only assess a person as having or not having a vague concept like “communication skills” after detailing what you include within your definition. With a competence such as the ability to manage a business, you can either assess it in terms of measurable outcomes valued by you (e.g. the business is making a profit, has grown — both binary — or perhaps some quantitative figure relating to the increase in shareholder value, or a quantified environmental impact) or in terms of a set of abilities that you consider make up the particular style of management you are interested in.

These less assessable concepts are surely useful as headings for gathering evidence about what we have done, and what kinds of skills and competences we have practised, which might be useful in work or other situations. It seems to me that they can be made more assessable in any of several ways (illustrated in the sketch below).

  1. Detailing assessable component parts of the concept, in the manner of NOSs.
  2. Defining levels for the concept, where each level definition gives more assessable detail, or criteria.
  3. Defining variants for the concept, each of which is either assessable, or broken down further into assessable component parts.
  4. Using a generic level framework to supply assessable criteria to add to the concept.

Following this last possibility, there is nothing to stop a framework from defining generic levels as a shorthand for what needs to be covered at any particular level of any competence. While NOSs don’t have to define levels explicitly, it is still potentially useful to be able to have levels in a wider framework of competence.
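As a purely illustrative data sketch (the concept, URIs, components and level criteria are all invented), here is how routes 1 and 2 above might give assessable detail to a vague concept, with route 4’s generic levels referenced by URI rather than spelled out:

    # Hypothetical sketch: all names, URIs and criteria invented.
    communication_skills = {
        "uri": "http://example.org/concepts/communication-skills",
        "label": "communication skills",   # too vague to assess directly
        # Route 1: assessable component parts, in the manner of NOSs
        "components": [
            "structure a written report for a stated audience",
            "summarise a group discussion accurately in five minutes",
        ],
        # Route 2: levels, each giving more assessable criteria
        "levels": {
            1: "conveys routine information clearly in familiar settings",
            2: "adapts register and medium to unfamiliar audiences",
        },
        # Route 4: point to a generic level framework instead
        "generic_levels": "http://example.org/frameworks/generic-levels",
    }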

[added 2011-09-04] Note that generic levels designed to add assessability to a general concept may not themselves be assessable without the general concept.

Assessability and values in everyday life

Defined concepts, standards, and frameworks are fine for established employers in established industries, who may be familiar with and use them, but what about for other contexts? I happen to be looking for a builder right now, and while my general requirements are common enough, the details may not be. In the “foreground”, so to speak, like everyone else, I want a “good” quality job done within a competitive time interval and budget. Maybe I could accept that the competence I require could be described in terms of NOSs, while price and availability are to do with the market, not competence per se. But when it comes to more “background” considerations, it is less clear. How do I rate experience? Well, what does experience bring? I suspect that experience is to do with learning the lessons that are not internalised in an educational or training setting. Perhaps experience is partly about learning to avoid “mistakes”. But what counts as a mistake depends on one’s values. Individuals differ in the degree to which they are happy with “bending rules” or “cutting corners”. With experience, some people learn to bend rules less detectably, others learn more personal and professional integrity. If someone’s values agree with mine, I am more likely to find them pleasant to work with.

There’s a long discussion here, which I won’t go into deeply, involving professional associations, codes of conduct and ethics, morality, social responsibility and so on. It may be possible to build some of these into performance criteria, but opinions are likely to differ. Where a standard talks about procedural conformance, it can sometimes be framed as knowing established procedures and then following them. A generic competence at handling clients might include the ability to find out what the client’s values are, and to go along with those to the extent that they are compatible with one’s own values. Where they aren’t, a skill in turning away work needs to be exercised in order to achieve personal integrity.

Conclusions

It’s all clearly a complex topic, more complex indeed than I had reckoned back last November. But I’d like to summarise what I take forward from this consideration of assessability.

  1. Less assessable concepts can be made more assessable by detailing them in any of several ways (see above).
  2. Goals, ends, aims, outcomes can be assessed, but say little about constraints, mistakes, or avoiding occasional problems. In common usage, outcomes (particularly quantitative ones) may often have levels.
  3. Means, procedures, behaviours, etc. can be assessed in terms of (binary) conformity to a prescribed pattern, but may not imply outcomes (though constraints can sometimes be formulated as avoidance outcomes).
  4. In real life we want to allow realistic competence structures with any of these features.

In the next post, I’ll take all these extra considerations forward into the question of how to represent competence structures, partly through discussing more about what levels are, along with how to represent them. Being clear about how to represent levels will leave us also clearer about how to represent the less precise, non-assessable concepts.

E-portfolio Scotland
http://blogs.cetis.org.uk/asimong/2010/09/10/e-portfolio-scotland/
Fri, 10 Sep 2010

The Scottish e-portfolio scene seems to have comparatively many colleges, many of which use or are interested in Mahara. It may be even more promising than England for exploring company e-portfolio use, and we should try to ensure Scots are represented in any work on skills frameworks for e-portfolio tools.

That was the most interesting conclusion for me in a generally interesting day conference, e-Portfolio Scotland at Queen Margaret University on Friday (2010-09-10). I was given a plenary spot about Leap2A, and the audience responded well to my participative overtures — which is where I gathered this valuable information — and asked some intelligent questions. Mahara and PebblePad are well used, with Blackboard’s offering less so. Reassuringly, Leap2A came up in the presentations and demonstrations of Mahara and PebblePad, and in the final plenary by Gordon Joyes, so the audience would not be in doubt about how central Leap2A is. (We just have to carry on following through and delivering!)

It was interesting to meet so many new faces. Apart from Gordon, there was Derrin Kent, and Susi Peacock on her home ground, but I didn’t know any of the others well. There seemed to be a roughly even split between HE and FE, with a very few from professions and schools. Perhaps I ought to spend more e-portfolio time in Scotland…

The vendors present included Calibrand, whom I first met at the EIfEL conference this summer, and Taskstream, who have been represented at many e-portfolio conferences over several years. I suggested to the latter that they really need to take on Leap2A to get further into the UK market. A Manchester-based company, OneFile, sells a “Portfolio Assessment Solution” that I had not come across at all before, and their location has obvious potential for future discussion. But perhaps the most interesting vendor there, also giving a presentation, was Linda Steedman, MD of eCom Scotland. Their company has grown beyond being a micro-business and offers an “Enterprise Skills Management” tool called SkillsLocker. I was impressed by her presentation, ranging across accreditation of prior learning, work-based learning, and what is now fashionably called “talent management” rather than HR. It seems they are well connected, with AlphaPlus among others; also that they have done some valuable work cross-mapping different skill definitions — I intend to follow this up.

Though such commercially-based concerns are perhaps not quite so central to JISC as those in the HE sector, we still need to find some way of supporting their adoption of Leap2A-friendly portfolio tools. Work- and skills-based learning and training is a natural successor to HE-based PDP and skills development, and we really need to link in to it to make HE portfolio use more universally motivating.

One big remaining challenge was broadly acknowledged: dealing with these skill and competence representation issues that we do have on our agenda. The vision I was putting around, with no dissenting voices, was to decouple portfolio tools from any particular skills framework, and to have the frameworks published with proper URIs (in good Linked Data style). Then any tool should be able to work with any skills framework, and Leap2A information would include the relevant URIs. Though there remains the problem with HE that they tend to define skills at a different level to industry demands, FE is comparatively much closer to their employers, and they have common reference points in National Occupational Standards. So, among other things, any help we can get to persuade Sector Skills Councils to give proper URIs and structure to their NOSs will be most welcome, and maybe the Scottish e-portfolio community can help with this, and with defining the needed structures?
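As a rough sketch of that vision (the URIs and field names below are invented, and this is not actual Leap2A syntax, which is Atom-based XML): a framework publishes each skill definition at a stable URI, and a portfolio record simply carries the URI, so any tool can work with any framework.

    # Illustrative sketch only: invented URIs and field names; not Leap2A syntax.

    # A skills framework published in Linked Data style: one URI per definition.
    framework = {
        "http://example.org/nos/aviation/unit42": {
            "label": "Maintain the separation of aircraft on or near the ground",
            "framework": "http://example.org/nos/aviation",
        },
    }

    # A portfolio "ability" record then needs only to reference the URI; the
    # receiving tool can dereference it to interpret the claim.
    ability_record = {
        "type": "ability",
        "definition": "http://example.org/nos/aviation/unit42",
        "evidence": ["http://example.org/portfolio/jo/entries/17"],
    }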

ICOPER and outcomes
http://blogs.cetis.org.uk/asimong/2010/02/04/icoper-and-outcomes/
Thu, 04 Feb 2010

The other European project I’m involved in for CETIS is called ICOPER. Over the last couple of weeks I’ve been doing some work improving the deliverable D2.2, mainly working with Jad Najjar. I flag it here because it uses some of the conceptual modelling work I’ve been involved in. My main direct contribution is Section 2. This starts with part of an adaptation of my diagram in a recent post here. It is adapted by removing the part on the right, for recognition, as that is of relatively minor importance to ICOPER. As ICOPER is focused on outcomes, the “desired pattern” is relabelled as “intended learning outcome or other objective”. I thought this time it would be clearer without the groupings of learning opportunity or assessment. And ICOPER is not really concerned with reflection by individuals, so that is omitted as well.

In explaining the diagram, I set out what the different colours represent. I’m still waiting for critique (or reasoned support, for that matter) of the types of thing I find so helpful in conceptual modelling (again, see the previous post on this).

As I find so often, detailed thinking for any particular purpose has clarified one part of the diagram. I have introduced (and will bring back into the mainstream of my modelling) an “assessment result pattern”. I recognise that logically you cannot specify actual results as pre-requisites for opportunities, but rather patterns, such as “pass” or “at least 80%” for particular assessments. It takes a selection process (which I haven’t represented explicitly anywhere yet) to compare actual results with the required result pattern.
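A minimal sketch of that selection process (the names, fields and patterns are invented for illustration): prerequisites for an opportunity are stated as result patterns, and a separate check compares a learner’s actual results against them.

    # Hypothetical sketch of the "assessment result pattern" idea: actual
    # results are compared with required patterns such as "pass" or
    # "at least 80%". All names and fields are invented.

    def matches(actual: dict, pattern: dict) -> bool:
        """Does an actual assessment result satisfy a required pattern?"""
        if actual["assessment"] != pattern["assessment"]:
            return False
        if "min_score" in pattern:
            return actual["score"] >= pattern["min_score"]
        return actual.get("grade") == pattern.get("grade")

    result = {"assessment": "Module 1 exam", "score": 85}
    prerequisite = {"assessment": "Module 1 exam", "min_score": 80}  # "at least 80%"
    print(matches(result, prerequisite))  # True: prerequisite satisfied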

Overall, Section 2 of the deliverable explains quite a lot about the part of the overall conceptual model that is intended to reflect, at least approximately, the point of view of ICOPER. The title of this deliverable, “Model for describing learning needs and learning opportunities taking context ontology modelling into account”, was perhaps not what would have been chosen at the time of writing, but we needed to write to satisfy that title. Here, “learning needs” is understood as intended learning outcomes, which is not difficult to cover, as it is central to ICOPER.

The deliverable as a whole continues with a review of MLO, the prospective European Standard on Metadata for Learning Opportunities (Advertising), to cover the “learning opportunities” aspect. Then it goes on to suggest an information model for “Learning Outcome Definitions”. This is a tricky one, as one cannot really avoid IMS RDCEO and IEEE RCD. As I’ve argued in the past, I don’t think these are really substantially more helpful than just using Dublin Core, and in a way the ICOPER work here implicitly recognises this: even though they still doff a cap to those two specs, most of RDCEO is “profiled” away, and instead a “knowledge / skill / competence” category is added, to square with the concepts as described in the EQF.
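For illustration only (the field names below are my own invention, not taken from the deliverable, RDCEO or RCD): such a pared-down Learning Outcome Definition might carry little more than Dublin Core-style fields plus the EQF-derived category.

    # Hypothetical sketch: invented field names, not the D2.2 information model.
    learning_outcome_definition = {
        "identifier": "http://example.org/outcomes/conceptual-modelling-basics",
        "title": "Construct a basic conceptual model of a domain",
        "description": "Can identify the entities and relationships involved.",
        "category": "skill",  # EQF-style addition: knowledge | skill | competence
    }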

Perhaps the other really interesting part of the deliverable was one into which we put quite a lot of joint thinking. Jad came up with the title “Personal Achieved Learning Outcomes” (PALO), which is fine for what is intended to be covered here. What we have come up with (provisionally, it must be emphasised) is a very interesting mixture of parts that correspond to the overall conceptual model, with the addition of the kind of detail needed to turn a conceptual model into an information or data model. Again, not surprisingly, this raises some interesting questions for the overall conceptual model. How does the concept of achievement (in this deliverable) relate to the overall model’s “personal claim expression”? This “PALO” model is a good effort towards something that I haven’t personally written much about – how do you represent context in a helpful way for intended learning outcomes or competences? If you’re interested, see what you think. For most skills and competences, one can imagine several aspects of context that are really meaningful, and without which descriptions would definitely lose something. Can you do it better?

I hope I’ve written enough to stimulate a few people at least to skim through that deliverable D2.2.

Development of a conceptual model 5
http://blogs.cetis.org.uk/asimong/2009/12/11/development-of-a-conceptual-model-5/
Fri, 11 Dec 2009

This conceptual model now includes basic ideas about what goes on in the individual, plus some of the most important concepts for PDP and e-portfolio use, as well as the generalised formalisable concepts and processes surrounding individual action. It has come a long way since the last time I wrote about it.

The minimised version is here, first… (recommended to view the images below separately, perhaps with a right-click)

[image: eurolmcm25-min3 (minimised version of the concept map)]

and that is complex enough, with so many relationship links looking like a bizarre and distorted spider’s web. Now for the full version, which is quite scarily complex now…

[image: eurolmcm25 (full version of the concept map)]

Perhaps that is the inevitable way things happen. One thinks some more. One talks to some more people. The model grows, develops, expands. The parts connected to “placement processes” were stimulated by Luk Vervenne’s contribution to the workshop in Berlin mentioned in my previous blog entry. But — and I find it hard to escape from this — much of the development is based on internal logic, and just looking at it from different points of view.

It still makes sense to me, of course, because I’ve been with it through its growth and development. But is there any point in putting such a complex structure up on my blog? I do not know. It’s reached the stage where perhaps it needs turning into a paper-length exposition, particularly including all the explanatory notes that you can see if you use CmapTools, and breaking it down into more digestible, manageable parts. I’ve put the CXL file and a PDF version up on my own concept maps page. I can only hope that some people will find this interesting enough to look carefully at some of the detail, and comment… (please!) If you’re really interested, get in touch to talk things over with me. But the thinking will in any case surface in other places. And I’ll link from here later if I do a version with comments that is easier to get at.

Development of a conceptual model 4
http://blogs.cetis.org.uk/asimong/2009/10/13/development-of-a-conceptual-model-4/
Tue, 13 Oct 2009

This version of the conceptual model (of learning opportunity provision + assessment + award of credit or qualification) uses the CmapTools facility for grouping nodes, and it further extends the use of my own “top ontology” (introduced in my book).

There are now two diagrams: a contracted and an expanded version. When you use CmapTools, you can click on the << or >> symbols, and the attached box will expand to reveal the detail, or contract to hide it. This grouping was suggested by several people in discussion, particularly Christian Stracke. Let’s look at the two diagrams first, then go on to draw out the other points.

[image: eurolmcm13-contracted1 (contracted version of the concept map)]

You can’t fail to notice that this is remarkably simpler than the previous version. What is important is to note the terms chosen for the groupings. It is vital to the communicative effectiveness of the pair of diagrams that the term for the grouping represents the things contained by the grouping, and in the top case — “learning opportunity provision” — it was Cleo Sgouropoulou who helped find that term. Most of the links seem to work OK with these groupings, though some are inevitably less than fully clear. So, on to the full, expanded diagram…

[image: eurolmcm13-expanded1 (expanded version of the concept map)]

I was favourably impressed with the way in which CmapTools allows grouping to be done, and how the tools work.

Mainly the same things are there as in the previous version. The only change is that, instead of having one blob for qualification and one for credit value, both have been split into two. This followed on from being uncomfortable with the previous position of “qualification”, where it appeared that the same thing was both wanted (or led to) and awarded. It is, I suggest, much clearer to distinguish the repeatable pattern — that is, the form of the qualification, represented by its title and generic properties — from the particular qualification awarded to a particular learner on a particular date. I originally came to this clear distinction, between patterns and expressions, in my book, when trying to build a firmer basis for the typology of information represented in e-portfolio systems. But in any case, I am now working on a separate web page to try to explain it more clearly. When done, I’ll post that here on my blog.

A pattern, like a concept, can apply to many different things, at least in principle. Most of the documentation surrounding courses, assessment, and the definitions about qualifications and credit, are essentially repeatable patterns. But in contrast, an assessment result, like a qualification or credit awarded, is in effect an expression, relating one of those patterns to a particular individual learner at a particular time. They are quite different kinds of thing, and much confusion may be caused by failing to distinguish which one is talking about, particularly when discussing things like qualifications.
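A minimal sketch of the distinction, with invented names: the pattern is the repeatable definition, while the expression relates a pattern to a particular learner at a particular time.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical sketch of the pattern/expression distinction; names invented.

    @dataclass
    class QualificationPattern:       # repeatable: title and generic properties
        title: str
        awarding_body: str

    @dataclass
    class QualificationAwarded:       # expression: pattern + learner + date
        pattern: QualificationPattern
        learner: str
        awarded_on: date

    bsc = QualificationPattern("BSc (Hons) Physics", "Example University")
    award = QualificationAwarded(bsc, "A. Learner", date(2009, 7, 1))

The same split applies to credit: a credit value defined for a course is a pattern; credit awarded to a learner is an expression.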

These distinctions between types of thing at the most generic level are what I am trying to represent with the colour and shape scheme in these diagrams. You could call it my “top ontology” if you like, and I hope it is useful.

CmapTools is available free. It has been a great tool for me, as I don’t often get round to diagrams, but CmapTools makes it easy to draw the kinds of models I want to draw. If you have it, you might like to try finding and downloading the actual maps, which you can then play with. Of course, there is only one map, not two; I have put it in both forms on the ICOPER Cmap server, and also directly in CXL form on my own site. If you do, you will see all the explanatory comments I have made on the nodes. Please feel free to send me back any elaborations you create.

Skills frameworks, interoperability, portfolios, etc.
http://blogs.cetis.org.uk/asimong/2009/04/20/skills-frameworks-portfolios/
Mon, 20 Apr 2009

Last Thursday (2009-04-16) I went to a very interesting meeting in Leeds, specially arranged, at the Leeds Institute of Medical Education, between various interested parties, about their needs and ideas for interoperability with e-portfolio tools – but also about skills frameworks.

It was particularly interesting because it showed more evidence of a groundswell of willingness to work towards e-portfolio interoperability, and this has two aspects for the people gathered (six including me). On the one hand, the ALPS CETL is working with MyKnowledgeMap (MKM) – a small commercial learning technology vendor based in York – on a project involving health and social care students in their 5 HEIs around Leeds. They are using the MKM portfolio tool, Multi-Port, but are aware of a need to have records which are portable between their system and others. It looks like being a fairly straightforward case of a vendor with a portfolio tool being drawn into the LEAP2A fold on the back of the success we have had so far – without the need for extra funding. The outcome should be a classic interoperability win-win: learners will be able to export their records to PebblePad, Mahara, etc., and the MKM tool users will be able to import their records from the LEAP2A-implementing systems to kick-start their portfolio records with the ALPS CETL or other MKM sites.

MKM tools, as the MKM name suggests, do cover the representation of skills frameworks, and this forms a bridge between the two threads of this meeting: first, the ALPS CETL work, and second, the more challenging area of medical education, where frameworks – of knowledge, skill or competence – abound, and are pretty important for medical students and in the professional development of medical practitioners and health professionals more generally.

In this more challenging side of the meeting, we discussed some of the issues surrounding skills frameworks in medical education – including the transfer of students at undergraduate level; the transfer between a medical school like Leeds and a teaching hospital, where the doctors may well soon be using the NHS Foundation Year e-portfolio tools in conjunction with their further training and development; and then on to professional life.

The development of LEAP2A has probably been helped greatly by not trying to do too much all at once. We haven’t yet fully dealt with how to integrate skills frameworks into e-portfolio information. At one very simple level we have covered it – if each skill definition has a URI, that can be referred to by an “ability” item in the LEAP2A. But at another level it is greatly challenging. Here in medical education we have not one, but several real-life scenarios calling for interoperable skills frameworks for use with portfolio tools. So how are we actually going to advise the people who want to create skills frameworks, about how to do this in a useful way? Their users, using their portfolio tools, want to carry forward the learning (against learning outcomes) and evidence (of competence) to another setting. They want the information to be ready to use, to save them repetition – potentially wasteful to the institution as well as the learner.
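To illustrate the simple level at which this is already covered (a hypothetical sketch, not real LEAP2A syntax, which is Atom-based): once each skill definition has a URI, a receiving system can pick out the ability items relevant to its own framework just by matching URIs, so the learner’s evidence carries forward without repetition.

    # Hypothetical sketch, not LEAP2A syntax: shared skill-definition URIs
    # let ability items be reused when a learner moves between settings.

    portfolio_items = [
        {"type": "ability",
         "definition": "http://example.org/med-school/framework/clinical-exam",
         "evidence": ["entry-42"]},
        {"type": "ability",
         "definition": "http://example.org/other/unrelated-skill",
         "evidence": ["entry-7"]},
    ]

    def reusable_items(items, framework_base):
        """Select the ability items a receiving system can interpret."""
        return [i for i in items
                if i["type"] == "ability"
                and i["definition"].startswith(framework_base)]

    print(reusable_items(portfolio_items, "http://example.org/med-school/framework/"))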

The answer necessarily goes beyond portfolio technology, and needs to tackle the issues which several people are currently working on: European projects like TENCompetence and ICOPER, where I have given presentations or written papers; older JISC project work I have been involved with (ioNW2, SPWS); and now the recently set up CETIS team on competences.

Happily, it seems like we are all pushing at an open door. I am happy to be able to respond in my role as Learning Technology Advisor for e-portfolio technology, and point MKM towards the documentation on – and those with experience of implementing – LEAP2A. And the new competence team has been looking for a good prompt to hold an initial meeting. I imagine we might hold a meeting, perhaps around the beginning of July, focused on frameworks of skills, competence and knowledge, and their use together with curriculum learning outcomes, with assessment criteria, and with portfolio evidence. The Leeds people would be very willing to contribute. Then, perhaps JISC might offer a little extra funding (on the same lines as previous PIOP and XCRI projects) to get together a group of medical educators to implement LEAP2A and related skills frameworks together – in whatever way we all agree is good to take the skills framework developments forward.

Representing, defining and using ability, competency and similar concepts
http://blogs.cetis.org.uk/asimong/2009/01/04/representing-ability/
Sun, 04 Jan 2009

I’ve been telling people, for quite a while, that I will write up suggestions for how to deal with abilities (competence, competencies, knowledge, etc.) for many reasons, including particularly e-portfolio related uses. Well, here are my current ideas for the New Year.

They are expressed in a set of linked pages, each dealing with a facet of the issues. The pages are very much draft ideas, but serve the purpose of getting out the basic ideas and inviting other ideas for improvement. If you have an interest, please do at least leave a comment here, or e-mail me with ideas and suggestions.

GMSA advance
http://blogs.cetis.org.uk/asimong/2008/05/17/gmsa-advance/
Sat, 17 May 2008

As I’ve been involved with GMSA in various ways, including through the ioNW2 project, I went to their seminar on 14th May introducing GMSA Advance. This is to do with providing bite-sized modules of Higher Education, mainly for people at work, and giving awards (including degrees) on that basis – picking up some of the “Leitch” agenda. As I suspected, it was of interest from a portfolio perspective among others.

I’ll start with the portfolio- and PDP-related issues.

The first issue is award coherence. If you put together an award from self-chosen small chunks of learning (“eclectic”, one could call it), there is always an issue of whether that award represents anything coherent. Awarding bodies, including HEIs, may not think it right to give an award for what looks like a random collection of learning. Having awarding bodies themselves define what counts as coherent risks being too restrictive. An awarding body might insist on things which were not relevant to the learner’s workplace, or that had been covered outside the award framework. On the other hand, employers might not understand about academic coherence at all. A possible solution that strikes me and others is to:

  • have the learner explain the coherence of modules chosen
  • assess that explanation as part of the award requirement.

This explanation of coherence needs to make sense to a variety of people as well as the learner, in particular, to academics and to employers. It invites a portfolio-style approach: the learner is supported through a process of constructing the explanation, and it is presented as a portfolio with links to further information and evidence. One could imagine, for example, a video interview with the learner’s employer as useful extra evidence.

A second issue is the currency and validity of “credit”. Now I have a history of scepticism about credit frameworks and credit transfer, though the above idea of an assessed explanation of award coherence at last brings a ray of light into the gloom. My issue has always been that, to be meaningful, awards should be competence-based, not credit-based. And I still maintain that the abilities claimed by someone at the end of a course, suitably validated by the awarding body, should be a key part of the official records of assessment (indeed, part of the “Higher Education Achievement Report” of the Burgess Group – report downloadable as PDF).

One of the key questions for these “eclectic” awards is whether credit should have a limited lifetime. Whether credit should expire surely should depend on what credit is trying to represent. It is just the skills, abilities or competences whose validation needs to expire – this is increasingly being seen in the requirement for professional revalidation. And the expiry of validation itself needs to be based on evidence – bicycle riding and swimming tend to be skills that are learned once for ever; language skills fall off only slowly; but the knowledge of the latest techniques in a leading edge discipline may be lost very quickly.

This is a clear issue for portfolios that present skills. The people with those portfolios need to be aware of the perceived value of old evidence, and to be prepared to back up old formal evidence with more recent, if less formal, additional evidence of currency. We could potentially take that approach back into the GMSA Advance awards, though there would be many details to figure out, and issues would overlap with accreditation of prior learning.

Other issues at the seminar were not to do with portfolios. There is the question of how to badge such awards. CPD? Several of those attending thought not – “CPD” is often associated with unvalidated personal learning, or even just attendance at events. As an alternative, I rather like the constructive ambiguity of the phrase “employed learning” – it would be both the learners and the learning that are employed – so that is my suggestion for inclusion in award titles.

Another big issue is funding. Current policy is for no government funding to be given for people studying for awards of equal or lower level than one they have already achieved. The trouble is that if each module itself carries an award, then work-based learners couldn’t be funded for this series of bite-sized modules, but only one. The issue is recognised, but not solved. A good idea that was suggested at the seminar is to change and clarify the meaning of credit, so that it takes on the role of measuring public fundability of learning. Learners could have a lifetime learning credit allowance, that they could spend as they preferred. Actually, I think even better would be a kind of “sabbatical” system where one’s study credit allowance continued to build, to allow for retraining. Maybe one year’s (part time?) study credit would be fundable for each (say) 7 years of life – or maybe 7 years of tax-paying work?

So, as you can see, it was a thought-provoking and stimulating seminar.

Intellectual heritage tracing
http://blogs.cetis.org.uk/asimong/2008/02/08/intellectual-heritage-tracing/
Fri, 08 Feb 2008

I’ve only been hearing and thinking about plagiarism in the last few days – since going to the Assessment Think Tank in York, in fact – but since then I have been reading about it in many places. One of the debated ideas is encouraging students to use plagiarism detection services. Another, heard at York, is that the more adventurous students run more risk. Why? In some subjects (say Philosophy) it is unlikely that anyone will come up with entirely novel ideas, so if a student has an idea which was not represented on the reading list, they are less likely to know if someone has had it before, and thus more likely to be judged to have plagiarised – to have passed off as theirs ideas which actually came from someone else. They may not have known that, but they can’t prove it.

Those two ideas together spark off a bigger idea.

Sophisticated plagiarism detection services could be rebranded to be thought of as tracing the intellectual heritage of a piece of work. That would be very useful – I could write some thoughts down, submit them, and be returned a list of similar ideas, along with how my ideas relate to theirs (according to the software, which is not of course going to be perfect). Then I could look up the originals, and work them in properly: paraphrase and reference, for example. It would also be a powerful self-critical tool: instead of simply imagining the objections to one’s own supposedly new idea, one could see how others have argued against similar ideas in the past.

Incredibly useful in the field of patenting, as well, I would guess…

Have the anti-plagiarism people got on to patents yet? I’ll ask.
