Doing XML semantically

When looking at XML specifications, first look for what are the resources, or objects, or entities. When you have one of these contained in another, ask, what is their relationship? That will help inform a sensible version of the XML spec, if you really must have one.

Didn’t I do well getting the core ideas into less than however many words? OK, now for the full version…

Yesterday we (Scott and I) were visited by Karim Derrick of TAG Learning. Karim and TAG are championing a BSI initiative, scheduled to be BS 8518, for the transfer of assessment data – particularly focused on coursework. They are being generous: they are doing the development work, based on their own and their clients’ needs, and handing it over to BSI for standardisation, so that all can benefit.

One of the things that we are keen on in CETIS is doing standards and specifications in a sensible way. We have long had a strong line in discouraging people from doing ill-advised things (perhaps a bit like the supposed Google message of not being evil), but I'm not very well adapted for that, so I welcome the complementary approach of positively encouraging people to do sensible things, which I think is gaining strength in CETIS. The inherent challenge is coming to some kind of collective view on how to standardise the subject matter in hand – even if that view is: wait (until something happens), and only then do it. Within this line of doing good things, one that we seem to agree on is to do with XML specifications. And so I come back to the main thrust of this post.

Doing XML semantically is what has happened in XCRI (thanks to Scott Wilson and others) and now, with my involvement, in LEAP2A. It is easy in an Atom-based specification to follow this pattern, because Atom's simple basic structure invites any kind of portfolio item to be an entry, and the relationships between them to be Atom links. For the same reason, Atom tends to be easy to read. But it is not too difficult to do this in your own XML language as well, if you just take a little care. You should look at every element to see whether it is a thing, a relationship, or data – in RDF terms, a resource, a property or predicate, or a literal.

TAG's draft specification has pupils (as it is designed primarily for schools) rather than students. Pupils are things, in these terms! It has centres, which are often where the teaching and the coursework assessment take place. What is the relationship between a pupil and a centre? Taking leave of the TAG proposal for a minute, and thinking of other possibilities: if there were always only one centre, and all the pupils belonged to that centre, there would be no need even to represent the pupils within (in XML terms) the centre. If there are different groups of pupils within a centre, it might make sense for the centre element to contain elements defining what the relationship is between the centre and each particular group of pupils.
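To make that concrete, here is a hypothetical fragment – all element names here are my own invention, not from the TAG draft – where the wrapper elements inside the centre name the relationships to different groups of pupils:

```xml
<centre id="c001">
  <name>Example Centre</name>
  <!-- the wrapper element names the relationship (a predicate, in RDF terms) -->
  <enrolledPupils>
    <pupil id="p42"/>
    <pupil id="p43"/>
  </enrolledPupils>
  <visitingPupils>
    <pupil id="p99"/>
  </visitingPupils>
</centre>
```

Here the centre and the pupils are things, and `enrolledPupils` and `visitingPupils` are relationships, not things.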

Then, one part of the draft has pupil elements containing marksheets. Again, what is the relationship? If there is only one possible, you don't need a container element standing between the pupil and the individual marksheet elements. If there is more than one possible relationship, then it would make sense to have a pupil element containing a wrapper for marksheets, and that wrapper would be associated with the relationship (properly, a predicate in RDF terms).
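In sketch form, with invented element names (not the draft's), the two cases look like this:

```xml
<!-- only one possible relationship: no wrapper element needed -->
<pupil id="p42">
  <marksheet id="m1"/>
</pupil>

<!-- more than one possible relationship: each wrapper carries the predicate -->
<pupil id="p42">
  <submittedMarksheets>
    <marksheet id="m1"/>
  </submittedMarksheets>
  <moderatedMarksheets>
    <marksheet id="m2"/>
  </moderatedMarksheets>
</pupil>
```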

I hope that gives some kind of hint, at least, on how to do XML in a way that makes sense both from the domain point of view, and semantically. The payoff is this. If the mapping to RDF is clear, then someone should be able, without too much difficulty, to create an XSLT to do the transform. Then, if someone else wants to do a different XML spec, or has already done so, and it also transforms to RDF, there is a good basis for knowing whether similar information presented in the two XML specs is actually the same, or not.
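Short of writing the full XSLT, the mechanics can be sketched in a few lines of Python: if wrapper elements consistently name relationships, extracting RDF-style triples is almost mechanical. Everything here (element names, ids) is invented for illustration, not taken from any real specification.

```python
import xml.etree.ElementTree as ET

# A hypothetical document in the style discussed above: "thing" elements
# carry ids, and their child elements name relationships (predicates),
# whose children are in turn the related things.
DOC = """
<centre id="c001">
  <enrolledPupils>
    <pupil id="p42">
      <submittedMarksheets>
        <marksheet id="m1"/>
      </submittedMarksheets>
    </pupil>
  </enrolledPupils>
</centre>
"""

def triples(elem):
    """Yield (subject, predicate, object) triples: each child of a thing
    is read as a relationship wrapper, and each child of the wrapper as
    the related thing, recursively."""
    subj = elem.get("id")
    for wrapper in elem:
        pred = wrapper.tag
        for obj in wrapper:
            yield (subj, pred, obj.get("id"))
            yield from triples(obj)

for t in triples(ET.fromstring(DOC)):
    print(t)
# ('c001', 'enrolledPupils', 'p42')
# ('p42', 'submittedMarksheets', 'm1')
```

A document designed without the thing/relationship/data distinction gives no such mechanical reading, which is exactly the point.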

One particularly attractive version of this is to have an RDFa representation, which of course of its very nature yields RDF on transformation. So you can present exactly the same information in XHTML, readable by anyone in a browser, and formatted to make it easy to read and to understand, and still have all the information just as machine-processable as any XML spec. That's just what I want to do for LEAP2.
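As a flavour of that, here is a hypothetical RDFa fragment (the vocabulary URI and property names are invented): the same statements are readable in a browser and extractable as RDF triples.

```xml
<div xmlns:ex="http://example.org/terms#" about="#p42" typeof="ex:Pupil">
  <span property="ex:name">A. Pupil</span>
  is enrolled at
  <a rel="ex:enrolledAt" href="#c001">Example Centre</a>
</div>
```

An RDFa-aware processor would extract that #p42 is an ex:Pupil, has the ex:name "A. Pupil", and is ex:enrolledAt #c001, while a browser just shows a sentence.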

All this is an extension of what I wrote earlier.

Forum overkill

You've probably noticed for quite a while that many of us now react with considerable caution when invited to join a new list, a new forum, a new network, a new way of interacting, or anything similar. Not surprising, I agree. But until now I didn't have a good formulation of why. I've just read a message from a colleague, bemoaning – well that would be too strong a word, but you can guess what I mean and he meant – the lack of activity on a forum that he set up for us a while back. Even when it was being set up, as well as wishing him well, I had a sneaking feeling that there were already too many.

If you know my ideas at all, you will probably know that I’ve been developing ideas on multiplicity of personality/persona/whatever-you-like-to-call-it. Particularly the idea that a set of values attaches to a particular context of value, and in each one of these we usually manage to achieve one or more clear roles, a certain consistency of behaviour, and of personal values. This is the sort of context like “family”, “work”, “club”, except that each person has their own, probably different, list of the value contexts which they distinguish.

And you may have read about another related key idea for the future: that portfolio-like tools could well help us both recognise and manage the information and values relevant to these contexts, contributing to a process of ethical development, to the benefit of individuals and society.

But you are less likely to know about my PhD work, which was more about the cognitive contexts of complex tasks. We can manage a complex task by dividing it up into a set of contexts, in each of which we have a certain appropriate set of rules for action (small-scale behaviour), prompted and fed by a corresponding set of information that is relevant to those rules.

If we think back to the very old days before the Web, when Usenet News seemed to be mainly for technical folk, it was apparent that one newsgroup seemed appropriate for each distinct and separate topic; or maybe task. It was when life on the Net became a little more complex and less easily separable, that I started to think that it would be nicer if we could have fewer newsgroups, but more choices to filter within them. That kind of system still hasn’t become widespread – or at least not that I can tell. I’m still expected to join many different lists, many of which overlap.

Or at least, it has come to pass in a strange way: through blogs. A blog is no longer written in a particular group, but available to anyone, who then filters it: usually only on the person of the writer, but sometimes on the tags which are associated with each post. And I'll stick with the idea that it is strange, because when writing a blog, I feel disconnected; I cannot be sure of who the audience is. Thus, I am not sure of the values that I want to display or put forward. Perhaps blogs only really work for people with complete integrity?

I’m going around this the long way, but I feel the need for the circuit. If we want to be comfortable with a non-universal value set, we need the security of a known group, where values can be observed, sensed, and acted on. Where those who don’t share the values stand out, and preferably get out. But on the other hand, we want to separate discussions where the topic is of interest to different sets of people.

So, please, someone out there who is writing code, here is a request for the kind of forum where I can join with other people who share my values in a large group, but where everyone only gets to see posts on the topics that interest them.

And I’m still going to be reluctant to join new forums of any kind.

ePortfolio 2008

Since going to the annual European EIfEL "ePortfolio" conferences is a firm habit of mine (in fact, I have been to every single annual one so far) it seems like a good time to take stock of the e-portfolio world. All credit to Serge and Maureen and their team: they have kept the event the best "finger on the pulse" in this field. This year was, as last, in Maastricht. It extended to just 3 rather than 4 days, and there were apparently some hundred fewer people overall. Nevertheless, others as well as I felt that there was an even better overall feel this year. At the excellent social dinner boat trip, I was reflecting: where else can one move so quickly from discussing deeply human issues like personal development, with people who care very insightfully about people, to talking technically about the relative merits of the languages and representations used for implementation of tools and systems, with people who are highly technically competent? It makes sense for this account to take both of those tracks.

Taking the easy one first, then… We didn't have a "plugfest" this year, which was in some ways odd: for the last three years (since Cambridge, 2005) we have had some attempt at interoperability trials, even though no one was really ready for them. (People did remarkably well, considering.) But this year, when in the PIOP work with LEAP2A we really have started something that is going to work, there were no trials, just presentations. Actually I think that it is much better for being less "hyped". By next year we should have something really solid to present and demonstrate. I presented our work at two sessions, and in both it was well received.

Not everyone likes XML schema specifications – Sampo Kellomäki enlightened me about some of the gross failings around XML – but luckily, those who aren’t so keen on XML or Atom seemed to appreciate the other side of LEAP 2.0 – the side of RDF and the Semantic Web connections, and the RDFa ideas I first understood in my work for ioNW2. It was good to have something for everyone with a technical interest.

What was disappointing was to understand more closely just what has been happening in the Netherlands. Someone must have made the decision a couple of years ago to follow "the international standard" of IMS ePortfolio, not taking account of the fact that it had not been properly tested in use. That's how IMS used to work (though no longer): get a spec out there quickly, get someone to implement it, and then improve towards something workable based on feedback. But though there were "implementations" of IMS eP, there was no real test of interoperability or portability. Various people we know and work with had tried it, even up to last year's conference, so we knew many of the problems. Anyway, in the Netherlands, they have been struggling to adapt and profile that difficult spec, and despite the large amount of public funding put into the project (too much?), most of the couple of dozen national partners have only implemented a subset even of their own limited profile. And IMS eP is not being used as an internal representation by anyone.

Fortunately, Synergetics, who have been involved in the Dutch work (despite being Belgian) have also joined our forthcoming round of PIOP work, and talk towards the end of the conference was that LEAP2A will be added to the Dutch interoperability framework. I do hope this goes through – we will support it as much as we are able. Synergetics also play a leading role in the impressive TAS3 project, so we can expect that as time goes on pathways will emerge to add security architecture to our interoperability approach. But now on to the much more humanly interesting discussions.

I had the good luck to bump into Darren Cambridge (as usual, a keynote speaker) on the evening before the conference, and we talked over some of the ideas I’ve been developing, which at the moment I label as “personal integrity reflection across contexts”. Now that needs writing about separately, but in essence it involves a way of thinking about how to promote real growth, development and change in people’s lives. We also talked about this with Samantha Slade of Percolab – Darren analysed Samantha’s e-portfolio for his forthcoming book (which will be more erudite and better written than mine!).

These discussions were the peak, but elsewhere throughout the conference I got the feeling that the time is now perhaps right to move forward more publicly with discussing values in relation to e-portfolios. Parts of my vision were expressed in Anna's and my paper two years ago in the Oxford conference – "Ethical portfolios: supporting identities and values." In essence, it goes like this: portfolio practice can help to develop people's values, and their understanding of their own values; with that understanding, they can choose occupations which lead to satisfaction and fulfillment; representing those values in machine-readable form may lead to much more potent matching within the labour market – another tool towards "flexicurity" (a term introduced to me 10 minutes ago by Theo Mensen). The new expression of insight is that development of personal values, and understanding them, is supported by some kinds of reflection, and not others. The term I am trying out to point towards the most useful and powerful kind of reflection is "personal integrity reflection across contexts". I hope the ideas can be taken forward and presented in more depth next year.

At the conference there was also a focus on “Learning Regions” (the subject of Theo’s call), which I wasn’t able to attend much of. My view of regional initiatives has been somewhat jaded by peripheral involvement years ago with regional development agencies that seemed to have just one agenda item: inward investment. But the vision at the conference was much broader and humane. My input is quite limited. Firstly, to get anything distinctive for a region going, there needs to be a common language for the distinctive concerns (and groups of concerns) for a region. If this is done machine-readably (e.g. RDF) then there is the hope for cross linkage, not just in the labour market but beyond. Again, as in my ioNW2 work, this could well be based on clear and unambiguous URIs being set up for each concept, and possibly this could be extended to having some kind of ontology in the background. Then there is the question of two-way matching, already trialled in a small way by the Dutch public employment service (CWI).

This leads to an opportunity for me to round up. There is so much that could be contributed to by e-portfolio practice and tools; and the sense of this conference was that indeed, things are set to move forward. But it still depends on matters which are not fully and generally understood. There is this issue of representing skills/competences/abilities which will not go away until dealt with satisfactorily (beyond TENCompetence), and alongside that, the issue of assessment of those in a way which makes sense to employers (and of which the results can be machine processed). That “hard” assessment needs to be reconciled with the more humane e-portfolio based assessment, which I think everyone agrees is already very good to get a feel for those last few short-listable candidates. Portfolio tools still have a way to go until they are relevant for search and automatic matching.

But my opinion is that progress here, and elsewhere, can definitely be made.

Interoperability through semantics

I was on a call this afternoon, with the HR-XML people discussing that old chestnut, contact information. The really interesting comment that came up was that many people don’t get any kind of intermediate domain model – rather, they just want to see their implementation (or practice) reflected directly in the specification, and so they are disappointed when (inevitably) it doesn’t happen. The HR-XML solution may be serviceable in the end, but what interested me more was the process which is really needed to do interoperability properly. I’ve been going on about semantic web approaches to interoperability for a while, but I hadn’t really thought through the processes which are necessary to implement it. So it’s a step forward for me.

Here’s how I now see it. Lots of people start off with their own way of seeing, thinking about, or conceptualising things. The primary task of the interoperability analyst or consultant (inventing a term that I’d feel comfortable with for myself) is to create a model into which all the initial variants can be mapped, one way or another. We don’t want one single uniform model into which everyone’s concepts are forced to fit, but rather a common model containing all the differences of view. Now, as I see it, that’s one of the big advantages of the semantic web: it’s so flexible and adaptable that you really can make a model which is a superset of just about any set of models that you can pick. Just what sort of model is needed becomes clearer when we think of the detailed steps of its creation and use.

If one group of people have a particular way of seeing things, the mapping to this common model must be acceptable to them. It won’t always be so immediately, so one has to allow for an educational process, possibly a Socratic one, of leading them to that acceptance. But you don’t have to show them all the other mappings at the same time, just theirs. Relating to other models comes later.

From the mappings to the common model, it is possible, likely even, that there will be some correspondence between concepts, so that different people can recognise they are talking about the same thing. One way of confirming this is to show the various people user interfaces of their systems, dealing with that kind of information. You could easily get remarks such as “yes, we have that, too”. Though one has to look out for, and evaluate, the “except that…” riders.

On the other hand, there are bound to be several concepts which don’t match directly in the common model. To complete the road to interoperability, what is needed is to ascertain, and get agreed, the logical connections between the common model concepts into which the original people’s concepts map. This, of course, is the area of ontologies, but it has a very different feel to the normal one of formalising the logical relationships between the concepts in just one group’s model. We are aiming at a common ontology, not in the sense that everyone must understand and use all the concepts, but that everyone agrees on the way that the concepts interrelate; the way that “their” concepts touch on “foreign” concepts, all within the same ontology.

Once the implications have been agreed between the different concepts in the common model, the way is open to create transforms between information seen in one view and information seen in another view. Each different group can, if they want, keep their own XML schemas to represent their own way of conceptualising the domain, but there will be (approximate) ways of translating this to other conceptualisations, perhaps via an intermediate RDF form. But, perhaps more ambitiously, once these implications are agreed, it is likely that people will be free to migrate towards more coherent views of the domain – actually to change the way they see things.
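The translation step can be sketched in miniature: each group keeps its own terms, and both are mapped onto shared concept URIs in the common model. Everything here – the vocabularies, the URI, the field names – is invented for illustration; real mappings are rarely this lossless.

```python
# Invented common-model concept URIs, standing in for an agreed ontology.
COMMON = "http://example.org/common#"

# Each group's own terms, mapped onto the common concepts.
GROUP_A = {"surname": COMMON + "familyName", "tel": COMMON + "phone"}
GROUP_B = {"family_name": COMMON + "familyName", "phone_number": COMMON + "phone"}

def to_common(record, mapping):
    """Lift a group's record into common-model terms (the 'RDF form')."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_common(common_record, mapping):
    """Project a common-model record back into a group's own terms."""
    inverse = {uri: k for k, uri in mapping.items()}
    return {inverse[uri]: v for uri, v in common_record.items() if uri in inverse}

# Group A's view of a contact, translated into Group B's view
# via the intermediate common form.
a_record = {"surname": "Smith", "tel": "0161 000 0000"}
b_record = from_common(to_common(a_record, GROUP_A), GROUP_B)
print(b_record)  # {'family_name': 'Smith', 'phone_number': '0161 000 0000'}
```

Neither group has had to adopt the other's schema; the agreement is only about how their concepts map onto the common ones, and concepts with no counterpart simply drop out of the projection – which is where the agreed logical connections between concepts earn their keep.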

It is potentially a long process, and supporting it is not straightforward. I could imagine a year’s full-time postgraduate study – an MSc if you like – being needed to study, understand and put together the different roots and branches of philosophy, logic, communication, consensus process, IT, and education that are needed. But if we had trained, not just the naturally gifted, practitioners in this area, perhaps we could have enough people to get beyond the pitfalls of processes that are too often bogged down in mutual misunderstanding or incomprehension, or just plain amateurishness.

E-Systems and E-Portfolios

I went to this joint LLN meeting in Sheffield (2008-07-03) because there were several people, and several topics, that I wanted to keep up or catch up with. The meeting fulfilled that and more.

Roger Clark talked about current GMSA work (Pathways and Advance), and about the need for well defined standards and interfaces. Mark Stubbs talked about XCRI, and how the recently started European initiative, Metadata for Learning Opportunities (MLO) has adopted a basic structure reflecting XCRI. If I understood correctly, he is to join the DCSF Information Standards Board. Selwyn Lloyd reviewed ioNetworks. They are all doing important work which I want to keep up with, and as they are very busy, this kind of meeting is useful to keep track of the general picture. Kirstie Coolin, another valued portfolio interoperability colleague, talked about e-portfolio pilot work in the LEAP AHEAD LLN, covering Derbyshire and Nottinghamshire, which was new to me.

I took the opportunity also to catch up with Lisa Gray and Stuart Wood (recently appointed developer at Nottingham) about our current portfolio interoperability prototyping work.

After a very pleasant and unhurried lunch, we split into workshop discussions, and I went to probably the largest, related to e-portfolios. A very interesting idea bubbled up here: that none of the e-portfolio tools are ideal for all the different purposes of e-portfolios ranging from assessment management to PDP, and that perhaps the way forward would be to use more than one tool. Of course, this lifts portfolio interoperability into the limelight – people seemed to concur on this.  Rather than being a nice-to-have optional extra, interoperability will become a vital enabler to reusing the same information across these different systems. Perhaps also now, interoperability with student record systems, other e-admin systems, and VLEs will become recognised as an equal part of this overall move toward allowing the learner-controlled sharing and reuse of all personal-related information. Sharing between e-portfolio systems and e-admin systems is not so different from sharing information between e-portfolio systems with different purposes. No one system will use the complete range of portfolio information, but in a Web 2.0 world where there are surprising new uses for old information, as much as reasonably possible should be made available for use by other systems.

The networking brought two new contacts with significant interests. Colin Wilkinson is the Employer Engagement Co-ordinator for the North East Higher Skills Network, and has a strongly overlapping interest in the representation of skills. He intends to work with GMSA, and also perhaps us in CETIS in this area, which could be very promising in several ways. Ann Hughes, Becta's Head of Efficiency and Productivity, is interested in lowering the barriers for information between schools, HE and FE. She told me that in the UK the SIF is now thought of as the "Systems Interoperability Framework". They need guidance towards positive and fruitful avenues of development, and I think we can help them.

“E-Learning: An Oxymoron?”

Jakob Nielsen is a usability guru I have followed ever since my PhD days, who over the years has expressed many sound and insightful opinions on usability and human-computer interaction. So I had a real jolt when I read that headline in his recent column “Writing Style for Print vs. Web”. He writes first

“I continue to believe in the linear, author-driven narrative for educational purposes. I just don’t believe the Web is optimal for delivering this experience. Instead, let’s praise old narrative forms like books and sitting around a flickering campfire — or its modern day counterpart, the PowerPoint projector — which have been around for 500 and 32,000 years, respectively.”

followed shortly by

“We should accept that the Web is too fast-paced for big-picture learning. No problem; we have other media, and each has its strengths. At the same time, the Web is perfect for narrow, just-in-time learning of information nuggets — so long as the learner already has the conceptual framework in place to make sense of the facts.”

I think that many in the e-learning community could reflect on this profitably. But though I may know a lot about e-portfolio related technology and interoperability, I certainly know much less about e-learning more widely, of which I have no practical experience. Does Jakob, I wonder? Is he thinking about the (too prevalent, but simplistic) model of e-learning as just putting lectures, notes and tests on-line?

And what’s this about books? To be sure, books are archetypally reader-paced, allowing time for reflection whenever wanted, and perhaps this is one factor missing from impoverished models of e-learning. But they could not be called interactive – that is more the province of the “sitting around the campfire”, when learners can ask questions and discuss matters with other learners and with their teachers.

Jakob favours “author-driven narrative” – but what about learner-driven learning? Can’t this be greatly facilitated by the kind of electronic tools that go along with enlightened approaches to e-learning? To risk being a little cynical, e-learning is probably less easy to profit from than books and lectures (for the author, that is!)

Famously, Moodle's philosophy is constructivist – would that be a surprise to Jakob, or would he say that Moodle is attempting the impossible?

The "bottom line" here is that e-learning people need to check that the modes of learning they are encouraging, implying, suggesting or allowing do not fall foul of the valid points in Nielsen's analysis.

GMSA Advance

As I’ve been involved with GMSA in various ways including through the ioNW2 project, I went to their seminar on 14th May introducing GMSA Advance.  This is to do with providing bite-sized modules of Higher Education, mainly for people at work, and giving awards (including degrees) on that basis – picking up some of the “Leitch” agenda. As I suspected, it was of interest from a portfolio perspective among others.

I’ll start with the portfolio- and PDP-related issues.

The first issue is award coherence. If you put together an award from self-chosen small chunks of learning (“eclectic”, one could call it), there is always an issue of whether that award represents anything coherent. Awarding bodies, including HEIs, may not think it right to give an award for what looks like a random collection of learning. Having awarding bodies themselves define what counts as coherent risks being too restrictive. An awarding body might insist on things which were not relevant to the learner’s workplace, or that had been covered outside the award framework. On the other hand, employers might not understand about academic coherence at all. A possible solution that strikes me and others is

  • have the learner explain the coherence of modules chosen
  • assess that explanation as part of the award requirement.

This explanation of coherence needs to make sense to a variety of people as well as the learner, in particular, to academics and to employers. It invites a portfolio-style approach: the learner is supported through a process of constructing the explanation, and it is presented as a portfolio with links to further information and evidence. One could imagine, for example, a video interview with the learner’s employer as useful extra evidence.

A second issue is the currency and validity of "credit". Now I have a history of skepticism about credit frameworks and credit transfer, though the above idea of assessed explanation of award coherence at last brings a ray of light into the gloom. My issue has always been that, to be meaningful, awards should be competence-based, not credit-based. And I still maintain that the abilities claimed by someone at the end of a course, suitably validated by the awarding body, should be a key part of the official records of assessment (indeed, part of the "Higher Education Achievement Report" of the Burgess Group – report downloadable as PDF).

One of the key questions for these “eclectic” awards is whether credit should have a limited lifetime. Whether credit should expire surely should depend on what credit is trying to represent. It is just the skills, abilities or competences whose validation needs to expire – this is increasingly being seen in the requirement for professional revalidation. And the expiry of validation itself needs to be based on evidence – bicycle riding and swimming tend to be skills that are learned once for ever; language skills fall off only slowly; but the knowledge of the latest techniques in a leading edge discipline may be lost very quickly.

This is a clear issue for portfolios that present skills. The people with those portfolios need to be aware of the perceived value of old evidence, and to be prepared to back up old formal evidence with more recent, if less formal, additional evidence of currency. We could potentially take that approach back into the GMSA Advance awards, though there would be many details to figure out, and issues would overlap with accreditation of prior learning.

Other issues at the seminar were not to do with portfolios. There is the question of how to badge such awards. CPD? Several of those attending thought not – "CPD" is often associated with unvalidated personal learning, or even just attendance at events. As an alternative, I rather like the constructive ambiguity of the phrase "employed learning" – it would be both the learners and the learning that are employed – so that is my suggestion for inclusion into award titles.

Another big issue is funding. Current policy is for no government funding to be given for people studying for awards of equal or lower level than one they have already achieved. The trouble is that if each module itself carries an award, then work-based learners couldn’t be funded for this series of bite-sized modules, but only one. The issue is recognised, but not solved. A good idea that was suggested at the seminar is to change and clarify the meaning of credit, so that it takes on the role of measuring public fundability of learning. Learners could have a lifetime learning credit allowance, that they could spend as they preferred. Actually, I think even better would be a kind of “sabbatical” system where one’s study credit allowance continued to build, to allow for retraining. Maybe one year’s (part time?) study credit would be fundable for each (say) 7 years of life – or maybe 7 years of tax-paying work?

So, as you can see, it was a thought-provoking and stimulating seminar.

LEAP workshop 2008-03-12

Lisa Gray arranged an afternoon workshop for e-portfolio projects interested in LEAP 2.0 at the JISC e-Learning Programme event on 2008-03-12 at Aston.

Nine project delegates attended, and we split into four small groups of pairs of projects. I asked each pair to explain their practice to each other, to consider how their systems might possibly work together, and then to come up with

  • next steps they will take
  • challenges they envisage
  • support they might want

in their progress towards interoperability.

Because of the range of projects represented, and their wide range of situations, the answers to these covered a good range and had many valuable points. Here are a few issues which came up.

  • How is this spec any better than or different from previous ones?
  • Ownership of and responsibility for the information held.
  • Security (various aspects of this).
  • Shared development of Open Source systems and tools, for e.g. validation (of syntax); verification (of information presented).
  • Transfer of permissions between systems.
  • Interfacing with systems (such as e-Learning / MIS / student information systems) from various vendors.

Related to these, the support we (variously CETIS and/or JISC) can offer could include:

  • Making the new specifications available
  • Supporting work on common tools and services (as above)
  • Supporting common ontologies
  • Getting vendors on board
  • Dissemination of what is possible

This was a smart collection of delegates, and they even seemed to enjoy the process! One of the lessons I take back with me is that this approach – getting people to work in pairs as if they had been asked to implement some kind of interoperability between their systems – is very productive: I’ll do it again given the chance. I think it puts people in just the right frame of mind. Everyone gets to explain what their system does to another highly informed person who doesn’t know much about it, and talk about interoperability is grounded in practice, rather than in abstract (and too often, futile) discussions of the conceptual structure of the interoperability specification.

Just one thing – try to match people/projects up so they have as much as possible in common. I prepared this, but had to rearrange on the spot when one person was absent.

Persona woe

Catching up on blog entries this morning I notice one from Joanna Bawa reproducing one from Andrew Hinton which refers to Alan Cooper’s “The Origin of Personas”. Now particularly because I have been closely associated with the usability and HCI community, I need to take account of how that community uses words. The Andrew Hinton piece clearly implies a usage of the term ‘persona’ to mean some kind of representative fictional character, stereotype or archetype who might use some software, or perhaps be engaged in a wider process – some character thought about and designed for. At the bottom of the article there are some links to other very interesting writings on the topic. People have come in and grabbed the term ‘persona’, uncompromisingly. Time to escape. Oh woe – the “intolerable wrestle with words and meanings”!

And I see why, as well. A ‘persona’, being originally a mask, can be worn by more than one person. It can be seen more like a role. The depersonalized persona?

In contrast, what I have been trying to get at in previous writings (other posts here and here) has been something much more intensely personal. It is the set of typical behaviours of a particular individual in a particular context or setting, along with their values in that setting, their attitudes, their propensities. It’s so close to the idea of identity that I was calling them identities for a while, before I admitted that the term ‘identity’ was too firmly entrenched in the realm of those who write software to check that only those allowed somewhere can get in.

Then this January came a new book, “Multiplicity”, from Rita Carter, which simply uses the term ‘personality’, indeed, making a virtue of the connection with multiple personality disorders. You could class it as popular psychology if you like, but in any case I think it is very worthwhile. Of all the people who have discussed matters in this area, Rita Carter is the one who comes closest to identifying just what it is that I regard as so important. The main thing that she does not go into as much as I would have liked is personal values, which to me are very clearly a function of the personality (in her multiplicitous sense), not the individual.

Addendum: Carter suggests this as a short definition of personality: “a coherent and characteristic way of seeing, thinking, feeling, and behaving.”

The most recent paper I have written much of, presented at the Medev event in Newcastle recently, does talk about professional identity, and fleetingly uses the term persona, but dwells more on what is really personal. Is it time to move on, led by Rita Carter, and switch term from ‘persona’ to ‘personality’?