Simon Grant » standardization (Cetis blog)

What is there to learn about standardization?
http://blogs.cetis.org.uk/asimong/2014/10/24/learning-about-standardization/ (Fri, 24 Oct 2014)

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular that holds a body of knowledge and experience of many kinds of standardization organisations and processes, as well as of approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

But how can we share and pass on that expertise? This question has arisen from time to time during the 12 years I’ve been associated with Cetis, including the last six working from our base in the IEC in Bolton. While Jisc were employing us to run Special Interest Groups, meetings, and conferences, and to support their project work, that at least gave us some scope for sharing. The SIGs are sadly long gone, but what about other ways of sharing? What about running courses of some kind? To run courses, we have to address the question of what people might want to learn in our areas of expertise. On a related question, how can we assemble a structured summary even of what we ourselves have learned about this rich and challenging area?

These are my own views about what I sense I have learned and could pass on; but also about the topics where I would think it worthwhile to know more. All of these views are in the context of open standards in learning technology and related areas.

How are standards developed?

A formal answer for formal standards is straightforward enough. But this is only part of the picture. Standards can start life in many ways, from the work of one individual inventing a good way of doing something, through to a large corporation wanting to impose its practice on the rest of the world. It is perhaps more significant to ask …

How do people come up with good and useful standards?

The more one is involved in standardization, the richer and more subtle one’s answer to this becomes. There isn’t one “most effective” process, nor one formula for developing a good standard. But in Cetis, we have developed a keen sense of what is more likely to result in something that is useful. It includes the close involvement of the people who are going to implement the standard – perhaps software developers. Often it is a good idea to develop the specification for a standard hand in hand with its implementation. But there are many other subtleties which could be brought out here. This also raises a further question …

What makes a good and useful standard?

What one comes to recognise with time and experience is that the most effective standards are relatively simple and focused. The more complex a standard is, the less flexible it tends to be. It might be well suited to the precise conditions under which it was developed, but those conditions often change.

There is much research to do on this question, and people in Cetis would provide an excellent knowledge base for this, in the learning technology domain.

What characteristics of people are useful for developing good standards?

Most likely anyone who has been involved in standardization processes will be aware of some people whose contribution is really helpful, and others who seem not to help so much. Standardization works effectively as a consensus process, not as a kind of battle for dominance. So the personal characteristics of people who are effective at standardization are similar to those of people who are good at consensus processes more widely. Obviously, the group of people involved must have a good technical knowledge of their domain, but deep technical knowledge is not always allied to an attitude that is consistent with consensus process.

Can we train, or otherwise develop, these useful characteristics?

One question that really interests me is, to what extent can consensus-friendly attitudes be trained or developed in people? It would be regrettable if part of the answer to good standardization process were simply to exclude unhelpful people. But if this is not to happen, those people would need to be open to changing their attitudes, and we would have to find ways of helping them develop. We might best see this as a kind of “enculturation”, and use sociological knowledge to help understand how it can be done.

After answering that question, we would move on to the more challenging “how can these characteristics be developed?”

How can standardization be most effectively managed?

We don’t have all the answers here. But we do have much experience of the different organisations and processes that have brought out interoperability standards and specifications. Some formal standardization bodies adopt processes that are not open, and we find this quite unhelpful to the management of standardization in our area. Bodies vary in how much they insist that implementation goes hand in hand with specification development.

The people who can give most to a standardization process are often highly valued and short of time. Conversely, those who hinder it most, including the most opinionated, often seem to have plenty of time to spare. To manage the standardization process effectively, this variety of people needs to be allowed for. Ideally, this would involve training in consensus working, as imagined above; until then, sensitive handling of those people needs considerable skill. A supplementary question would be, how does one train people to handle others well?

If people are competent at consensus working, the governance of standardization is less important. Before then, the exact mechanisms for decision making and influence, formal and informal, are significant. This means that the governance of standards organisations is on the agenda for what there is to learn. There is still much to learn here, through suitable research, about how different governance structures affect the standardization process and its outcomes.

Once developed, how are standards best managed?

Many of us have seen the development of a specification or standard, only for it never really to take hold. Other standards are overtaken by events, and lose ground. This is not always a bad thing, of course – it is quite proper for one standard to be displaced by a better one. But sometimes people are not aware of a useful standard at the right time. So, standards not only need keeping up to date, but they may also need to be continually promoted.

As well as promotion, there is the more straightforward maintenance and development. Web sites with information about the standard need maintaining, and there is often the possibility of small enhancements to a standard, such as reframing it in terms of a new technology – for instance, a newly popular language.

And talking of languages, there is also dissemination through translation. That’s one thing that working in a European context keeps high in one’s mind.

I’ve written before about management of learning technology standardization in Europe and about developments in TC353, the committee responsible for ICT in learning, education and training.

And how could a relevant qualification and course be developed?

There are several other questions whose answers would be relevant to motivating or setting up a course. Maybe some of my colleagues or readers have answers. If so, please comment!

  • As a motivation for development, how can we measure the economic value of standards, to companies and to the wider economy? There must be existing research on this question, but I am not familiar with it.
  • What might be the market for such courses? Which individuals would be motivated enough to devote their time, and what organisations (including governmental) would have an incentive to finance such courses?
  • Where might such courses fit? Perhaps as part of a technology MSc/MBA in a leading HE institution or business school?
  • How would we develop a curriculum, including practical experience?
  • How could we write good intended learning outcomes?
  • How would teaching and learning be arranged?
  • Who would be our target learners?
  • How would the course outcomes be assessed?
  • Would people with such a qualification be of value to standards developing organisations, or elsewhere?

I would welcome approaches to collaboration in developing any learning opportunity in this space.

And more widely

Looking again at these questions, I wonder whether there is something more general to grasp. Try reading over, substituting, for “standard”, other terms such as “agreement”, “law”, “norm” (which already has a dual meaning), “code of conduct”, “code of practice”, “policy”. Many considerations about standards seem to touch these other concepts as well. All of them could perhaps be seen as formulations or expressions, guiding or governing interaction between people.

And if there is much common ground between the development of all of these kinds of formulation, then learning about standardization might well be adapted to developing knowledge, skills, competences, attitudes and values that are useful in many walks of life, but particularly in the emerging economy of open co-operation and collaboration on the commons.

Open : data : co-op
http://blogs.cetis.org.uk/asimong/2014/10/23/open-data-co-op/ (Thu, 23 Oct 2014)

A very interesting event in Manchester on Monday (2014-10-20) called “Open : Data : Cooperation” was focused around the idea of “building a data cooperative”. The central idea was the cooperative management of personal information.

Related ideas have been going round for a long time. In 1999 I first came across a formulation of the idea of managing personal information in the book called “Net Worth”. Ten years ago I started talking about personal information brokerage with John Harrison, who has devoted years to this cause. In 2008, Michel Bauwens was writing about “The business case for a User Data Commons”.

A simple background story emerges from following the money. People spend money, whether their own or other people’s, and influence others in their spending of money. Knowing what people are ready to spend money on is valuable, because businesses with something to sell can present their offerings at an opportune moment. Thus, information which might be relevant to anyone buying anything is valuable, and can be sold. Naturally, the more money is at stake, the higher the price of information relevant to that purchase. Some information about a person can be used in this way over and over again.

Given this, it should be possible for people themselves to profit from giving information about themselves. And in small ways, they already do: store cards give a little return for the information about your purchases. But once the information is gathered by someone else, it is open for sale to others. One worry is that, maybe in the future if not right away, that information might enable some “wrong” people to know what you are doing, when you don’t want them to know.

Can an individual manage all that information about themselves better, both to keep it out of the wrong hands, and to get a better price for it from those to whom it is entrusted? Maybe; but it looks like a daunting task. As individuals, we generally don’t bother. We give away information that looks trivial, perhaps, for very small benefits, and we lose control of it.

It’s a small step from these reflections to the idea of people grouping together, the better to control data about themselves. What they can’t practically do separately, there is a chance of doing collectively, with enough efficiencies of scale to make it worthwhile, financially as well as in terms of peace of mind. You could call such a grouping a “personal data cooperative” or a “personal information mutual”, or any of a range of similar names.

Compared with gathering and holding data about the public domain, personal information is much more challenging. There are the minefields of privacy law, such as the Data Protection Act in the UK.

In Manchester on Monday we had some interesting “lightning” talks (I gave one myself – here are the slides on Slideshare), people wrote sticky notes on relevant topics they were concerned about, and six areas were highlighted for discussion:

  • security
  • governance
  • participation & inclusivity
  • technical
  • business model
  • legislative

I joined the participation and the technical group discussions. Both fascinated me, in different ways.

The participation discussion led to thoughts about why people would join a cooperative to manage their personal data. They need specific motivation, which could come from the kind of close-knit networks that deal with particular interests. There are many examples of closely knit on-line groups around social or political campaigns, about specific medical issues, or other matters of shared personal concern. Groups of these kinds may well generate enough trust for people to share their personal information, but they are generally not large enough to have much commercial impact, so they might struggle to be sustainable as personal data co-ops. What if, somehow, a whole lot of these minority groups could get together in an umbrella organisation?

Curiously, this has much in common with my personal living situation in a cohousing project. Despite many people’s yearnings (if not cravings) for secure acceptance of their minority positions, to me it looks like our cohousing project is too large and diverse a group for any one “cause” to be a key part of the vision for everyone. What we realistically have is a kind of umbrella in which all these good and worthy causes may thrive. Low carbon footprints; local, organic food; veganism; renewable energy; they’re all here. All these interest groups live within a co-operative kind of structure, where the governance is as far as possible by consensus.

So, my current living situation has resonances with this “participation” – and my current work is highly relevant to the “technical” discussion. But the technical discussion proved to be hard!

If you take just one area of person-related information, and manage to create a business model using that information, the technicalities start to be conceivable.

For instance, Cetis (particularly my colleague Scott Wilson) has been involved in the HEAR (Higher Education Achievement Report) for quite some time. Various large companies are interested in using the HEAR for recruiting graduates. Sure, that’s not a cooperative scenario, but it does illustrate a genuine business case for using personal data gathered from education. Then one can think about how that information is structured; how it is represented in some transferable format; how the APIs for fetching such information should work. There is definite progress in this direction for HEAR information in the UK – I was closely involved in the less established but wider European initiative around representing the Diploma Supplement, and more can be found under the heading European Learner Mobility.

While the HEAR is progressing towards viability, the “ecosystem” around learner information more widely is not very mature, so there are still questions about how effective our current technical formats are. I’ve been centrally involved in two efforts towards standardization: Leap2A and InLOC. Both have included discussion about the conceptual models, which has never been fully resolved.

More mature areas are more likely to have stable technical solutions. Less mature areas may not have any generally agreed conceptual, structural models for the data; there may be no established business models for generating revenues or profits; and there may be no standards specifically designed for the convenient representation of that kind of data. Generic standards like RDF can cover any linked data, but they are not necessarily convenient or elegant, and may or may not lead to workable practical applications.

Data sources mentioned at this meeting included:

  • quantified self data – all about your physiological data, and possibly related information
  • energy (or other utility) usage data – coming from smart meters in the home
  • purchasing data – from store cards and online shops
  • communication data – perhaps from your mobile device
  • learner information – in conjunction with learning technology, as I introduced

I’m not clear how mature any of these particular areas are, but they all could play a part in a personal data co-op. And because of the diversity of this data, as well as its immaturity, there is little one can say in general about technical solutions.

What we could do is to set out just a strategy for leading up to technical solutions. It might go something like this.

  1. Agree the scope of the data to be held.
  2. Work out a viable business model with that data.
  3. Devise models of the data that are, as far as possible, intuitively understandable to the various stakeholders.
  4. Consider feasible technical architectures within which this data would be used.
  5. Start considering APIs for services.
  6. Look at existing standards, including generic ones, to see whether any existing standard might suffice. If so, try using it, rather than inventing a new one.
  7. If there really isn’t anything else that works, get together a good, representative selection of stakeholders, with experience or skill in consensus standardization, and create your new standard.

It’s all a considerable challenge. We can’t ignore the technical issues, because ignoring them is likely to lead just to good ideas that don’t work in practice. On the other hand, solving the technical issues is far from the only challenge in personal data co-ops. Long experience with Cetis suggests that the technical issues are relatively easy, compared to the challenges of culture and habit.

Give up, then? No, to me the concept remains very attractive and worth working on. Collaboratively, of course!

What is CEN TC 353 becoming?
http://blogs.cetis.org.uk/asimong/2014/04/09/what-is-cen-tc-353-becoming/ (Wed, 09 Apr 2014)

CEN TC 353 was set up (about seven years ago) as the European Standardization Technical Committee (“TC”) responsible for “ICT for Learning, Education and Training” (LET). At the end of the meeting I will be describing below, we recognised that the title has led some people to think it is a committee for standardising e-learning technology, which is far from the truth. I would describe its business as being, effectively, the standardization of the representation of information about LET, so that it can be used in (any kind of) ICT systems. We want the ICT systems we use for LET to be interoperable, and we want to avoid the problems that come from vendors all defining their own ways of storing and handling information, thus making it hard to migrate to alternative systems. Perhaps the clearest evidence of where TC 353 works comes from the two recent European Standards to our name. EN 15981, “EuroLMAI”, is about information about learner results from any kind of learning, specifically including the Diploma Supplement and the UK HEAR, which document higher education achievements. EN 15982, “MLO” (Metadata for Learning Opportunities), is the European equivalent of the UK’s XCRI (“eXchanging Course-Related Information”), mainly about the information used to advertise courses, which can be of any kind. Neither of these is linked to the mode of learning, technology enhanced or not; and indeed we have no EN standards about e-learning as such. So that’s straight, then, I trust …

At this CEN TC 353 meeting on 2014-04-08 there were delegates from the National Bodies of: Finland; France (2); Germany; Greece; Norway; Sweden (2); UK (me); and the TC 353 secretary. That’s not very many for an active CEN TC. Many of the people there have been working with CETIS people, including me, for several years. You could see us as the dedicated, committed few.

The main substance of the day’s discussion was about two proposed new work items (“NWIs”), one from France, one from Sweden, and the issues coming out of that. I attended the meeting as the sole delegate (with the high-sounding designation, “head of delegation”) from BSI, with a steer from colleagues that neither proposal was ready for acceptance. That, at least, was agreed by the meeting. But something much more significant appeared to happen, which seemed to me like a subtle shift in the identity of TC 353. This is entirely appropriate, given that the CEN Workshop on Learning Technologies (WS-LT), which was the older, less formal body, is now accepted as defunct — this is because CEN are maintaining their hard line on process and IPR, which makes running an open CEN workshop effectively impossible.

No technical standardization committee that I know of is designed to manage pre-standardization activities. Floating new ideas, research, project work, comparing national initiatives, etc., need to be done before a proposal reaches a committee of this kind, because TC work, whether in CEN, or in our related ISO JTC1 SC36, tends to be revision of documents that are presented to the committee. It’s very difficult and time consuming to construct a standard from a shaky foundation, simply by requesting formal input and votes from national member bodies. And when a small team is set up to work under the constraints of a bygone era of confidentiality, in some cases it has proved insurmountably difficult to reach a good consensus.

Tore Hoel, a long-time champion of the WS-LT, admitted that it is now effectively defunct. I sadly agree, while appreciating all the good work it has done. So TC 353 has to explore a new role in the absence of what was its own Workshop, which used to do the background work and to suggest the areas of work that needed attention. Tore has recently blogged what he thinks should be the essential characteristics of a future platform for European open standards work, and I very much agree with him. He uses the Open Stand principles as a key reference.

So what could this new role be? The TC members are well connected in our field, and while they do not themselves do much IT systems implementation, they know those people, and are generally in touch with their views. The TC members also have a good overview of how the matters of interest to TC 353 relate to neighbouring issues and stakeholders. We believe that the TC is, collectively, in quite a good position to judge when it is worth working towards a new European Standard, which is after all their raison d’etre. We can’t see any other body that could perform this role as well, in this specific area.

As we were in France, the famous verse of Rouget de Lisle, the “Marseillaise”, came to my mind. “Aux armes, citoyens, Formez vos bataillons!” the TC could be saying. What I really like, on reflection, about this aspect of the French national anthem is that it isn’t urging citizens to join some pre-arranged (e.g. royal) battalions, but to create their own. Similarly, the TC could say, effectively, “now is the time to act — do it in your own ways, in your own organisations, whatever they are — but please bring the results together for us to formalise when they are ready.”

For me, this approach could change the whole scene. Instead of risking being an obstacle to progress, the CEN TC 353 could add legitimacy and coherence to the call for pre-standardization activity in chosen areas. It would be up to the individuals listening (us wearing different hats) to take up that challenge in whatever ways we believe are best. Let’s look at the two proposals from that perspective.

AFNOR, the French standards body, was suggesting working towards a European Standard (EN) with the title “Metadata for Learning Opportunities part 2 : Detailed Description of Training and Grading (face to face, distance or blended learning and MOOCs): Framework and Methodology”. The point is to extend MLO (EN 15982), including perhaps some of those characteristics of courses (learning opportunities), perhaps drawn from the Norwegian CDM or its French derivative, that didn’t make it into the initial version of MLO for advertising. There have from time to time in the UK been related conversations about the bits of the wider vision for XCRI that didn’t make it into XCRI-CAP (“Course Advertising Profile”). But they didn’t make it probably for some good reason — maybe either there wasn’t agreement about what they should be, or there wasn’t any pressing need, or there weren’t enough implementations of them to form the basis for effective consensus.

Responding to this, I can imagine BSI and CETIS colleagues in the UK seriously insisting, first, that implementation should go hand in hand with specification. We need to be properly motivated by practical use cases, and we need to test ideas out in implementation before agreeing to standardize them. I could imagine other European colleagues insisting that the ideas should be accepted by all the relevant EC DGs before they have a chance of success in official circles. And so on — we can all do what we are best at, and bring those together. And perhaps also we need to collaborate between national bodies at this stage. It would make sense, and perhaps bring greater commitment from the national bodies and other agencies, if they were directly involved, rather than simply sending people to remote-feeling committees of standards organisations. In this case, it would be up to the French, whose Ministry of Education seems to be wanting something like this, to arrange to consult with others, to put together an implemented proposal that has a good chance of achieving European consensus.

We agreed that it was a good idea for the French proposal to use the “MOOC” label to gain interest and motivation, while the work would in no way be limited to MOOCs. And it’s important to get on board both some MOOC providers, and related though different, some of the agencies who aggregate information about MOOCs (etc.) and offer information about them through portals so that people can find appropriate ones. The additional new metadata would of course be designed to make that search more effective, in that more of the things that people ask about will be modelled explicitly.

So, let’s move on to the Swedish proposal. This was presented under the title “Linked and Open Data for Learning and Education”, based on their national project “Linked and Open Data in Schools” (LODIS). We agreed that it isn’t really on for a National Body simply to propose a national output for European agreement, without giving evidence on why it would be helpful. In the past, the Workshop would have been a fair place to bring this kind of raw idea, and we could have all pitched in with anything relevant. But under our new arrangements, we need the Swedes themselves to lead some cross-European collaboration to fill in the motivation, and do the necessary research and comparison.

There are additional questions also relevant to both proposals. How will they relate to the big international and American players? For example, are we going to get schema.org to take these ideas on, in the fullness of time? How so? Does it matter? (I’m inclined to think it does matter.)

I hope the essentials of the new approach are apparent in both cases. The principle is that TC 353 acts as a mediator and referee, saying “OK” to the idea that some area might be ripe for further work, and encouraging people to get on with it. I would, however, suggest that three vital conditions should apply, for this approach to be effective as well as generally acceptable.

  1. The principal stakeholders have to arrange the work themselves, with enough trans-national collaboration to be reasonably sure that the product will gain the European consensus needed in the context of CEN.
  2. The majority of the drafting and testing work is done clearly before a formal process is started in CEN. In our sector, it is vital that the essential ideas are free and open, so we want an openly licenced document to be presented to the TC as a starting point, as close as can be to the envisioned finishing point. CEN will still add value through the formal process and formal recognition, but the essential input will still be openly and freely licenced for others to work with in whatever way they see fit.
  3. The TC must assert the right to stop and revoke the CEN work item, if it turns out that it is not filling a genuine European need. There is room for improvement here over the past practice of the TC and the WS-LT. It is vital to our reputation and credibility, and to the ongoing quality of our output, that we are happy to reject work that is not of the right quality for CEN. Only in this way can CEN stakeholders have confidence in a process that allows self-organising groups to do all the spadework, prior to and separate from formal CEN process and oversight.

At the meeting we also heard that the ballot on the TC 353 marketing website was positive. (Disclosure: I am a member of the TC 353 “Communications Board” who advised on the content.) Hopefully, a consequence of this will be that we are able to use the TC 353 website both to flag areas for which TC 353 believes there is potential for new work, and to link to the pre-standardization work that is done in those areas that have been encouraged by the TC, wherever that work is done. We hope that this will all help significantly towards our aim of effectively open standardization work, even where the final resulting EN standards remain as documents with a price tag.

I see the main resolutions made at the meeting as enacting this new role. TC 353 is encouraging proposers of new work to go ahead and develop mature open documentation, and clear standardization proposals, in whatever European collaborations they see fit, and bring them to a future TC meeting. I’d say that promises a new chapter in the work of the TC, which we should welcome, and we should play our part in helping it to work effectively for the common good.

Educational Technology Standardization in Europe
http://blogs.cetis.org.uk/asimong/2013/10/29/ed-tech-standards-europe/ (Tue, 29 Oct 2013)

The current situation in Europe regarding the whole process of standardization in the area of ICT for Learning, Education and Training (LET) is up in the air just now, because of a conflict between how we, the participants, see it best proceeding, and how the formal de jure standards bodies are reinforcing their set-up.

My dealings with European learning technology standardization colleagues in the last few years have probably been at least as extensive as those of any other single CETIS staff member. Because of my work on European Learner Mobility and InLOC, since 2009 I have attended most of the meetings of the Workshop Learning Technologies (which also has an official page), and I have also been involved centrally in the eCOTOOL and, to a lesser extent, the ICOPER European projects.

So what is going on now — what is of concern?

In CETIS, we share some common views on how the standardization process should be taken forward. During the course of specification development, it is important to involve the people implementing the specifications, and not just people who theorise about them. In the case of educational technology, the companies who are most likely to use the interoperability specifications we are interested in tend to be small and agile. They are helped by specifications that are freely available, and available as soon as they are agreed. Having to pay for them is an unwelcome obstacle. They need to be able to implement the specifications without any constraints or legal worries.

However, over the course of this last year, CEN has reaffirmed long-standing positions which don’t match our requirements. The issue centres partly around perceived business models. The official standards bodies make money from selling copies of standards documents. In a paper-based, slow-moving world, one can see some sense in this. Documents may have been costly to produce, and businesses relying on a standard wanted a definitive copy. We see similar issues and arguments around academic publishing. In both fields, it is clear that the game is continuing to change, but hasn’t reached a new stable state yet. What we are saying is that, in our area, this traditional business model is never likely to be justified, and it’s difficult to imagine the revenues materialising.

The European learning technology standardization community have been lucky in past years, because the official standards bodies have tolerated activity which is not profitable for them. Now — we can only guess, because of financial belts being tightened — CEN at least is not going to continue tolerating this. Their position is set out in their freely available Guides.

Guide 10, the “Guidelines for the distribution and sales of CEN-CENELEC publications”, states:

Members shall exercise these rights in accordance with the provisions of this Guide and in a way that protects the integrity and value of the Publications, safeguards the interests of other Members and recognizes the value of the intellectual property that they contain and the costs to the CEN-CENELEC system of its development and maintenance.
In particular, Members shall not make Publications, including national implementations and definitive language versions, available free of charge to general users without the specific approval of the Administrative Boards of CEN and/or CENELEC.

And, just in case anyone was thinking of circumventing official sales by distributing early or draft versions, this is expressly forbidden.

6.1.1 Working drafts and committee drafts
The distribution of working drafts, committee drafts and other proceedings of CEN-CENELEC technical bodies and Working Groups is generally restricted to the participants and observers in those technical bodies and Working Groups and they shall not otherwise be distributed.

So there it is: specification development under the auspices of CEN is not allowed to be open, despite our view that openness works best in any case, and that it is genuinely needed in our area.

As if this were not difficult enough, the problems extend beyond the copyright of standards documentation. After a standard is agreed, it has to be “implemented”, of course. What kind of use is permitted, and under what terms? A fully open standard will allow any kind of use without royalty or any other kind of restriction, and this is particularly relevant to developers of free and open source software. One specification can build on another, and this can get very tricky if there are conditions attached to implementation of specifications. I’ve come across cases where a standardization body won’t reuse a specification because it is not clear that it is licenced freely enough.

So what is the CEN position on this? Guide 8 (December 2011) is the “CEN-CENELEC Guidelines for Implementation of the Common IPR Policy on Patent”. Guide 8 does say that the use of official standards is to be free of royalties, but at the end of Clause 4.1 one senses a slight hesitation:

The words “free of charge” in the Declaration Form do not mean that the patent holder is waiving all of its rights with respect to the essential patent. Rather, it refers to the issue of monetary compensation; i.e. that the patent holder will not seek any monetary compensation as part of the licensing arrangement (whether such compensation is called a royalty, a one-time licensing fee, etc.). However, while the patent holder in this situation is committing to not charging any monetary amount, the patent holder is still entitled to require that the implementer of the above document sign a licence agreement that contains other reasonable terms and conditions such as those relating to governing law, field of use, reciprocity, warranties, etc.

What does this mean in practice? It seems unclear in a way that could cause considerable concern. And when thinking of potential cumulative effects, Definition 2.9 defines “reciprocity” thus:

as used herein, requirement for the patent holder to license any prospective licensee only if such prospective licensee will commit to license, where applicable, its essential patent(s) or essential patent claim(s) for implementation of the same above document free of charge or under reasonable terms and conditions

Does that mean that the implementer of a standard can impose any terms and conditions that are arguably reasonable on its users, including payments? Could this be used to change the terms of a derivative specification? We — our educational technology community — really don’t need this kind of unclarity and uncertainty. Why not have just a plain, open licence?

What seems to be happening here is the opposite of the arrangement known as “copyleft“. While under “copyleft”, any derivative work has to be similarly licenced, under the CEN terms, it seems that patent holders can impose conditions, and can allow companies implementing their patents to impose more conditions or charge any reasonable fees. Perhaps CEN recognises that they can’t expect everyone to give them all of the cake? To stretch that metaphor a bit, maybe we are guessing that much of the educational technology community — the open section that we believe is particularly important — has no appetite for that kind of cake.

The CEN Workshop on Learning Technologies has suspended its own proceedings for reasons such as the above, and several of us are trying to think of how to go forward. It seems that it will be fruitless to try to continue under a strict application of the existing rules. The situation is difficult.

Perhaps we need a different approach to consensus process governance. Yes, that reads “consensus process governance”, a short phrase, apparently never used before, but packed full of interesting questions. If we have heavyweight bodies sitting on top of standardization, it is no wonder that people have to pay (in whatever way) for those staff, those premises, that bureaucracy.

It is becoming commonplace to talk of the “1%” extracting more and more resource from us “99%“. (See e.g. videos like this one.) And naturally any establishment tends to seek to preserve itself and feather its own nest. But the real risk is that our community is left out, progressively deprived of sustenance and air, with the strongest vested interests growing fatter, continually trying to tighten their grip on effective control.

So, it is all the more important to find a way forward that is genuinely collaborative, in keeping with a proper consensus, fair to all including those with less resource, here in standardization as in other places in society. I am personally up for collaborating with others to find a better way forward, and hope that we will make progress together under the CETIS umbrella — or indeed any other convenient umbrella that can be opened.

Open Badges, Tin Can, LRMI can use InLOC as one cornerstone
http://blogs.cetis.org.uk/asimong/2013/07/31/open-badges-tin-can-lrmi-can-use-inloc-as-one-cornerstone/ (Wed, 31 Jul 2013)

There has been much discussion recently about Mozilla Open Badges, xAPI (Experience API, alias “Tin Can API”) and LRMI, as new and interesting specifications to help bring standardization particularly into the world of technology and resources involved with people and their learning. They have all reached their “version 1” this year, along with InLOC.

InLOC can quietly serve as a cornerstone of all three, providing a specification for one of the important things they may all want to refer to. InLOC allows documentation of frameworks of learning outcomes, competencies, abilities, whatever you call them, that describe what people need to know and be able to do.

Mozilla has been given, and devoted, plenty of resource to their OpenBadges effort, and as a result it is widely known about, though not so well known is the rapid and impressive development of the actual specification. The key part of the spec is how OpenBadges represents the “assertions” that someone has achieved something. The thing that people achieve (rather than its achievement) could well be represented in an InLOC framework.

Tin Can / Experience API (I’ll use the customary abbreviation “xAPI”) has also been talked about widely, as a successor to SCORM. The xAPI “makes it possible to collect the data about the wide range of experiences a person has (online and offline)”. This clearly includes “experiences” such as completing a task or attaining a learning outcome. But xAPI does not deal with the relationships between these. If one greater learning outcome was composed of several lesser ones, it wouldn’t be natural to represent that fact in xAPI itself. That is where InLOC naturally comes in.

LRMI (“Learning Resource Metadata Initiative”) is, as one would expect, designed to help represent metadata about learning resources, in a way that is integrated with schema.org. What if many of those learning resources are designed to help a learner achieve an intended learning outcome? LRMI can naturally refer to such a learning outcome, but is not designed to represent the structures themselves. Again, InLOC can do that.

What would be chaotic would be if these three specifications, each one potentially very useful in its own way, all specified their own, possibly incompatible ways of representing the structures or frameworks that are often created to bring common ground and order to this whole area of life.

Please don’t let that happen! Instead, I believe we should be using InLOC for what it is good at, leaving each other spec to handle its own area, and no one shamefully “reinventing the wheel”.

Draft proposals

These proposals are only initial proposals at present, looking forward to discussion with other people involved with or interested in the other three specifications. Please don’t hesitate to suggest better ways if you can see them.

OpenBadges

The Assertions page gives the necessary detail of how the OpenBadges spec works.

  • The BadgeClass criteria property means the “URL of the criteria for earning the achievement.” If there is an InLOC LOCdefinition or LOCstructure that represents these criteria, as there could well be, then the natural mapping would be for the criteria property simply to hold the URI, either of the (single) LOCdefinition, or of the LOCstructure that comprises all of the definitions together.
  • The BadgeClass alignment property gives a list of “objects describing which educational standards this badge aligns to, if any.” In cases where there is no LOCdefinition or LOCstructure representing the whole of the badge criteria, it seems natural to put a set of LOCdefinition URIs into the (multiple) objects of this property — which are AlignmentObjects.
  • Each AlignmentObject has the following properties, which map directly onto InLOC.
    • name: this could be the title of a LOCdefinition
    • url: this could be the id of the same LOCdefinition
    • description: this could be the description of the same LOCdefinition

One could also potentially take both approaches at the same time.
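As an illustration of how these mappings might look in practice, here is a minimal sketch in Python of a BadgeClass whose criteria points at an InLOC LOCstructure, and whose alignment objects point at individual LOCdefinitions. The InLOC URIs, titles and descriptions are invented for the example; only the property names come from the OpenBadges assertion specification as described above.

```python
import json

# Hypothetical URIs for a published InLOC LOCstructure and its component LOCdefinitions.
LOC_STRUCTURE = "http://example.org/inloc/web-skills"
LOC_DEFINITIONS = [
    ("Searching", "http://example.org/inloc/web-skills/searching",
     "Locating information, people and resources via the web"),
    ("Evaluating", "http://example.org/inloc/web-skills/evaluating",
     "Critically evaluating information found on the web"),
]

badge_class = {
    "name": "Web Searcher",
    "description": "Awarded for demonstrating basic web search and evaluation skills",
    "image": "http://example.org/badges/web-searcher.png",
    "issuer": "http://example.org/issuer.json",
    # criteria: the URL of the criteria for earning the achievement; here, the URI
    # of the LOCstructure that gathers all the relevant definitions together.
    "criteria": LOC_STRUCTURE,
    # alignment: one AlignmentObject per LOCdefinition, with name/url/description
    # taken from the InLOC title/id/description.
    "alignment": [
        {"name": title, "url": uri, "description": description}
        for title, uri, description in LOC_DEFINITIONS
    ],
}

print(json.dumps(badge_class, indent=2))
```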

I will record more detail, and change it as it evolves, on the InLOC wiki.

xAPI

The developers call this the Tin Can API, but their sponsors, ADL, call it the Experience API or xAPI.

The specification (v1.0.1, 2013-10-01) can be read in this PDF document.

Tin Can is based around the statement. This is defined as “a simple construct consisting of <actor (learner)> <verb> <object>, with <result>, in <context> to track an aspect of a learning experience.” There are a number of ways in which a statement could relate to a learning outcome or competence. How might these correspond to InLOC?

  1. If the statement “verb” is something like completed, or mastered, or passed, the “object” could well be something like a learning outcome, or an assessment directly related to a learning outcome. The object has two properties on top of the expected objectType:
    • id: this can be the same as a LOC id in InLOC
    • definition: this in turn has recommended properties of:
      1. name: this is proposed as the LOC title
      2. description: this is proposed as the LOC description
      3. type: this is proposed as the URI for LOCdefinition or LOCstructure
  2. The statement could be that some experiences were had (e.g. an apprenticeship), and the result was the learning outcome or competence. It might therefore be useful to give the URI of an InLOC-formatted learning outcome as part of an xAPI result. Unfortunately, none of the specified properties of the Result object have a URI type, so the URI of a LOC definition would have to go in the extensions property of the result.
  3. Often in personal or professional development planning, it is useful to record what is planned. An example of how to represent this, with the object as a sub-statement, is given in the spec section 4.1.4.3, page numbered 20. The sub-statement can be something similar to the first option above.
  4. A learning outcome may form part of the context of an activity in diverse ways. If it is not one of the above, it may be possible to use the context property of a statement, either as a statement reference in the statement property of the context, or as part of the context‘s extensions.

In essence, the clearest and most straightforward way of linking to an InLOC LOCstructure or LOCdefinition is as a statement object, rather than its result or context. The other approaches could be seen as giving too many options, which may lead away from useful interoperability.
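As a sketch of that first, preferred option, the following Python builds a hypothetical xAPI statement whose object is a learning outcome identified by its InLOC URI. The actor, the LOC URI and the “type” URI marking the activity as a LOCdefinition are all invented for illustration; the statement shape (actor, verb, object, with name and description as language maps) follows the xAPI 1.0 structure.

```python
import json

# Hypothetical InLOC LOCdefinition URI for the learning outcome that was mastered.
LOC_DEFINITION = "http://example.org/inloc/web-skills/searching"

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/mastered",
        "display": {"en-GB": "mastered"},
    },
    # The object is the learning outcome itself, identified by its InLOC URI.
    "object": {
        "objectType": "Activity",
        "id": LOC_DEFINITION,
        "definition": {
            "name": {"en-GB": "Searching"},  # the LOC title
            "description": {"en-GB": "Locating information, people and resources via the web"},  # the LOC description
            # An assumed type URI indicating that this activity is an InLOC LOCdefinition.
            "type": "http://example.org/inloc/types/LOCdefinition",
        },
    },
}

print(json.dumps(statement, indent=2))
```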

I will record more detail, and change it as it evolves, on the InLOC wiki.

LRMI

The documentation for the Learning Resource Metadata Initiative is at http://www.lrmi.net/. The specification, and its correspondence with InLOC, is very simple. All the properties are naturally understood as properties of a learning resource. The property relevant to InLOC is educationalAlignment, whose object is an AlignmentObject.

Here, the LRMI AlignmentObject properties are mapped to LOCdefinition properties.

  • targetURL: LOCdefinition id
  • targetName: LOCdefinition title
  • targetDescription: LOCdefinition description
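By way of illustration, this Python sketch builds the equivalent JSON-style structure for a single learning resource, with an educationalAlignment whose AlignmentObject points at an InLOC LOCdefinition. The resource, the framework URIs and the “teaches” alignmentType are invented for the example; the property names follow schema.org’s AlignmentObject.

```python
import json

# A learning resource aligned to a hypothetical InLOC LOCdefinition.
learning_resource = {
    "@context": "http://schema.org",
    "@type": "CreativeWork",
    "name": "Introduction to effective web searching",
    "educationalAlignment": [
        {
            "@type": "AlignmentObject",
            "alignmentType": "teaches",
            "targetUrl": "http://example.org/inloc/web-skills/searching",  # LOCdefinition id
            "targetName": "Searching",  # LOCdefinition title
            "targetDescription": "Locating information, people and resources via the web",  # LOCdefinition description
        }
    ],
}

print(json.dumps(learning_resource, indent=2))
```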

I will record more detail, and change it as it evolves, on the InLOC wiki.

What this all means

xAPI and LRMI

The implications for xAPI and LRMI are just that they could suggest InLOC as a possible format for the publication of frameworks that they may want to refer to. Neither spec has pretensions to cover this area of frameworks, and the existence of InLOC should help to prevent people inventing diverse solutions, when we really want one standard approach to help interoperability.

A question remains about what a suitable binding of InLOC would be for both specs. In many ways it should not matter, as it will be the URIs and some values that will be used for reference from xAPI and LRMI, not any of the InLOC syntax. However, it might be useful to remember that xAPI’s native language is JSON, and LRMI’s is HTML, with added schema.org markup using microdata or RDFa. Neither of these bindings has been finalised for InLOC, so an opportunity exists to ensure that suitable bindings are agreed, while still conforming to the InLOC information model in one or other form.

OpenBadges

The case of Mozilla Open Badges is perhaps the most interesting. Clearly, there is a potential interest for badges to link to representations of learning outcomes or competences as defined by relevant authorities. It is so much more powerful when these representations reside in a common space that can be referred to by anyone (including e.g. xAPI and LRMI users, personal development, portfolio, and recruitment systems). It is easy to see how badges could usefully become “metadata-infused” tokens of the achievement of something that is already defined elsewhere. Redefining those things would simply confuse people.

InLOC solves several problems that OpenBadges should not have to worry about. One is representing equivalence (or not) between different competencies. That is provided for straightforwardly within InLOC, and should be done by the authorities defining the competencies, whether or not they are the same people as those who define and issue the badges.

Second, InLOC gives a clear, comprehensive and predefined vocabulary for how different competencies relate to each other. Mozilla’s Web Literacy Standard defines a tree structure of “literacies”, “competencies” and “skills”. Other frameworks and standards use other terms and concepts. InLOC is generic enough to represent all the relationships in all of these structures. As with equivalencies, the badge issuer should not have to define, for example, what roles require what skills and what knowledge. That should be up to occupational domain experts.

But OpenBadges do require some way to represent the fact that one, greater, badge can stand for a number of lesser badges. This is necessary to avoid being drowned in a flood of badges, each one so small that it is unrecognisable or insignificant.

While so many frameworks have not been expressed in a machine-processable format like InLOC, there will remain a requirement for an internal mechanism within OpenBadges to specify precisely which set of lesser badges is represented by a single greater badge. But when the InLOC structures are in place, and all the OpenBadges in question refer to InLOC URIs for their criteria, we can look forward to automatic consistency checking of super-badges. To check a greater badge against a set of lesser component badges, check that the criteria structure or definition for the greater badge has parts (as defined by InLOC relationships) which are each the criteria of one of the set of lesser badges.
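That consistency check could be automated along the following lines. This is only a sketch: it assumes each badge’s criteria holds an InLOC URI, and that the parts of a greater criteria structure have already been extracted from its InLOC associations into a simple lookup. The data shapes and names here are illustrative, not part of either specification.

```python
def badge_is_consistent(greater_badge, lesser_badges, loc_parts):
    """Check that a greater badge genuinely stands for a set of lesser badges.

    greater_badge and each lesser badge are dicts with a 'criteria' key holding
    an InLOC URI; loc_parts maps a criteria URI to the set of URIs of its parts,
    as extracted from the InLOC associations of the published framework.
    """
    parts = loc_parts.get(greater_badge["criteria"], set())
    # Every lesser badge's criteria must be one of the parts of the
    # greater badge's criteria structure or definition.
    return all(badge["criteria"] in parts for badge in lesser_badges)


# Hypothetical example data
loc_parts = {
    "http://example.org/inloc/web-skills": {
        "http://example.org/inloc/web-skills/searching",
        "http://example.org/inloc/web-skills/evaluating",
    }
}
greater = {"criteria": "http://example.org/inloc/web-skills"}
lesser = [
    {"criteria": "http://example.org/inloc/web-skills/searching"},
    {"criteria": "http://example.org/inloc/web-skills/evaluating"},
]
print(badge_is_consistent(greater, lesser, loc_parts))  # prints: True
```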

As with xAPI, JSON is the native language of OpenBadges, so one task that remains to be completed is to ensure that there is a JSON binding of InLOC that satisfies both the OpenBadges and the Tin Can communities.

That should be it! Is it?

A new (for me) understanding of standardization
http://blogs.cetis.org.uk/asimong/2013/05/06/understanding-standardization/ (Mon, 06 May 2013)

When engaging deeply in any standardization project, as I have with the InLOC project, one is likely to get new insights into what standardization is, or should be. I tried to encapsulate this in a tweet yesterday, saying “Standardization, properly, should be the process of formulation and formalisation of the terms of collective commitment”.

Then @crispinweston replied “Commitment to whom and why? In the market, fellow standardisers are competitors.” I continued, with the slight frustration at the brevity of the tweet format, “standards are ideally agreed between mutually recognising group who negotiate their common interest in commitment”. But when Crispin went on “What role do you give to the people expected to make the collective commitment in drafting the terms of that commitment?” I knew it was time to revert from micro-blogging to macro-blogging, so to speak.

Crispin casts me in the position of definer of roles — I disclaim that. I am trying, rather, firstly to observe and generalise from my observations about what standardization is, when it is done successfully, whether or not people use or think of the term “standardization”, and secondly, to intuit a good and plausible way forward, perhaps to help grow a consensus about what standardization ought to be, within the standardization community itself.

One of the challenges of the InLOC project was that the project team started from more or less carte blanche. Where there is a lot of existing practice, standardization can (in theory at least) look at existing practice, and attempt to promote standardization on the best aspects of it, knowing that people do it already, and that they might welcome (for various reasons) a way to do it in just one way, rather than many. But in the case of InLOC, and any other “anticipatory” standard, people aren’t doing closely related things already. What they are doing is publishing many documents about the knowledge, skills, competence, or abilities (or “competencies”) that people need for particular roles, typically in jobs, but sometimes as learners outside of employment. However, existing practice says very little about how these should be structured, and interrelated, in general.

So, following this “anticipatory” path, you get to the place where you have the specification, but not the adoption. How do you then get the adoption? It can only be if you have been either lucky, in that you’ve formulated a need that people naturally come to see, or persuasive, in that you persuade people successfully that it is what they really (really) want.

The way of following, rather than anticipating, practice certainly does look the easier, less troubled, surer path. Following in that way, there will be a “community” of some sort. Crispin identifies “fellow standardisers” as “competitors” in the market. “Coopetition” is a now rather old neologism that comes to mind. So let me try to answer the spirit at least of Crispin’s question — not the letter, as I am seeing myself here as more of an ethnographer than a social engineer.

I envisage many possible kinds of community coming together to formulate the terms of their collective commitments, and there may be many roles within those communities. I can’t personally imagine standard roles. I can imagine the community led by authority, imposing a standard requirement, perhaps legally, for regulation. I can imagine a community where any innovator comes up with a new idea for agreeing some way of doing things, and that serves to focus a group of people keen to promote the emerging standard.

I can imagine situations where an informal “norm” is not explicitly formulated at all, and is “enforced” purely by social peer pressure. And I can imagine situations where the standard is formulated by a representative body of appointees or delegates.

The point is that I can see the common thread linking all kinds of these practices, across the spectrum of formality–informality. And my view is that perhaps we can learn from reflecting on the common points across the spectrum. Take an everyday example: the rules of the road. These are both formal and informal; and enforced both by traffic authorities (e.g. police) and by peer pressure (often mediated by lights and/or horn!)

When there is a large majority of a community in support of norms, social pressure will usually be adequate, in the majority of situations. Formal regulation may be unnecessary. Regulation is often needed where there is less of a complete natural consensus about the desirability of a norm.

Formalisation of a norm or standard is, to me, a mixed blessing. It happens — indeed it must happen at some stage if there is to be clear and fair legal regulation. But the formalisation of a standard takes away the natural flexibility of a community’s response both to changing circumstances in general, and to unexpected situations or exceptions.

Time for more comment? You would be welcome.

InLOC moving on http://blogs.cetis.org.uk/asimong/2013/04/30/inloc-moving-on/ http://blogs.cetis.org.uk/asimong/2013/04/30/inloc-moving-on/#comments Tue, 30 Apr 2013 12:37:57 +0000 http://blogs.cetis.org.uk/asimong/?p=1436 Today is the final day of the InLOC project — a European ICT Standardization Work Programme project I have been leading since November 2011. So a good day for an initial review and reflection. I blogged some previous thoughts on InLOC in November 2012 and February this year, and these thoughts are based on some aspects of the project’s final report.

InLOC — Integrating Learning Outcomes and Competences — is all about devising a good way of representing and communicating structures of learning outcomes, competence, skills, competencies, etc. that can be defined by framework owners, and used by many kinds of ICT tools, including those supporting: specifying learning outcomes of courses; claiming skills and competences in portfolios; recruitment and specifying job requirements; learning objectives relevant to resources; and possibly many more.

Project outcomes

We have produced three CEN Workshop Agreements: two formally approved and awaiting publication (the Information Model and the Guidelines), and one where a workshop vote will be concluded in the coming days (the Application Profile: we don’t expect any problems). Further outputs include the technical bindings, and two demo prototypes kindly contributed by others.

The Information Model

The InLOC Information Model makes a number of key advances with respect to previous and related work. “LOC” here stands for “Learning Outcome or Competence”.

  1. A clear distinction is made between a LOCdefinition and a LOCstructure.
    • A LOCdefinition is similar in some ways to IMS RDCEO or IEEE RCD. Any idea of structure is kept separate from it, so that the definition can potentially be reused in different structures. Thus, a LOCdefinition expresses just one concept of a learning outcome, competence, etc.
    • A LOCstructure holds the information about structure and compound properties, kept separate from any single definition. While it is recognised that in practice the two are often mixed, the InLOC specification separates them for clarity and for effective implementation.
  2. A clear distinction is made between defining levels with level definitions, and attributing levels (from another scheme) to definitions. This is explained in the InLOC treatment of levels. It is necessary for logical clarity, and therefore, sooner or later, for applications. A decimal number is introduced as a key part of the model, to allow level information to be processed automatically.
  3. A single structural form, the LOCassociation, is used both to represent relationships between LOC structures and definitions, and to represent several different kinds of compound property, each with more than one part. This results in structures that are easier to process, with fewer distinct information model components. It is also what makes it relatively easy to represent InLOC naturally in RDF, with only minor changes to the model.

Within InLOC in general, a recurrent pattern is of one identifier together with a set of multilingual titles or labels. This is a common pattern elsewhere, and ensures that InLOC representations can naturally work multilingually.
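To make these distinctions a little more concrete, here is a minimal sketch in Python of the three components just described. This is only my own illustration: the class and field names are assumptions chosen for readability, not the normative names or cardinalities defined in the CWA.

```python
# Illustrative sketch only -- not the normative InLOC binding.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class LOCdefinition:
    """One concept of a learning outcome or competence, kept free of structure."""
    identifier: str                                        # a URI, reusable across structures
    titles: Dict[str, str] = field(default_factory=dict)   # language code -> title or label


@dataclass
class LOCassociation:
    """Relates a structure or definition to another, or carries a compound property."""
    subject_id: str                  # identifier of the structure or definition being described
    scheme: str                      # the kind of relationship (assumed name, e.g. "hasPart")
    object_id: str                   # identifier of the related definition or structure
    number: Optional[float] = None   # the decimal used to order and attribute levels


@dataclass
class LOCstructure:
    """The structural and compound information, held separately from any one definition."""
    identifier: str
    titles: Dict[str, str] = field(default_factory=dict)
    associations: List[LOCassociation] = field(default_factory=list)
```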

The Information Model structure is illustrated as a UML diagram, and there are many more illustrative diagrams in the Guidelines section on InLOC explained through example.

The Guidelines

The central feature of the Guidelines is a detailed examination of a cross-section of the European e-Competence Framework, given as a good example of the power and flexibility of InLOC in a case from real life. The e-CF is a useful example for InLOC because it identifies five levels of competence. It is analysed in the section on InLOC explained through example, which includes further diagrams illustrating the Information Model and how it is applied in this case.

The e-CF is fully expressed here in InLOC XML format.
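To give a feel for what such an expression amounts to, here is a small continuation of the Python sketch above, modelling one e-CF-style competence with a level attribution. The identifiers, titles and relation names are invented for illustration; they are not quoted from the published e-CF, nor from the InLOC XML binding.

```python
# Purely illustrative data, reusing the sketch classes defined earlier.
competence = LOCdefinition(
    identifier="http://example.org/ecf/A.1",
    titles={"en": "Aligning information systems with business strategy"},
)
level_e3 = LOCdefinition(
    identifier="http://example.org/ecf/levels/e-3",
    titles={"en": "Proficiency level e-3"},
)
ecf_fragment = LOCstructure(
    identifier="http://example.org/ecf",
    titles={"en": "e-CF (illustrative fragment)"},
    associations=[
        # the framework has the competence as a part (assumed relation name)
        LOCassociation(subject_id="http://example.org/ecf",
                       scheme="hasPart",
                       object_id=competence.identifier),
        # the competence is defined at a particular level; the decimal number
        # lets software order and compare levels automatically
        LOCassociation(subject_id=competence.identifier,
                       scheme="hasDefinedLevel",
                       object_id=level_e3.identifier,
                       number=3.0),
    ],
)
```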

Application Profile of Europass CV and Language Passport

The most used Europass instrument is the Europass CV, and Cedefop have recently been revamping it. It is a kind of simple e-portfolio, and the challenge here is to allow it to refer effectively to InLOC structures, so that the end users — the people who have the skills and competences they want to show off — can refer directly to InLOC identifiers, and so have a better hope of having them accurately recognised and found in relevant searches. For the Europass CV, the InLOC team have proposed a modification of their XML Schema, and it looks like several if not all of our proposals will be taken on by Cedefop, paving the way for the Europass CV to become a leading example of the use of InLOC structures in practice.
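To illustrate the intention, and only that, a CV skill entry that carries an InLOC identifier alongside its free text might look something like the sketch below; the field names are invented, and are not the Europass XML Schema.

```python
# A hypothetical CV skill entry: the free text stays, but the added identifier
# is what search and recognition tools could match on reliably.
cv_skill_entry = {
    "freeText": "Aligning information systems with business strategy",
    "relatedLOCdefinition": "http://example.org/ecf/A.1",  # the InLOC identifier
}
```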

Technical bindings

No information model is complete without suggestions for how to bind it to currently relevant technologies. The ones chosen by InLOC were:

We hope that they are reasonably clear and self-explanatory.

While the project found no great motivation for developing other bindings, I personally believe that it would be very valuable in the future to develop something with RDFa and schema.org.
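As a hint of what that might involve, here is a speculative sketch using rdflib, with an invented namespace and class name standing in for whatever vocabulary such a binding would actually define.

```python
# Speculative sketch: express a LOCdefinition as RDF triples and print as Turtle.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

INLOC = Namespace("http://example.org/inloc/")   # assumed namespace, not normative

g = Graph()
defn = URIRef("http://example.org/ecf/A.1")
g.add((defn, RDF.type, INLOC.LOCdefinition))
g.add((defn, DCTERMS.title, Literal("Aligning information systems with business strategy", lang="en")))

print(g.serialize(format="turtle"))
```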

Prototypes

We have been really lucky to have two initiatives filling in where the project was not funded to deliver. There is a Viewer-editors page on the project wiki with access details.

Challenges

The main challenge in this project has been trying to generate interest and contributions from the parties concerned. It’s not that the topic isn’t important, just that, as usual, busy people need a pressing reason to engage with this kind of activity. This challenge is endemic to all “anticipatory” standardization work. Before there is either a policy mandate or a clear economic interest, it takes spare effort and a clear vision for people to be willing to engage.

I’m intending to write more about what this means for my own personal view of what standardization could best be, or perhaps “should” be.

Recommendations

It seems to me good practice to make some recommendations at the end of the project — after all, if one has been engaged in some good work, there should be some ways forward that are clearer at the end than at the beginning. The recommendations that the team agreed included:

  • focusing on trying to get people to publish frameworks in InLOC, as this will in turn motivate tool builders;
  • ensuring that when people are ready to adopt InLOC, they can find resources and expertise;
  • persuading developers to make it easy for users to refer to definitions within InLOC structures;
  • getting other Workshop projects, and other European projects, to use InLOC where possible;
  • working on APIs, and on automatic configuration of key domain terms within user interfaces.
Where are the customers? http://blogs.cetis.org.uk/asimong/2012/01/16/where-are-the-customers/ http://blogs.cetis.org.uk/asimong/2012/01/16/where-are-the-customers/#comments Mon, 16 Jan 2012 10:21:10 +0000 http://blogs.cetis.org.uk/asimong/?p=1006 All of us in the learning technology standards community share the challenge of knowing who our real customers are. Discussion at the January CEN Workshop on Learning Technologies (WS-LT) was a great stimulus for my further reflection — should we be thinking more of national governments?

Let’s review the usual stakeholder suspects: education and training providers; content providers; software developers; learners; the European Commission. I’ll gesture (superficially) towards arguing that each of these may indeed be a stakeholder, but the direction of the argument is that there is a large gap in our clientele and attendance where those who are directly interested, and could pay, ought to be.

Let’s start with the providers of education and training. They certainly do have an interest in standards, otherwise why would JISC be supporting CETIS? But rarely do they implement standards directly. They are interested, so our common reasoning goes, in having standards-compliant software, so that they can choose between software and migrate when desired, avoiding lock-in. But do they really care about what those standards are? Do they, specifically, care enough to contribute to their development and to the bodies and meetings that take forward that development?

In the UK, as we know, JISC acts as an agent on behalf of UK HEIs and others. This means that, in the absence of direct interest from HEIs, it is JISC that ends up calling the shots. (Nothing inherently wrong with that – there are many intelligent, sensible people working for JISC.) Many of us play a part in the collective processes by which JISC arrives at decisions about what it will fund. We are left hoping that JISC’s customers appreciate this, but it is less than entirely clear how much they appreciate the standardisation aspect.

I’ll be even more cursory about content providers, as I know little about that field. My guess is that many larger providers would welcome the chance of excluding their competitors, and that they participate in standardisation only because they can’t get away with doing differently. Large businesses are too often amoral beasts.

How about the software vendors, then? We don’t have to look far for evidence that large purveyors of proprietary software may be hostile in spirit to standardisation of what their products do, and that they are kept in line, if at all, only by pressure from those who purchase the software. In contrast, open source developers, and smaller businesses, typically welcome standards, allowing work to be reused as much as possible.

In my own field of skills and competence, there are several players interested in managing the relevant information about skills and competence, including (in the UK) Sector Skills Councils, and bodies that set curricula for qualifications. But they will naturally need some help to structure their skill and competence information, and for that they will need tools, either that they develop themselves or buy. It is those tools that are in line to be standards compliant.

And what of the learners themselves? Seems to me “they” (including “we” users) really appreciate standards, particularly if it means that our information can be moved easily between different systems. But, as users, few of us have influence. Outside the open source community, which is truly wonderful, I can’t easily recall any standards initiative funded by ordinary users. Rather, the influence we and other users have is often doubly indirect: filtered through those who pay for the tools we use, and through those who develop and sell those tools.

The European Commission, then? Maybe. We do have the ICT Standardisation Work Programme (ICTSWP), sponsored by DG Enterprise and Industry. I’m grateful that they are sponsoring the work I am doing for InLOC, though isn’t the general situation a bit like JISC? It is all down to which priorities happen to be on the agenda (of the EC this time), and the EC is rather less open to influence than JISC. Whether an official turns up to a CEN Workshop seems to depend on the priorities of that official. André Richier (the official named in “our” bit of the ICTSWP) often turns up to the Workshop on ICT Skills, but rarely to our Workshop. In any case they are not the ultimate customers.

What are the actual interests of the EC? Mobility, evidently. There has been so much European funding over the years with the term “mobility” attached. Indeed, the InLOC work is seen as part of the WS-LT’s work on European Learner Mobility. Apart from mobility, the EC must have some general interest in the wellbeing of the European economy as a whole, but this is surely difficult where the interests of different nations diverge. More of this later.

In the end, many people don’t turn up, for all these reasons. They don’t turn up at the WS-LT; they don’t turn out in any real strength for the related BSI committee, IST/43; few of the kinds of customer I’m thinking about even turn up at ISO SC36.

Who does turn up then? They are great people. They are genuinely enthusiastic about standardisation, and have many bright ideas. They are mostly in academia, small (often one-person) consultancy, projects, networks or consortia. They like European, national, or any funding for developing their often genuinely good ideas. Aren’t so many of us like that? But there were not even many of us lot at this WS-LT meeting in Berlin. And maybe that is how it goes – when starved of the direct stimulus of the people we are doing this for, we risk losing our way, and the focus, enthusiasm and energy dwindle, even within our idealistic camp.

Before I leave our esteemed attendees, however, I would like to point out the most promising bodies that were represented at the WS-LT meeting: KION from Italy and the University of Oslo’s USIT, both members of RS3G, the Rome Student Systems and Standards Group, an association of software providers. They are very welcome and appropriate partners with the WS-LT.

Which brings me back to the question, where are the other (real) customers? We could ask the same thing of IST/43, and of ISO SC36. Which directly interested parties might pay? Perhaps a good place to start the analysis is to divide the candidates roughly between private and public sectors.

My guess here is that private sector led standardisation works best in the classic kinds of situation. What would be the point of a manufacturer developing their own range of electrical plugs and sockets? Even with telephones, there are huge advantages in having a system where everyone can dial everyone else, and indeed where all handsets work everywhere (well, nearly…). But the systems we are working with are not in that situation. There are reasons for these vendors to want to try their own new non-standard things. And much of what we do leads, more than follows, implementation. That ground sometimes seems a bit shaky.

Private sector interest in skills and competence is focused in the general areas of personnel, recruitment, HR, and training. Perhaps, for many businesses, the issues are not seen as complex enough to merit the involvement of standards.

So what are the real benefits that we see from learning technology standardisation, and put across to our customers? Surely these include better, more effective as well as efficient education; in the area of skills and competence, easier transition between education and work; and tools to help with professional and vocational development. These relate to classic areas of direct interest from government, because all governments want a highly skilled, competent, professional work force, able to “compete” in the global(ised) economy, and to upskill themselves as needed. The foundations of these goals are laid in traditional education, but they go a long way beyond the responsibilities of schools, HEIs, and traditional government departments of education. Confirmation of the blurring of boundaries comes from recalling that the EC’s ICTSWP is sponsored not by DG Education and Culture, but by DG Enterprise and Industry.

My conclusion? Government departments need our help in seeing the relevance of learning technology standardisation, across traditional departmental boundaries. This is not a new message. What I am adding to it is that I think national government departments and their agencies are our stakeholders, indeed our customers, and that we need to be encouraging them to come along to the WS-LT. We need to persuade them that different countries do share an interest in learning technology standardisation. This would best happen alongside their better involvement in national standards bodies, which is another story, another hill to climb…

ICT Skills http://blogs.cetis.org.uk/asimong/2011/12/13/ict-skills/ http://blogs.cetis.org.uk/asimong/2011/12/13/ict-skills/#comments Tue, 13 Dec 2011 11:12:58 +0000 http://blogs.cetis.org.uk/asimong/?p=948 Several of us in CETIS have been to the CEN Workshop Learning Technologies (WS-LT), but as far as I know none yet to a closely related Workshop on ICT Skills. Their main claim to fame is the European e-Competence Framework (e-CF), a simpler alternative to SFIA (developed by the BCS and partners). It was interesting on several counts, and raises some questions we could all give an opinion on.

The meeting was on 2011-12-12 at the CEN meeting rooms in Brussels. I was there on two counts: first as a CETIS and BSI member of CEN WS-LT and TC 353, and second as the team leader of InLOC, which has the e-CF mentioned in its terms of reference. There was a good attendance of 35 people, just a few of whom I had met before. Some members are ICT employers, but more are either self-employed or from various organisations with an interest in ICT skills, and in particular, CEPIS (not to be confused with CETIS!) of which the BCS is a member. A surprising number of Workshop members are Irish, including the chair, Dudley Dolan.

The WS-LT and TC353 think a closer relationship with the WS ICT Skills would be of mutual benefit, and I personally agree. ICT skills are a vital component of just about any HE skills programme, essential as they are for the great majority of graduate jobs. As well as the e-CF, which is to do with competences used in ICT professions, the WS ICT Skills have recently started a project to agree a framework of key skills for ICT users. So for the WS-LT there is an easy starting point for which we can offer to apply various generic approaches to modelling and interoperability. The strengths of the two workshops are complementary: the WS-LT is strong in the breadth of generalities about metadata, theory, interoperability; the WS ICT Skills is strong in depth, about practice in the field of ICT.

The meeting revealed that the two workshops share several concerns. Both need to manage their CWAs, withdrawing outdated ones; both are concerned about the length and occasional opaqueness of the procedure to fund standardisation expert team work. Both are concerned with the availability and findability of their CWAs. André Richier is interested in both Workshops, though more involved in the WS ICT Skills. Both are concerned, in their own different ways, with the move through education and into employment. Both are concerned with creating CWAs and ENs (European “Norm” Standards), though the WS-LT is further ahead on this front, having prompted the formation of CEN TC353 a few years ago, to deal with the EN business. The WS ICT Skills doesn’t have a TC, and it is discussing whether to attempt ENs without a TC, or to start their own TC, or to make use of the existing TC353.

On the other hand, the WS ICT Skills seems to be ahead in terms of membership involvement. They charge money for voting membership, and draw in big business interest, as well as small. Would the WS-LT (counterintuitively perhaps) draw in a larger membership if it charged fees?

I was lucky to have a chance (in a very full agenda) to introduce the WS-LT and the InLOC project. I mentioned some of the points above, and pointed out how relevant InLOC is to ICT skills, with many links, including shared experts. While understanding is still being built up between the two workshops, it was worth stressing that nothing in InLOC is sector-specific; that we will not be developing any learning outcome or competence content; and that, far from being in any way competitive, we are perfectly set up for collaboration with the WS ICT Skills and the e-CF.

Work on e-CF version 3 is expected to be approved very soon, and there is a great opportunity there to try to ensure that the InLOC structures are suited to representing the e-CF, and that any useful insights from InLOC are worked into the e-CF. The e-CF work is ably led by Jutta Breyer, who runs her own consultancy. Another project of great interest to InLOC is their work on “end user” ICT skills (the e-CF deals with professional competences), led by Neil Farren of the ECDL Foundation. The term “end user” caused some comment and will probably not feature in the final outputs of this project! Their project is a mere month or so ahead of InLOC in time. In particular, they envisage developing some kind of “framework shell”, and to me it is vital that this coordinates well with the InLOC outputs, with InLOC as the generalisation and the shell as a specialisation of it.

Another interesting piece of work is looking at ICT job profiles. The question of how a job profile relates to competence definitions is something that needs clarifying and documenting within the InLOC guidelines, and again, the closer we can coordinate this, the better for both of us.

Finally, should there be an EN for the e-CF? It is a tricky question. Sector Skills Councils in the UK find it hard enough to write National Occupational Standards for the UK – would it be possible to reach agreement across Europe? What would it mean for SFIA? If SFIA saw it as a threat, it would be likely to weigh in strongly against such a move. Instead, would it be possible to persuade SFIA to accept a suitably adapted e-CF as a kind of SFIA “Lite”? Some of us believe that would help, rather than conflict with, SFIA itself. Or could there be an EN, not rigidly standardising the descriptions of “e-Competences”, but rather giving an indication of how such frameworks should be expressed, with guidelines on ICT skills and competences in particular?

Here, above all, there is room for detailed discussion between the Workshops, and between InLOC and the ongoing ICT Skills Workshop teams, to achieve something that is really credible, coherent and useful to interested stakeholders.

Grasping the future http://blogs.cetis.org.uk/asimong/2011/06/24/grasping-the-future/ http://blogs.cetis.org.uk/asimong/2011/06/24/grasping-the-future/#comments Fri, 24 Jun 2011 10:25:13 +0000 http://blogs.cetis.org.uk/asimong/?p=748 We had an IEC departmental meeting yesterday, with all kinds of interesting ideas being floated about how to move forwards. (For outsiders: the Institute for Educational Cybernetics is the department at Bolton that hosts CETIS). I’m now sure there is room to develop a new approach to technology dissemination that we could consider.

This idea didn’t quite make it into the main discussion yesterday, which is partly why I wanted to blog about it here. Coincidentally, this morning via LinkedIn I see an article from yesterday on TechCrunch about Oblong, which I can use to help explain.

Yesterday Scott was talking about doing lots of “cool” stuff (tools, books included) so that some of them have a chance to take off and be one of the next big things — most of them probably won’t, if we’re honest (like my book on Electronic Portfolios…). I was rather feebly trying to say that I can see a related gap that the IEC is in a very good position to bridge. Let me try to explain more clearly now.

When we have a good idea, part of what we have to come to terms with is that others often don’t get it straight away. If you think about it, this is pretty obvious — the insight depends on your current state of awareness, which you have spent quite some time building up. But then comes the real problem. It is much too easy to see the job of getting others to adopt your idea in terms of just persuading them. The wonderful presentation; the super-clear explanation; the appeal to how useful the thing is by referring to the amazing things that can be done: any of these may tempt us to believe it is the answer.

But, as anyone with teaching experience knows, it is often a much longer process. However wonderful calculus may be, you can’t persuade people who can’t yet do algebra properly, even with the most persuasive presentation in the world. They really can’t get it yet. But you can think in terms of progressive learning, through the stages of maths that have been worked on for centuries now. Similarly, there are many people you can’t just win over to, say, logic programming. In my direct recent experience, I could say the same about concept mapping, and in particular the diagrammatic conventions that underlie both that and RDF graphs, and indeed Topic Maps. A very similar story could be told of various technology specifications or standards. Take a look at RDFa, for instance, and the supposedly pragmatic decision by schema.org to adopt microdata in preference. “But you just have to understand it”, one might complain, “and you’ll see how much better it is!”

(Aside: to see how much better RDFa really is, see Manu Sporny’s blog.)

The vital and central point is that many technical people, I believe, misconceive the task. They see it in terms of presentational effort, whereas they would be much better off thinking of it in terms of learning and development.

We could hear echoes of Piaget here, perhaps. People have stages of their cognitive development. But I’m not a follower of Piaget (any more than of Marx) and I’m proposing not to follow any fixed scheme here. Rather, I’m saying that people — technical people in particular — if they are to maximise the chances of something they have created being adopted widely, need to look at the real potential adopters and create helpful models of what the relevant developmental stages are for those potential adopters, rather than for humanity in general.

And that brings me back to our potential role — the IEC’s role — here. We know about, we are in touch with, we incorporate several technical wizards and several far-sighted and innovative educators (and even a few who are both!) I think we can take on a mission to work out how to educate the innovators, the creators, the producers, about this task, this responsibility if you like, for working towards wider adoption. We could tell people about how important and useful it is, centrally, to plan out a sequence of stages, to motivate non-adopters towards adoption. Each stage needs to be graspable by, and motivating to, the audience. And it’s not necessarily only plain learning that needs to be mapped out, but individual stages of development (remembering the Piaget concept again), and that can take time.

Maybe this is part of the essence of the idea of “timing” of innovations. I’m saying that it’s not just good fortune, but some of it can be reasonably predicted, given a good model of people’s cognitive developmental stages, their experience, and the knowledge and skills they have accumulated. Just focusing on technology adoption, there could be a rich seam of research here, taking case studies of technology adoption, and working out why adoption happened, or not.

So back to the serendipitous example. Obviously adoption is greatly helped by well-placed articles (such as the one linked above) from reputable sources. But the article itself gives more clues. I quote:

“both Kramer and Underkoffler agree that consumer technologies like the Wii and the Kinect are perfect in helping to transition people over to these future concepts of computing.”

Then, a bit later:

“But first, Oblong knows they need to be able to bring relatively affordable products to market. And again, that’s what Mezzanine is all about. “Our goal here is to change how people work together,” Kramer explains in a slightly (but only slightly) less ambitious statement.”

So they are perfectly aware that getting people to adopt this new technology involves providing motivating experiences, and that if people can’t afford those experiences, they won’t have them. They are also aware of the distinction between the future aspirational goal, and the humbler steps that need to be taken to approach it.

So, it looks like some people — probably the people who are going to be successful in getting their things adopted — understand these points well. My experience suggests that many more don’t. I can certainly say I struggle to keep hold of the central points here, and am easily tempted away to variations of the simplistic “give them a bigger prod and they’ll understand” way of thinking. But surely, shouldn’t part of what we offer as education in educational technology (or indeed cybernetics) be to get a more truly useful set of ideas more firmly into people’s consciousness?

In the end, what I think I’m saying is that we need to help the current enthusiasts / experts / technology evangelists grasp the reality that, so often, the adoption process is limited by the stage of development of the potential adopters, and thus refocus their efforts towards formulating and envisioning respectful, plausible models of how their (no doubt) great innovations can be grasped and adopted: step by step in a future process perhaps, if not (the desired) all at once, now!
