Subject coding is changing from JACS3 to HECoS; here’s what’s different

From UCAS applications to HESA returns, and from league tables to the Academic Technology Approval Scheme, degree programmes and modules are classified by subject. JACS3 does that job now, but HECoS will do it in the future. Here are the main differences.

After many years of use, the Joint Academic Coding System (JACS) that’s pervasive in UK Higher Education data sets ran into some limits: it was running out of codes in some subject areas, and it was being used for many more purposes than it was originally designed to support. That’s why the Higher Education Data and Information Improvement Programme (HEDIIP) commissioned CETIS, in collaboration with APS and Aspire, to consult with the sector on a replacement for the vocabulary. The result of that work is the Higher Education Coding of Subjects (HECoS) vocabulary. HECoS has now reached the penultimate stage: a release candidate is out for consultation, as are proposals for the governance and adoption of the scheme. The whole vocabulary can be seen on our tematres development site, and reports on the development of HECoS, as well as the proposals for governance and adoption, are available from the consultation site. Here are the main differences between JACS3 and HECoS in a nutshell.

One flat list, no hierarchies, and no memorable codes

This is easily the biggest and most noticeable change. HECoS itself is just a list of terms without any implied or given groupings. That doesn’t mean groupings and hierarchies aren’t important; quite the contrary: different organisations have different uses for subject information, and that means they can group subjects differently. In a way, that follows on from what’s already happening with JACS3 in practice. The definition of which subjects constitute biological sciences, for example, already differs between JACS3, HEFCE and what a typical university is likely to be able to offer. Different drivers and different contexts lead these organisations to group subjects differently, and HECoS is designed to enable different groupings to exist side by side, whilst still sharing the same subject terms.

[Diagram: HECoS with many hierarchies]

A consequence of the approach is that the familiar JACS3 codes (“L3xx” is anything sociological, etc.) are no longer valid. From the perspective of HECoS, “sociolinguistics” therefore has no defined link with “sociology”, which is why the code for the former is “101016” – or a URI that encodes that number, such as http://hecos.hediip.ac.uk/terms/101016 – and the code for the latter is “100505”. For ease of navigation, however, HECoS will come with some common groupings: there is a “sociology group” that has both “sociolinguistics” and “sociology” in it. This is just to help people find terms; nodes like “sociology group” cannot be used to classify a degree programme or module.

Terms are based on demonstrated use, need and distinguishability

While JACS was reviewed periodically, it hasn’t always had formal acceptance criteria, either for the terms that were already in there or for newly proposed ones. HECoS does have a proposed set of criteria, which has already been applied in the development of the current draft. The criteria for the first cut were, in short:
  1. is the term in JACS3?
  2. is there evidence of use of the term in HESA data returns?
  3. is the term’s definition and scope sufficiently clear and comprehensive to allow classification?
  4. is the term reliably distinguishable from other terms?
The first criterion comes out of a recognition that JACS has imposed a structure and created its own reality over the years. That’s a good thing, and worth preserving for time series analysis reasons alone. The second criterion addresses an issue that has bedevilled JACS for a while: many terms were sound in theory, but barely or never used in practice. This creates confusion and often makes coding unreliable: what good is a term if it groups one degree programme in one institution? For that reason, we looked at whether a term covers at least two degree programmes in at least two institutions in HESA student data returns. The third criterion has to do with the way some JACS terms were defined: some were incomplete – e.g. “history by topic” without specifying what that topic was – or were not sufficiently complete to determine what was in or out. The final criterion of distinguishability is related to that: we examined the HESA returns for consistency of coding. If the spread of similar degree programmes over several terms indicated that people were struggling to distinguish between terms, we rearranged terms so that they follow the groupings that were obvious in the data as closely as possible. We have also started to test any such changes with sorting exercises, to ensure that people can indeed distinguish between four related terms.

A commonly administered change process

Just like JACS evolved over the years, so will HECoS. The difference is that we are proposing to regularise the change and allow it to follow a predictable path. The main mechanism for that would be a registry for new terms. The diagram below outlines how a new subject term can be entered for consideration for inclusion, and discovered by others.

[Diagram: new term process]

The proposed criteria for accepting a new term into HECoS proper are similar to the ones used for the first draft: a term has to be demonstrably in use, or fill a need, and be distinguishable by non-specialists. In each case, though, the HECoS governance body, which is designed to represent the whole sector, will have the ultimate say on which terms will be accepted or retired, and how often these changes will happen.
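To make the flat-list-plus-groupings idea concrete, here is a minimal sketch in Python. The two terms, their codes and the URI pattern are taken from the post above; the data structures, the second grouping and the function names are illustrative assumptions, not part of HECoS itself.

```python
# Sketch of HECoS-style subject coding: one flat vocabulary, with any
# number of groupings maintained separately from the terms themselves.
# Codes 100505 and 101016 are from the post; the rest is illustrative.

# The vocabulary proper: a flat mapping of code -> label, no hierarchy.
TERMS = {
    "100505": "sociology",
    "101016": "sociolinguistics",
}

def term_uri(code: str) -> str:
    """Encode a term code as a URI, following the pattern in the post."""
    return f"http://hecos.hediip.ac.uk/terms/{code}"

# Groupings live outside the vocabulary, so different organisations can
# group the same terms differently. A navigational node like "sociology
# group" cannot itself be used to classify a programme or module.
NAVIGATIONAL_GROUPING = {
    "sociology group": ["100505", "101016"],
}

# A hypothetical second organisation's grouping over the same terms.
ANOTHER_ORGS_GROUPING = {
    "language sciences": ["101016"],
    "social sciences": ["100505"],
}

if __name__ == "__main__":
    print(term_uri("101016"))  # http://hecos.hediip.ac.uk/terms/101016
    for group, codes in NAVIGATIONAL_GROUPING.items():
        print(group, "->", [TERMS[c] for c in codes])
```

The design point the sketch tries to show is that nothing in `TERMS` implies or depends on a hierarchy: both groupings can change, or coexist, without touching the codes that institutions put in their returns.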

When does a book become a web platform?

During last week’s CETIS conference I ran a session to assess how ebooks can function as an educational medium beyond the paper textbook.

After reminding ourselves that etextbooks are not yet as widespread as ebook novels, and that paper books generally are still most widely read, we examined what ebook features make a good educational experience.

Though many features could have been mentioned, the majority were still about the learning experience itself. Top of the bill: formative assessment at the end of a chapter. Whether online or offline, it needs to be interactive, and there need to be plenty of items readily available. Other notable features include a desire for contextualised discussion about a text: global discussion is good, but chat limited to the other learners on a course is better. A way of asking a teacher for clarification by highlighting text was another notable request.

Using standards to make assessment in e-textbooks scalable, engaging but robust

During last week’s EDUPUB workshop, I presented a demo of how an IMS QTI 2.1 question item could be embedded in an EPUB3 e-book in a way that is engaging, but also works across many e-book readers. Here’s the why and how.

One of the most immediately obvious differences between a regular book and an e-textbook is the inclusion of little quizzes at the end of a chapter that allow the learner to check their understanding of what they’ve just learned. Formative assessment matters in textbooks.
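The demo itself isn’t reproduced here, but a minimal QTI 2.1 multiple-choice item has a well-known shape. The sketch below builds one with Python’s standard library; the element structure follows the QTI 2.1 schema, while the question text, identifiers and the tiny checking function are invented for illustration. A real EPUB3 embedding involves more than this (packaging, a script-based player, fallbacks for readers without scripting), and real QTI items use declarative response processing rather than a hand-rolled check.

```python
# A sketch of the kind of QTI 2.1 item that can sit at the end of a
# chapter. assessmentItem, responseDeclaration, itemBody,
# choiceInteraction and simpleChoice are genuine QTI 2.1 elements;
# the content and the scoring shortcut below are illustrative.
import xml.etree.ElementTree as ET

QTI_NS = "{http://www.imsglobal.org/xsd/imsqti_v2p1}"

ITEM_XML = """\
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="chapterCheck1" title="Chapter check"
    adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
      baseType="identifier">
    <correctResponse><value>ChoiceB</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true"
        maxChoices="1">
      <prompt>Which specification defines this quiz item?</prompt>
      <simpleChoice identifier="ChoiceA">HTML5</simpleChoice>
      <simpleChoice identifier="ChoiceB">IMS QTI 2.1</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

def correct_response(item_xml: str) -> str:
    """Read the declared correct response identifier out of the item."""
    root = ET.fromstring(item_xml)
    value = root.find(f"{QTI_NS}responseDeclaration/"
                      f"{QTI_NS}correctResponse/{QTI_NS}value")
    return value.text

if __name__ == "__main__":
    learner_answer = "ChoiceB"  # what the reader's player reported back
    print("Correct!" if learner_answer == correct_response(ITEM_XML)
          else "Try again.")
```

Because the item is declarative XML rather than code, the same file can be rendered by an interactive player where the e-book reader supports scripting, and degrade to a static question elsewhere.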

Question and Test tools demonstrate interoperability

As the QTI 2.1 specification gets ready for final release, and new communities start picking it up, conforming tools demonstrated their interoperability at the JISC – CETIS 2012 conference.

The latest version of the world’s only open computer aided assessment interoperability specification, IMS’ QTI 2.1, has been in public beta for some time. That was time well spent, because it allowed groups from at least eight nations on four continents to apply it to their assessment tools and practices, surface shortcomings with the spec, and fix them.

Nine of these groups came together at the JISC – CETIS conference in Nottingham this year to test a range of QTI packages with their tools, ranging from the very simple to the increasingly specialised. In the event, only three interoperability bugs were uncovered in the tools, and those are being vigorously stamped on right now.

Where it gets more complex is who supports what part of the specification. The simplest profile, provisionally called CC QTI, was supported by all players and some editors in the Nottingham bash. Beyond that, it’s a matter of particular communities matching their needs to particular features of the specification.

In the US, the Accessible Portable Item Profile (APIP) group brings together major test and tool vendors who are building a profile for summative testing in schools. Their major requirement is the ability to finely adjust the presentation of questions to learners with diverse needs, which they have accomplished by building an extension to QTI 2.1. The material also works in QTI tools that haven’t been built explicitly for APIP yet.

A similar group has sprung up in the Netherlands, where the goal is to define all computer aided high stakes school testing in the country in QTI 2.1. That means that a fairly large infrastructure of authoring tools and players is being built at the moment. Since the testing material covers so many subjects and levels, there will be a series of profiles to cover them all.

An informal effort is also under way to define a numerate profile for higher education, which may yet be formalised. In practice, it already works in the tools made by the French MOCAH project and by the JISC Assessment and Feedback sponsored QTI-DI and Uniqurate projects.

For the rest of us, it’s likely that IMS will publish something very like the already proven CC QTI as the common core profile that comes with the specification.

More details about the tools that were demonstrated are available at the JISC – CETIS conference pages.

Approaches to building interoperability and their pros and cons

System A needs to talk to System B. Standards are the ideal to achieve that, but pragmatics often dictate otherwise. Let’s have a look at what approaches there are, and their pros and cons.

When I looked at the general area of interoperability a while ago, I observed that useful technology becomes ubiquitous and predictable enough over time for the interoperability problem to go away. The route to such commodification is largely down to which party – vendors, customers, domain representatives – is most powerful and what their interests are. That describes the process very nicely, but doesn’t help solve the problem of connecting stuff now.

So I thought I’d try to list what the choices are, and what their main pros and cons are:

A priori, global
Also known as de jure standardisation. Experts, user representatives and possibly vendor representatives get together to codify all or part of a service interface between systems that are emerging or don’t exist yet; this can concern the syntax, semantics or transport of data. Intended to facilitate the building of innovative systems.
Pros:

  • Has the potential to save a lot of money and time in systems development
  • Facilitates easy, cheap integration
  • Facilitates structured management of network over time

Cons:

  • Viability depends on the business model of all relevant vendors
  • Fairly unlikely to fit either actually available data or integration needs very well

A priori, local
i.e. some type of Service Oriented Architecture (SOA). Local experts design an architecture that codifies syntax, semantics and operations into services. Usually built into agents that connect to each other via an ESB.
Pros:

  • Can be tuned for locally available data and to meet local needs
  • Facilitates structured management of network over time
  • Speeds up changes in the network (relative to ad hoc, local)

Cons:

  • Requires major and continuous governance effort
  • Requires upfront investment
  • Integration of a new system still takes time and effort

Ad hoc, local
Custom integration of whatever is on an institution’s network by the institution’s experts in order to solve a pressing problem. Usually built on top of existing systems using whichever technology is to hand.
Pros:

  • Solves the problem of the problem owner fastest in the here and now
  • Results accurately reflect the data that is actually there, and the solutions that are really needed

Cons:

  • Non-transferable beyond local network
  • Needs to be redone every time something changes on the local network (considerable friction and cost for new integrations)
  • Can create hard to manage complexity

Ad hoc, global
Custom integration between two separate systems, done by one or both vendors. Usually built as a separate feature or piece of software on top of an existing system.
Pros:

  • Fast point-to-point integration
  • Reasonable to expect upgrades for future changes

Cons:

  • Depends on business relations between vendors
  • Increases vendor lock-in
  • Can create hard to manage complexity locally
  • May not meet all needs, particularly cross-system BI

Post hoc, global
Also known as standardisation, consortium style. Service provider and consumer vendors get together to codify a whole service interface between existing systems; syntax, semantics, transport. The resulting specs usually get built into systems.
Pros:

  • Facilitates easy, cheap integration
  • Facilitates structured management of network over time

Cons:

  • Takes a long time to start, and is slow to adapt
  • Depends on business model of all relevant vendors
  • Liable to fit either available data or integration needs poorly

Clearly, no approach offers instant nirvana, but it does make me wonder whether there are ways of combining approaches such that we can connect short term gain with long term goals. I suspect that if we could closely couple what we learn from ad hoc, local integration solutions to the design of post hoc, global solutions, we could improve both approaches.

Let me know if I missed anything!

ArchiMate modelling bash outcomes

What’s more effective than taking two days out to focus on a new practice with peers and experts?

Following the JISC’s FSD programme, an increasing number of UK universities have started to use the ArchiMate Enterprise Architecture modelling language. Some people have had an introduction to the language and its uses, others formal training in it, and others still have visited colleagues who were slightly further down the road. But there was a desire to take the practice further for everyone.

For that reason, Nathalie Czechowski of Coventry University took the initiative to invite anyone with an interest in ArchiMate modelling (not just UK HE), to come to Coventry for a concentrated two days together. The aims were:

1) Some agreed modelling principles

2) Some idea whether we’ll continue with an ArchiMate modeller group and have future events, and in what form

3) The models themselves

With regard to 1), work is now underway to codify some principles in a document, a metamodel and an example architecture. These principles are based on the existing Coventry University standards and the Twente University metamodel, and their primary aim is to facilitate good practice by enabling sharing of, and comparability between, models from different institutions.

With regard to 2), the feeling of the bash participants was that it was well worth sustaining the initiative and organising another bash in about six months’ time. The means of staying in touch in the meantime has yet to be established, but one will be found.

As to 3), a total of 15 models were made or tweaked and shared over the two days. Varying from some state of the art, generally applicable samples to rapidly developed models of real life processes in universities, they demonstrate the diversity of the participants and their concerns.

All models and the emerging community guidelines are available on the FSD PBS wiki.

Jan Casteels also blogged about the event on Enterprise Architect @ Work

And the Winner Is … The UK

I have spent this week at the IMS Learning Impact Conference in Long Beach, California. I’ve enjoyed the conference and sensed a remarkably fresh approach, amongst delegates and IMS alike, to standards and their role in educational technology. Overall, I’d suggest a strong re-affirmation that the direction of travel we have been following in CETIS is very much on course: lots of talk of openness, collaboration and learner-centred approaches (I’ll reflect on this in my next blog post). As is custom at this event, the final activity, before workshops and working group meetings, is the annual Learning Impact Awards. It was something akin to the British (music) invasion of the early 1960s, with the UK dominating the platinum awards across all categories. Winners included the BBC for their accessibility toolkit ASK, PebblePad, and the Nottingham Xerte online toolkit: three out of the four main awards went to the UK, with two of these being accessibility tools.

Clive 1st June 2007

It has been some time since I ‘blogged’

Since I last ‘put fingers to keyboard’ I have been boring folk with my view that over the last year the ‘centre of gravity’ of learning technology standards development has moved from HE to the schools and FE sectors. Driven by the e-Strategy and other government schools-based agendas such as ‘Every Child Matters’, Becta, MIAP and others have been obliged to deliver solutions to strictly imposed deadlines. The focus so far has been in two areas: joining social service systems up with school administration systems, and learning platforms for schools. Both have required standards-based developments: the Schools Interoperability Framework (SIF), underpinned by the adoption of a Unique Learner Number (ULN), has been adopted for the former, and Becta has produced specifications (all around standards and extendability) that suppliers of learning platforms have to satisfy. Additionally, the Qualifications and Curriculum Authority, in order to meet the requirements of the new vocational Specialised Diploma, will have to produce data standards by the autumn for course details and qualification achievements.

Obviously such developments will have an impact on the JISC communities. Firstly, there will be a concentration of minds around where standards are really needed (rather than merely ‘could be useful’), and there will be a requirement for JISC to focus on those areas of detail which could impede national projects if not attended to. Solutions to the problems of identity management and the scalability of SOA implementations are just two that need urgent attention.

So, to survive, JISC has to be sufficiently engaged in influencing, and engaging with, learning technology based solutions in the schools and tertiary sectors in order to anticipate those areas that need the efforts and expertise of our community.

So what have I been doing with my one day per week, in addition to boring my colleagues at management meetings with the above?

Well, I have been supporting Peter with ePortfolio development (around assessment) and, with the help of Nottingham University, finding out about Lifelong Learning Networks.