eTextBooks Europe

I went to a meeting for stakeholders interested in the eTernity (European textbook reusability networking and interoperability) initiative. The hope is that eTernity will be a project of the CEN Workshop on Learning Technologies with the objective of gathering requirements and proposing a framework to provide European input to ongoing work by ISO/IEC JTC 1/SC36, WG6 & WG4 on eTextBooks (which is currently based around Chinese and Korean specifications). Incidentally, as part of the ISO work there is a questionnaire asking for information that will be used to help decide what that standard should include. I would encourage anyone interested to fill it in.

The stakeholders present represented many perspectives from throughout Europe: publishers, publishing industry specification bodies (e.g. the IDPF, who own EPUB3, and DAISY), national bodies with some sort of remit for educational technology, and elearning specification and standardisation organisations. I gave a short presentation on the OER perspective.

Many issues were raised through the course of the day, including (in no particular order):

  • Interactive and multimedia content in eTextbooks
  • Accessibility of eTextbooks
  • eTextbooks shouldn’t be monolithic and immutable chunks of content; it should be possible to link directly to specific locations or to disaggregate the content
  • The lifecycle of an eTextbook. This goes beyond initial authoring and publishing
  • Quality assurance (of content and pedagogic approach)
  • Alignment with specific curricula
  • Personalization and adaptation to individual needs and requirements
  • The ability to describe the learning pathway embodied in an eTextbook, and vary either the content used on this pathway or to provide different pathways through the same content
  • The ability to describe a range of IPR and licensing arrangements for the whole eTextbook and for its specific components
  • The ability to interact with learning systems with data flowing in both directions

If you’re thinking that sounds like a list of the educational technology issues that we have been busy with for the last decade or two, then I would agree with you. Furthermore, there is a decade or two’s worth of educational technology specs and standards that address these issues. Of course not all of those specs and standards are necessarily the right ones for now, and there are others that have more traction within digital publishing. EPUB3 was well represented in the meeting (DITA is the other publishing standard mentioned in the eTernity documentation, but no one was at the meeting to talk about that) and it doesn’t seem impossible to meet the educational requirements outlined in the meeting within the general EPUB3 framework. The question is which issues should be prioritised and how should they be addressed.

Of course a technical standard is only an enabler: it doesn’t in itself make any change to teaching and learning; change will only happen if developers create tools and authors create resources that exploit the standard. For various reasons that hasn’t happened with some of the existing specs and standards. A technical standard can facilitate change, but there needs to be a will or a necessity to change in the first place. One thing that made me hopeful was a point made by Owen White of Pearson: he does not think of the business he is in as being centred on content creation and publishing but on education and learning, and that leads away from the view of eBooks as isolated, static aggregations.

For more information, keep an eye on the eTernity website.

The Human Computer: a diversion from normal CETIS work

Alan Turing, 1951. Source: Wikipedia

No, there’s no ‘Interaction’ missing in that title; this is about building a computer, or at least a small part of one, out of humans. The occasion was a birthday party that the department I work in, Computer Science at Heriot-Watt University, held to commemorate the centenary of Alan Turing’s birth. It was also the finale of a programming competition that the department set for sixth-formers, to create a simulation of a Turing Machine. So we had some of the most promising computer science pupils in the country attending.

As well as the balloons, cake and crisps, we had some party games, well, activities for our guests. They could have a go at the Turing test or at breaking Enigma codes, and my contribution was for them to be a small part of a computer: a 2-bit adder. The aim was to show that the innards of a computer processor are little more than a whole load of switches, and that it doesn’t matter much (at least to a mathematician like Turing) what those switches are. I hoped this would help show that computers are more than black boxes, and help add some context to what electronic computers were about when Turing was working. (And, yes, I do know that it was Shannon, not Turing, who developed the theory.)

So, it starts with a switch that can turn another switch on and off. Here’s a simulation of one which uses a transistor to do that. If you click on that link, a Java window should open showing a simple circuit. The input on the left is at a Low voltage, and the output is also Low. Click on the input to set it to High, and it will turn on the transistor, connecting the output to the high voltage source, so the output goes High. So by setting the input to high voltage (presumably by pressing a switch) you can set the output to high voltage. You’re allowed to be under-impressed at this stage. (Make sure you close any windows or browser tabs opened by that link; leaving them open might cause later examples not to work.)

Turing didn’t have access to transistors. At the time he worked, these switches were electromechanical relays: a physical spring-loaded switch closed by the magnetic attraction between a coil and a permanent magnet when a current ran through the coil. Later, vacuum tube valves were available to replace these but, much to Tommy Flowers’ chagrin, Turing wasn’t at all interested in that. For mathematicians the details of the switching mechanism are a distraction. By not caring, maybe not even knowing, about the physics of the switch, Turing was saved from worrying about a whole load of details that would have been out of date by the 1960s; as it is, his work is still relevant today. This illustrates my favourite feature of mathematics, which is that maths is the only subject where it is best not to know what you are talking about.

Back to this business of turning one voltage signal high or low by turning another high or low.

Two-transistor AND gate

That may be underwhelming, but put two of these next to each other and something interesting happens: the output will only be High if both the inputs are. In other words, the output is High if both input 1 AND input 2 are High. That’s mathematics: a simple logic calculation. You can try it out in the simulation. You can also try other arrangements that show an OR logic calculation and an XOR calculation, that is an exclusive OR: the output is High if one of input 1 or input 2 is High, but not both. We call these circuits logic gates. Remember to close all windows and browser tabs when going from one simulation to another.
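If you’d rather play with the gates without the circuit simulation, here is a minimal sketch in Python, with booleans standing in for the voltages (True = High, False = Low). The function names are just mine, chosen to match the gates described above.

```python
# Logic gates as tiny functions; True plays the part of a High voltage.

def AND(a, b):
    return a and b      # High only when both inputs are High

def OR(a, b):
    return a or b       # High when either input is High

def XOR(a, b):
    return a != b       # High when exactly one input is High, not both

# Print the truth table for all three gates.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Running it prints the four input combinations with each gate’s output alongside, which is exactly the behaviour you can check in the simulations.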

This is where we leave electronics and start using the audience. My colleague and I each had a flag, and we gave everyone in the audience a flag. We were the inputs; they had to be logic gates: they had to raise their flag if she AND I both raised ours, or if she OR I had a flag up, or if she or I, but not both of us, raised a flag (the XOR calculation).

The next trick was to show how these logic calculations relate to adding numbers together: A + B = S. First, of course, the numbers must be represented in binary, with low voltage/flag down equivalent to the digit 0 and high voltage/flag up equivalent to the digit 1. And we have to do the addition one digit at a time, starting from the units. Adding the first digit, the units, is easy enough: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0 with 1 to carry. Think of that as input 1 + input 2 = output, where the output can be either the digit for the sum or the digit to carry. For the sum, the output is 1 if either input 1 or input 2 is high, but not both, so S = input 1 XOR input 2; and we carry 1 if input 1 AND input 2 are both 1. The second and subsequent digits are harder, since we need to add the digit from each number and the carry, but it’s not too difficult.
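The units-digit rule above (sum = XOR, carry = AND) is what circuit designers call a half adder; here is a sketch of it, using Python’s bitwise operators on 0/1 values. The function name is my own choice.

```python
def half_adder(a, b):
    """Add two one-bit digits; return (sum_bit, carry_bit).

    sum is the XOR of the inputs, carry is the AND, exactly as in
    the flag-waving rule: 1+1 = 0 with 1 to carry.
    """
    return a ^ b, a & b   # ^ is XOR, & is AND on the bits 0 and 1

print(half_adder(1, 1))   # 1+1: sum 0, carry 1
```

Trying all four input pairs reproduces the little addition table in the text: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0 carry 1.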

We can use logic gates to do the calculation for each bit of the addition. The circuit looks like this:
You can hopefully see how bit one of the sum is the XOR of the inputs for bit one of the numbers A and B, and the carry into the calculation of the second bit is the AND of those inputs. Again there is a simulation you can try; you might need to stretch the Java window to see the whole circuit. Try 1 plus 1 (01+01 = 10, so set inputs A1 and B1 High, A2 and B2 Low, to give output S1 Low and output S2 High). And 2 + 2 (10 + 10).
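The whole 2-bit circuit can also be sketched in code, gate by gate. This is my own rendering of the logic, not the simulation’s: each line is one gate, with bit one handled by the XOR/AND pair and bit two needing the carry as a third input.

```python
def add_two_bit(a, b):
    """Add two numbers in the range 0..3 using only gate operations."""
    a1, a2 = a & 1, (a >> 1) & 1          # split A into its two bits
    b1, b2 = b & 1, (b >> 1) & 1          # split B into its two bits
    s1 = a1 ^ b1                          # XOR gate: sum bit one
    c1 = a1 & b1                          # AND gate: carry into bit two
    s2 = a2 ^ b2 ^ c1                     # bit two sums both inputs and the carry
    c2 = (a2 & b2) | (c1 & (a2 ^ b2))     # carry out of bit two
    return s1 | (s2 << 1) | (c2 << 2)     # reassemble the binary result

print(add_two_bit(1, 1))   # 01 + 01 = 10, i.e. 2: S1 Low, S2 High
print(add_two_bit(2, 2))   # 10 + 10 = 100, i.e. 4
```

The two printed examples are exactly the ones suggested for the simulation: 1 plus 1 and 2 plus 2.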

We implemented this circuit using our audience of flag-wavers. We put pupils on the front row to be the inputs, pupils on the next row to be gates 1-4, and so on, making sure that each one knew at whom they should be looking and what condition should be met for them to raise their flag. We ran this three times, and each time it worked brilliantly. OK, so we could only add numbers up to 3, which isn’t much computing power, but given another 35 people we could have done eight-bit addition. And I’m pretty sure that we could have managed flip-flops and registers, but we would need something like 10,000 pupils to build a processor equivalent to an 8086, so the logistics might be difficult.
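Scaling up from two bits is just a matter of chaining the same gate pattern along, one bit at a time, with each stage passing its carry to the next; this is what is usually called a ripple-carry adder. A sketch, generalising the circuit above to any width (the function name and parameters are mine):

```python
def ripple_add(a, b, n_bits):
    """Add two n-bit numbers by chaining one adder stage per bit.

    Each stage uses only the XOR/AND/OR gates the pupils played,
    and hands its carry on to the next stage.
    """
    carry = 0
    result = 0
    for i in range(n_bits):
        ai = (a >> i) & 1                        # bit i of A
        bi = (b >> i) & 1                        # bit i of B
        s = ai ^ bi ^ carry                      # sum bit for this stage
        carry = (ai & bi) | (carry & (ai ^ bi))  # carry into the next stage
        result |= s << i
    return result | (carry << n_bits)            # keep the final carry-out

print(ripple_add(170, 85, 8))   # eight-bit addition: 10101010 + 01010101
```

Each extra bit of width just repeats the same small group of gates, which is why another few rows of flag-wavers would have been enough for eight-bit addition.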

Text and Data Mining workshop, London 21 Oct 2011

There were two themes running through this workshop organised by the Strategic Content Alliance: technical potential and legal barriers. An important piece of background is the Hargreaves report.

The potential of text and data mining is probably well understood in technical circles, and was well articulated by John McNaught of NaCTeM. Briefly, the potential lies in the extraction of new knowledge from old, through the ability to surface implicit knowledge and show semantic relationships. This is something that could not be done by humans, not even crowds, because of the volume of information involved. Full text access is crucial: John cited a finding that only 7% of the subject information extracted from research papers was mentioned in the abstract. There was a strong emphasis, from for example Jeff Lynn of the Coalition for a Digital Economy and Philip Ditchfield of GSK, on the need for business and enterprise to be able to realise this potential if they were to remain competitive.

While these speakers touched on the legal barriers, it was Naomi Korn who gave them a full airing. They start in the process of publishing (or before), when publishers acquire copyright, or a licence to publish with enough restrictions to be equivalent. The problem is that the first step of text mining is to make a copy of the work in a suitable format. Even for works licensed under the most liberal open access licence academic authors are likely to use, CC-BY, this requires attribution. Naomi spoke of attribution stacking, a problem John had also mentioned: when a result is found by mining thousands of papers, do you have to attribute all of them? This sort of problem occurs at every step of the text mining process. In UK law there are no copyright exceptions that can apply: text mining is not covered by fair dealing (though it is covered by fair use in the US, and by similar exceptions in Norwegian and Japanese law, but nowhere else); and the exceptions for transient copies (such as those in a computer’s memory when reading online) only apply if the copy has no intrinsic value.

The Hargreaves report seeks to redress this situation. Copyright and other IP law is meant to promote innovation, not stifle it, and copyright is meant to cover creative expressions, not the sort of raw factual information that data mining processes. Ben White of the British Library suggested an extension of fair dealing to permit data mining of legally obtained publications. The important thing is that, as parliament acts on the Hargreaves review, people who understand text mining and care about legal issues make sure that any legislation is sufficient to allow innovation; otherwise innovators will have to move to jurisdictions like the US, Japan and Norway where the legal barriers are lower (I’ll call them ‘text havens’).

Thanks to JISC and the SCA for organising this event; there’s obviously plenty more for them to do.