Using standards to make assessment in e-textbooks scalable, engaging but robust
http://blogs.cetis.org.uk/wilbert/2013/11/06/using-standards-to-make-assessment-in-e-textbooks-scalable-engaging-but-robust/
Wed, 06 Nov 2013 16:30:22 +0000

During last week's EDUPUB workshop, I presented a demo of how an IMS QTI 2.1 question item can be embedded in an EPUB3 e-book in a way that is engaging, but also works across many e-book readers. Here's the why and how.

One of the most immediately obvious differences between a regular textbook and an e-textbook is the inclusion of little quizzes at the end of a chapter that allow the learner to check their understanding of what they've just learned. Formative assessment matters in textbooks.

When moving to electronic textbooks, there is a great opportunity to make that assessment more interactive, provide richer feedback, and connect the learning to a wider view of how a student is doing (i.e. learning analytics). The question is how to do that in a way that works across many e-reading devices and applications, on a scale that works for publishers.

[Image: QTI item in Adobe Editions]

Scalability is where interoperability standards like EPUB3, IMS Learning Tool Interoperability (LTI) and IMS Question and Test Interoperability (QTI) 2.1 come in. People use a large number of different software systems in the authoring, management, and playback of e-books. Connecting each of those to all the others with one-off custom integrations just gets too complex, too expensive and too brittle; that’s why an increasing number of publishers and software vendors agreed on the EPUB specification. As long as you implement that spec, solutions can scale across many e-book applications. The same goes for question and test material, where IMS QTI does the same job. LTI does that job for connecting VLEs to any online learning tool.

Which leaves the question of how to square the circle: make the assessment experience as engaging and effective as possible, while also working on devices with very different capabilities.

Fortunately, EPUB3 files can include a number of techniques that allow an author to adapt the content to the capability of the device it is being read on. I used those techniques to present the same QTI item in three different ways: as a static quiz, much like a printed book; as a simple interactive widget; and as a feedback-rich test run by an online assessment system inside the book. The latter option makes detailed analytics data available, and it should also make it possible to send a grade to a VLE automatically.

The how

[Image: QTI item in Apple iBooks]

For the static representation and the interactive widget, I relied on Steve Lay's rather brilliant transform from QTI XML to HTML5 (and back again), with some JavaScript to make the HTML5 interactive. By including this QTI HTML5 in the EPUB, you get all the advantages of standard QTI in a way that still works in a simple, offline reader such as Adobe Editions, as well as in more capable software such as Apple's iBooks.

For the most capable, online e-book readers such as Readium, the demo e-textbook connects to QTIWorks, an online QTI compliant assessment engine. It does that via IMS LTI 1.1, but in a somewhat unusual way: in LTI terms, the e-book behaves as a tool consumer. That is, like a VLE. Using a hash of an OAuth secret and key, it establishes a connection to QTIWorks, identifies the user, and retrieves the right quiz to show inside the e-book. A place to send the results of the quiz to is also provided, but I've not tested that yet. QTIWorks makes a detailed report available of exactly what the learner did with each item, which can be retrieved in a variety of machine readable formats.
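To make that launch a little more concrete, here is a minimal sketch of roughly what signing such an LTI 1.1 launch involves. The demo itself relies on JavaScript inside the book (see the OAuth JavaScript library credited in the acknowledgements); the Python below just shows the shape of the OAuth 1.0a signing step, with a made-up endpoint, key, secret and identifiers.

    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote, urlencode

    # Illustrative values only: in the demo, the key and secret ship inside the EPUB.
    LAUNCH_URL = "https://qtiworks.example.org/lti/launch"   # hypothetical endpoint
    CONSUMER_KEY = "demo-ebook-key"
    CONSUMER_SECRET = "demo-ebook-secret"

    def percent_encode(value):
        return quote(str(value), safe="~")

    def sign_lti_launch(url, key, secret, user_id, resource_link_id):
        params = {
            "lti_message_type": "basic-lti-launch-request",
            "lti_version": "LTI-1p0",
            "resource_link_id": resource_link_id,   # which quiz to show
            "user_id": user_id,                     # identifies the reader
            "oauth_consumer_key": key,
            "oauth_nonce": uuid.uuid4().hex,
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_timestamp": str(int(time.time())),
            "oauth_version": "1.0",
        }
        # OAuth 1.0a base string: method, URL, and the sorted, encoded parameters
        pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
        param_string = "&".join(f"{k}={v}" for k, v in pairs)
        base_string = "&".join(["POST", percent_encode(url), percent_encode(param_string)])
        signing_key = percent_encode(secret) + "&"   # no token secret in a launch
        digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
        params["oauth_signature"] = base64.b64encode(digest).decode()
        return params   # POST these as form fields to the launch URL

    if __name__ == "__main__":
        launch = sign_lti_launch(LAUNCH_URL, CONSUMER_KEY, CONSUMER_SECRET,
                                 "reader-123", "chapter-3-quiz")
        print(urlencode(launch))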

[Image: QTI item in Readium]

Because the secret and the key have to be included in the book, the LTI connection the book establishes is not as secure as an LTI connection from a proper VLE. For access to some formative assessment, that may be a price worth paying, though.

The demo EPUB3 uses both scripting and some metadata to determine which version of the QTI item to show. The QTI item, the LTI launch and the EPUB textbook are all valid according to their specifications, and rely on stock readers to work.

Acknowledgements and links

David McKain for making QTIWorks
Steve Lay for the QTI HTML transforms
John Kristian of the OAuth project for the OAuth javascript library
Stephen Vickers for the ceLTIc IMS LTI development tools

The (ugly, content-less) demonstration EPUB3 and associated code are available from GitHub.

Question and Test tools demonstrate interoperability
http://blogs.cetis.org.uk/wilbert/2012/03/16/question-and-test-tools-demonstrate-interoperability/
Fri, 16 Mar 2012 13:32:31 +0000

As the QTI 2.1 specification gets ready for final release, and new communities start picking it up, conforming tools demonstrated their interoperability at the JISC CETIS 2012 conference.

The latest version of the world's only open computer aided assessment interoperability specification, IMS QTI 2.1, has been in public beta for some time. That was time well spent, because it allowed groups from at least eight nations on four continents to apply it to their assessment tools and practices, surface shortcomings with the spec, and fix them.

Nine of these groups came together at the JISC CETIS conference in Nottingham this year to test a range of QTI packages, ranging from the very simple to the increasingly specialised, with their tools. In the event, only three interoperability bugs were uncovered in the tools, and those are being vigorously stamped on right now.

Where it gets more complex is who supports what part of the specification. The simplest profile, provisionally called CC QTI, was supported by all players and some editors in the Nottingham bash. Beyond that, it’s a matter of particular communities matching their needs to particular features of the specification.

In the US, the Accessible Portable Item Profile (APIP) group brings together major test and tool vendors that are building a profile for summative testing in schools. Their major requirement is the ability to finely adjust the presentation of questions to learners with diverse needs, which they have accomplished by building an extension to QTI 2.1. The material also works in QTI tools that haven't been built explicitly for APIP yet.

A similar group has sprung up in the Netherlands, where the goal is to define all computer aided high stakes school testing in the country in QTI 2.1. That means that a fairly large infrastructure of authoring tools and players is being built at the moment. Since the testing material covers so many subjects and levels, there will be a series of profiles to cover them all.

An informal effort has also sprung up to define a numerate profile for higher education, which may yet be formalised. In practice, it already works in the tools made by the French MOCAH project, and by the QTI-DI and Uniqurate projects sponsored by the JISC Assessment and Feedback programme.

For the rest of us, it’s likely that IMS will publish something very like the already proven CC QTI as the common core profile that comes with the specification.

More details about the tools that were demonstrated are available on the JISC CETIS conference pages.

On integrating Wave, Wookie and Twitter
http://blogs.cetis.org.uk/wilbert/2009/07/31/on-integrating-wave-wookie-and-twitter/
Thu, 30 Jul 2009 23:47:20 +0000

Stitching together the new 'realtime' collaborative platforms seems an obvious thing to be doing in theory, but throws up some interesting issues in practice.

Back when Wave was first demonstrated by Google, one of the robots (participants in a Wave that represent some machine or service) that got a few minutes in the limelight was Tweety the twitterbot. Tweety showed how updates to a Wave can be piped through to Twitter, and vice versa. Great, you can follow conversations in the one, without leaving the other.

[Image: Tweety the twitterbot pipes tweets into a Wave]

In the same vein, when Google unveiled Wave Gadgets (small applications that can be integrated into a Wave), their collaborative, multi-user, concurrent or realtime nature was so similar to Wookie that Scott Wilson converted them to run on Wookie rather than Wave. Now that Wave is slowly opening up, what could be more natural than seeing whether a bunch of people on a Wave could interact with a bunch of people in Moodle via a Wookie widget?

Technically, all this is possible, and I've seen it work, but it does require some thinking about how the contexts fit together.

For example, Tweety relays all tweets from all followers of one account to everyone in a Wave. Worse, it relays everyone's blips from that Wave as separate tweets via the one account. Quite apart from the oddity of seeing other people's contributions appear under your own name, the Wave conversation is just too noisy and fast to make any sense in a micro-blogging context. Likewise, all tweets from all followers of one person (live!) make for a lot of noise, when a worthwhile Wave conversation can only really be about a couple of linked tweets between a few followers.

The Wookie 'Natter' chat widget integration in Wave looked more like a natural fit. As integrations go, this was pretty shallow: Natter is displayed in Wave, but has no idea who is interacting with it there. Put differently, everyone in Wave is just one participant in Natter. This is potentially quite useful, as everyone else in Natter is in a wholly different environment, and therefore engaged in a different activity. They can't see what's going on in the linked Wave. Channelling contributions through one entity therefore makes sense; linked but separate.

[Image: The Wookie 'Natter' chat widget in Wave]

A deeper integration, where Wave participants show up properly in a Wookie widget much like Moodle or Elgg based participants do now, looks technically feasible and socially doable. Unlike Twitter and Wave, the basic user interaction models of Wookie and Wave look similar enough not to jar.

Other integrations are also conceivable: you could push updates on a Wookie widget as blips to a Wave via a robot (Tweety-style) and vice versa, but that looks like a lot of effort for not much gain over a widget approach.

Longer term, though, the best way to integrate realtime platforms seems to be via something like the Wave protocol. That way, people can pick and choose the environment and/or user interface that suits them, independent of the social context or network they’re interacting with. Fortunately, that’s what Google is aiming for with some more releases of open source client and server code and updates to the Wave protocol.

For a wider overview of Wave and its potential for teaching and learning, see my earlier post.

Blackboard pledges open standard support
http://blogs.cetis.org.uk/wilbert/2009/06/24/blackboard-pledges-open-standard-support/
Wed, 24 Jun 2009 00:39:28 +0000

Ray Henderson, President of Blackboard's Teaching and Learning division, formerly of Angel Learning, made a very public commitment to supporting standards yesterday.

Although even Ray admits that the final proof will be in the software if and when it arrives, the public statement alone is something that I genuinely thought would never happen. From its inception, Blackboard, and most of the rest of the closed source educational technology community, have followed a predictable US technology market path: to be the last competitor standing was the goal, and everyone would betray every stakeholder they had before they’d be betrayed by them. As with other applications of Game theory in the wild, though, there seems to be at least a suggestion that people are willing to cooperate, and break the logic of naked self-interest.

What’s on offer from Ray is, first and foremost, implementation of IMS’ Common Cartridge, followed by other IMS specifications such as Learning Tool Interoperability (LTI) and Learner Information Services (LIS). SCORM and the Schools Interoperability Framework (SIF) also get a mention.

On the CC front, the most interesting aspect by far is a pledge to support not just import of cartridges, but also export. In a letter to customers, Ray explicitly mentions content authored by faculty on the system, which suggests that it wouldn't just mean re-export of canned content. You'd almost think that this could be the end of the content "black hole".

Catches?

The one immediate catch is this:

creators of learning content and tools will of course still need to have formal partnerships (for example in our case participating in the Blackboard Building Blocks program or the Blackboard Content Provider network) with platform providers like us in order to connect their standards-compliant tool or content to eLearning platforms through supported interfaces.

This doesn't strike me as at all obvious, and the given reasons (to ensure stability and accountability) are not entirely convincing. That customers are on their own if they wish to connect a random tool that claims to exercise most of IMS LTI 2.0, I can understand. But I don't quite understand why a formal relationship is required to upload some content, nor how that would work for content authors who don't normally enter into such formal relations with vendors. It's also not easy to see how such a business requirement would be enforced without breaking the standard.

The other potential catch is that Blackboard’s political heft, combined with its platform’s technical heft, means that the standards that it wants to lead on end up with high barriers to entry. That is, interfaces that are easy to add to Blackboard, may not be so easy to add to anything else. And given Blackboard’s market position, it’s their preferences that might well trump others.

Still, the very public commitment is to the open standards, and the promise is that the code will vindicate that commitment. Even a partial return on that promise will make a big difference to interoperability in the classic VLE area.

Google Wave and teaching & learning
http://blogs.cetis.org.uk/wilbert/2009/05/29/google-wave-and-teaching-learning/
Fri, 29 May 2009 13:47:54 +0000

The announcement of Google's new Wave technology seems to be causing equal parts excitement and bafflement. For education, it's worth getting through the bafflement, because the potential is quite exciting.

What is Google Wave?

There are many aspects, and the combination of features is rather innovative, so a degree of blind-people-describing-an-elephant will probably persist. For me, though, Google Wave exists on two levels: one is as a particular social networking tool, not unlike Facebook, Twitter etc. The other is as a whole new technology, on the same level as email, instant messaging or the web itself.

As a social networking tool, Wave’s brain, erm, ‘wave’ is that it focusses on the conversation as the most important organising principle. Unlike most existing social software, communication is not between everyone on your friends/buddies/followers list, but between everyone invited to a particular conversation. That sounds like good old email, but unlike email, a wave is a constantly updated, living document. You can invite new people to it, watch them add stuff as they type, and replay the whole conversation from the beginning.

As a new technology, then, Google Wave turns every conversation (or 'wave' in Google speak) into a live object on the internet, to which you can invite people and other machine services ('robots'). The wave need not be textual; you can also collaborate on resources or interact with simple tools ('gadgets'). Between them, gadgets and robots allow developers to bring all kinds of information and functionality into the conversation.

The fact that waves are live objects on the internet points to the potential depth of the new technology. Where email is all about stored messages, and the web about linked resources, Wave is about collaborative events. As such, it builds on the shift to a ‘realtime stream’ approach to social interaction that is being brought about by twitter in particular.

The really exciting bit about Wave, though, is the promise that – like email and the web, and unlike most social network tools – anyone can play. It doesn’t rely on a single organisation; anyone with access to a server should be able to set up a wave instance, and communicate with other wave instances. The wave interoperability specifications look open, and the code that Google uses will be open sourced too.

Why does Wave matter for teaching and learning?

A lot of educational technology centres around activity and resource management. If you take a social constructivist approach to learning, the activity type that’s most interesting is likely to be group collaboration, and the most interesting resources are those that can be constructed, annotated or modified collaboratively.

A technology like Google Wave has the potential to impact this area significantly, because it is built around the idea of real time document collaboration as the fundamental organising concept. More than that, it allows the participants to determine who is involved with any particular learning activity; it's not limited to those who have been signed up for a whole course, or even to those who were involved in earlier stages of the collaboration. In that sense, Google Wave strongly resembles pioneering collaborative, participant-run, activity focussed VLEs such as Colloquia (disclosure: my colleagues built Colloquia).

In order to allow learning activities to become independent of a given VLE or web application, and in order to bring new functionality to such web applications, widgets have become a strong trend in educational technology. Unlike all these educational widget platforms (bar one: wookieserver), however, Wave's widgets are realtime, multi-user and therefore collaborative (disclosure: my colleagues are building wookieserver).

That also points to the learning design aspect of Wave. Like IMS Learning Design tools (or LD inspired tools such as LAMS), Wave takes the collaborative activity as the central concept. Some concepts, therefore, map straight across: a Unit of Learning is a Wave, an Act a Wavelet, and there are resources, services and more. The main thing that Wave seems to be missing natively is the concept of role, though it looks like you can define roles specifically for a wave and any gadgets and robots running on them.

In short, with a couple of extensions to integrate learning specific gadgets, and interact with institutional systems, Waves could be a powerful pedagogic tool.

But isn’t Google evil?

Well, like other big corporations, Google has done some less than friendly things, particularly in markets where it dominates. Social networking, though, isn't one of those markets, and therefore, like all companies that need to catch up, it needs to play nice and open.

There might still be some devils in the details, and there’s an awful lot that’s still not clear. But it does seem that Google is treating this as a rising platform/wave that will float all boats. Much as they do with the general web.

Will Wave roll?

I don’t think anyone knows. But the signs look promising: it synthesises a number of things that are happening anyway, particularly the trend towards the realtime stream. As with new technology platforms such as BBSs and the web in the past, we seem to be heading towards the end of a phase of rapid innovation and fragmentation in the social software field. Something like Wave could standardise it, and provide a stable platform for other cool stuff to happen on top of it.

It could well be that Google Wave will not be that catalyst. It certainly seems announced very early in the game, with lots of loose ends, and a user interface that looks fairly unattractive. The concept behind it is also a big conceptual leap that could be too far ahead of its time. But I’m sure something very much like Wave will take hold eventually.

Resources:

Google’s Wave site

Wave developer API guide. This is easily the clearest introduction to Wave's concepts: short and not especially technical.

Very comprehensive article on the ins and outs at Techcrunch

IMS QTI and the economics of interoperability
http://blogs.cetis.org.uk/wilbert/2009/04/07/ims-qti-and-the-economics-of-interoperability/
Tue, 07 Apr 2009 00:35:47 +0000

In the twelve years of its existence, an awful lot has been learned about interoperability by IMS staff and members. This is nowhere more apparent than in the most quintessentially educational of interoperability standards: question and test items (QTI). A recent public spat about the IMS QTI specification provides an interesting contrast between two emerging views of how to achieve interoperability. Fortunately for QTI, they're not incompatible with each other.

Under the old regime, the way interoperability was achieved was by establishing consensus among the largest possible number of stakeholders, creating a spec, publishing it, and waiting for the implementations to follow. With the benefit of hindsight, it's fair to say that the results have been mixed.

Some IMS specs got almost no implementation at all, some galvanised a lot of development but didn’t reach production use, and some were made to work for particular communities by their particular communities. On the whole, many proved remarkably flexible in use, and of sound technical design.

Trouble was, more often than not, two implementations of the same IMS spec were not able to exchange data. To understand why, the QTI spec is illustrative, but not unique.

For a question and test spec to be useful to most communities, and for several of these communities to be able to share data or tools, a reasonably wide range of types needs to be supported. QuestionMark (probably the market leader in the sector) uses the wide range of question types that its product supports as a key differentiator. Likewise, though IMS QTI 2.1 is very expressive, a lot of practitioners in the CETIS Assessment SIG frequently discuss extensions to ensure that the specification meets their needs.

The upshot is that QTI 2.1 is implementable, as a fair old list of tools on Wikipedia demonstrates, but implementing all of it isn't trivial. This could be argued to be one reason why it is not in wider use, though the other reason might well be that QTI 2.1 was never released as a final specification, and now is no longer accessible to non-IMS members.

To see how to get out of this status quo, the economics of standard implementation need to be considered. From a vendor's point of view (open or closed source), implementing any interoperability spec represents a cost. The more complex and flexible the specification, the higher that cost is. This is not necessarily a problem, as long as the benefit is commensurate. If either the market is large enough, or else the perceived value of the spec high enough for the intended customers to be willing to pay more, the specification will be economically viable.

Broadly two models of interoperability can be used to figure out a way to make a spec economically viable, and which you go for largely depends on your assumptions about the technical architecture of the solution.

One model assumes that all implementations of a spec like QTI are symmetrical and relatively numerous. Numerous as in certainly more than two or three, and possibly double digits or more, and systems as in VLEs. With that assumption, the QTI situation needs clear adjustment. The VLE market is not that large to begin with, and is fairly commoditised. There is little room for investment, and there has not been a demonstrated willingness to pay for extended interoperable question and test features.

From the symmetrical perspective, then, the only way forward is to simplify the spec down to a level that the market will bear, which is to say, very simple indeed. Since, as we’re already seeing with the QTI 1.2 profile in Common Cartridge, it is not possible to satisfy all communities with the same small set of question and test items, there will almost certainly need to be multiple small profiles.

There are several problems with such an approach. For one, reducing the feature set to lower the cost also reduces the value of the spec to the end user; beyond a certain minimum it might be almost useless. Balkanising the spec's space into several incompatible subsets is likely to exacerbate this; not just for end-users, but also for tool and content vendors.

What’s worse, though, is that the underlying assumption is wrong. Symmetrical interoperability doesn’t work. To my knowledge, and I’d love to be corrected, there are no significant examples of an interoperability spec that has significant numbers of independent implementations that happily export and import each others’ data. The task of coordinating the crucial details of the interpretation of data is just too onerous once the number of data sources and targets that a piece of software has to deal with gets into the double digits.

[Diagram: Symmetrical, many-to-many interoperability; 8 systems, 56 connections that need to work]

Within the e-learning world, SCORM 1.2 (and compatible IMS Content Packages) came closest to the symmetric, many-to-many ideal, but only because the spec was very simple, the volume of the market large, compliance often mandated and calculated into Requests For Proposals (RFPs), and vendors were prepared to coordinate their implementations in numerous plugfests and codebashes as a consequence. Also, ADL invested a lot of money in continuous implementation support. Even then, plenty of issues remained, and, crucially, most implementations were not symmetrical: they imported only. Once the complexity of the SCORM increased significantly with the adoption of Simple Sequencing in SCORM 2004, the many-to-many interoperability model broke down.

Instead, the emergence of solutions like Icodeon’s SCORM 2004 plug-in for VLEs brought the spec back to the norm: asymmetrical interoperability. Under this assumption, there will only ever be a handful of importing systems at most, but a limitless number of data sources. It’s how HTML works on the web: uncountable sources that need to target only about four codebases (Internet Explorer, Mozilla, WebKit, Opera), one of which dominates to such an extent that the others need to emulate its behaviour. Same with JPEG picture rendering libraries, BIND implementations and more. In educational technology it is how Simple Sequencing and SCORM 2004 got traction, and it is starting to look as if it will be the way most people will see IMS Common Cartridge too.

Under this assumption, implementing a rich QTI profile in two or three plug-ins or web services becomes economically much more viable. Not only is the amount of required testing much reduced, the effective cost of implementation is spread out over many more systems. VLE vendors can offer the feature for much less, because the total market has effectively paid for just two or three best-of-breed implementations rather than tens of mediocre ones.

[Diagram: Asymmetrical, many-to-few interoperability; 8 source systems, 2 consuming systems, 16 connections that need to work]
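The arithmetic behind the two diagrams is worth spelling out, because it is what drives the economics. A minimal sketch, counting each direction of exchange as a separate connection:

    def symmetric_connections(systems):
        # every system must both export to and import from every other one
        return systems * (systems - 1)

    def asymmetric_connections(sources, consumers):
        # sources only export, consumers only import
        return sources * consumers

    print(symmetric_connections(8))        # 56 connections that all need to work
    print(asymmetric_connections(8, 2))    # 16 connections that all need to work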

This is not a theoretical example. Existing rich QTI 2.1 implementations make the asymmetric interoperability assumption. In Korea, KERIS (Korea Education and Research Information Service) is coordinating the development of three commercial implementations of the rendering and test side of QTI, but many specialised authoring tools are envisaged. Likewise, in the UK, two full implementations of the rendering and test management side of QTI exist, but many subject specific authoring tools are envisaged. All existing renderers can be used as a web application, and QTIEngine is also explicitly designed to work as a local plug-in or web service that can be embedded in various VLEs.

That also points to various business models that asymmetric interoperability enables. VLE vendors can focus on the social networking core, and leave the activity specific tools to the specialists with the right expertise. Alternatively, vendors can band together and jointly develop or adopt an open source code library, like the Japanese companies that implemented Simple Sequencing under ALIC auspices, back in the day.

Even if people still want to persist with symmetrical interoperability, designing the specification to accommodate both assumptions is not a problem. All that’s required is one rich profile for the many-to-few, asymmetric assumption, and a very small one for the many-to-many, symmetric assumption. Let’s hope we get both.

Resources

A brief overview of the current QTI 2.1 discussion

Wikipedia’s QTI page, which contains a list of implementations

More on the KERIS QTI 2.1 tools

The QTIEngine demo site

An interview with Kyoshi Nakabayashi, formerly of ALIC, about joint Simple Sequencing implementation work

A test in learning widgets
http://blogs.cetis.org.uk/wilbert/2008/11/24/a-test-in-learning-widgets/
Mon, 24 Nov 2008 16:55:28 +0000

In order to explore how widgets can work in teaching and learning practice, I've been blue-petering a one-off formative assessment widget. That little exercise uncovers a couple of interesting issues to do with usability, security and pedagogy.

Recipe

Ingredients:

  • Google docs account
  • Sproutbuilder.com account
  • Mediawiki account with friendly administrator
  • Moodle installation with administrator access
  • (optional) iGoogle account, plain html site, Apple Dashboard, Windows Vista Sidebar, etc.

Take the Google docs account, and create a new spreadsheet. Rustle up a form from the toolbar; you have a choice between simple text, paragraph text, multiple choice, checkboxes, choose from a list and a scale. The questions will be added as columns to the first sheet. To calculate marks from the returns, use sheet 2 (the answers from the form will brutally overwrite anything on sheet 1). Refer to the cells in sheet 1 from the formulae in sheet 2. Finish with inserting widgets or charts that sum up your calculations. Send out the form via email, and let the sheet stew.
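To make the sheet 2 step concrete with a purely illustrative formula: assuming the first sheet is still called Sheet1, the answer to question one lands in column B, and 'Paris' happens to be the right answer, then a cell containing =IF(Sheet1!B2="Paris", 1, 0) awards a mark for a correct response, and summing a row of such cells gives each respondent's total.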

[Image: Google widget]

In the meantime, soften up your mediawiki instance by asking your friendly administrator to allow img and object tags in pages. Make sure that the wiki isn't very public, or you might get burned. To embed the form, go to sproutbuilder.com and create a widget with the Google form as content. Publish the sprout, copy the object tag code, and trim off the single pixel image code on the end. Stick the object tag in the wiki, et voilà.

To display the results, go to the chart in the google spreadsheet you prepared earlier, and publish it. Stir the resulting img tag you will be presented with into the wiki, and enjoy.

The outline for a Moodle instance is fairly similar, but allows greater freedom in what gets mixed in where, provided you have administrator privileges. The form widget, for example, can be called directly in the iframe Google provides, which can be put into a Moodle HTML block. Likewise, results gadgets can be stuck in a Moodle HTML block as the JavaScript concoction Google dispenses.

Usability and security

As the recipe indicates, the deployment of widgets could be much easier. Getting rid of the copying and pasting of gnarly bits of code is only a minor aspect of this issue; security is the much bigger aspect. The hacking of the mediawiki instance is a tad questionable from that point of view, even if the wiki isn’t open to all miscreants on the web. There is a mediawiki extension that takes care of the trusting and embedding of widgets, but it doesn’t look particularly easy to use.

Much the same goes for deploying widgets in a Moodle instance, even if Moodle’s more fine-grained controls over who has which privileges over what, makes things rather easier. I had a quick look at a web platform like Facebook, and couldn’t find a way in there for my gadgets at all. A lame list of my Google documents was as far as it went.

What’s required here, IMHO, is plug-ins to these web apps that allow administrators to set trusted domains of origin for widgets. That way, regular users can stick in homebrew and pre-packaged widgets from these trusted domains into their favourite platforms without stuffing up security.

When inserting the assessment widget into the VLE, it also struck me that it wasn't offering a whole lot more functionality than was already there in Moodle. I can imagine that using the Moodle question forms is easier for some people than wrangling spreadsheet formulae, too. Still, there are important advantages to using a web app like Google docs or Zoho. Most importantly, they allow learners and teachers to bring their favourite tools into a common environment. But that does presuppose that deploying the various channels in and out of the docs web app is easy.

Pedagogy

As is usual with a newly found hammer, you start looking for nails that may or may not be there. My first effort (Gadgets and mashups 101, log in as guest) therefore resulted in the shiniest widgets piled on top of each other. There is no indication of the right answer on submission, though, and the fact that everyone's scores are plain for all to see may not have been the best way to approach the learning activity.

The second effort was better, I feel (Gadgets and mashups 102, log in as guest). In this version, there is at least some feedback on submission, and an indication of which questions people struggled with, on average. Even so, using email or a link directly to the form on Google may be better still, with just the average score on each question as a gadget, and a list of respondents just for the teacher.

The spreadsheet can be viewed on Google docs.

You can see what the widgets look like in mediawiki on the CETIS wiki.

Semantic tech finds its niches and gets productive
http://blogs.cetis.org.uk/wilbert/2008/06/03/semantic-tech-finds-its-niches-and-gets-productive/
Tue, 03 Jun 2008 12:10:36 +0000

Rather than the computer science foundation, the annual semantic technology conference in San Jose focusses on the practical applications of the approach. Visitor numbers are growing at a 'double digit' clip, and vendors are starting to include big names such as Oracle. We take a look.

It seems that we're through the trough of disillusionment about the fact that the semantic web as outlined by Tim Berners-Lee in 1999 has not exactly materialised (yet). It's not news that we do not all have intelligent agents that can seek out all kinds of data on the 'net, and integrate it to satisfy our specific queries and desires. What we do have is a couple of interesting and productive approaches to the mixing and matching of disparate information that hint at a slope of enlightenment, heading to a plateau of productivity.

Easily the clearest example from the conference is the case of Blue Shield of California, a sizeable health care insurer in the US. They faced the familiar issue of a pile of legacy applications with custom connections, that were required to do things they were never designed to do. As a matter of fact, customer and policy data (core data, clearly) were spread over two systems of different vintage, making a unified view very difficult.

In contrast to what you might expect, the solution they built leaves the data in the existing databases; nothing is replicated in a separate store. Instead, the integration takes place in a 'semantic layer'. In essence, that means that users can ask pretty complex and detailed questions of any information that is held in any system, in terms of a set of models of the business. These queries end up at the same old systems, where they get mapped from semantic web query form into good old SQL database queries.

This kind of approach doesn't look all that different from the Enterprise Service Bus (ESB) in principle, but it takes the insulation of service consumers from the details of service providers rather further. Service consumers in a semantic layer have just one API for any kind of data (the W3C's SPARQL query language) and one datamodel (RDF, though output in XML or JSON is common). Most importantly, the meaning of the data is modelled in a set of ontologies, not in the documentation of the service providers, or the heads of their developers.

While the Blue Shield of California case was done by BBN, other vendors that exhibited in San Jose have similar approaches, often built on open source components. The most eye catching of those components (and also used by BBN) is netkernel: the overachieving offspring of the web and unix. It’s not necessarily semantic tech per se, but more of a language agnostic application environment that competes with J2EE.

Away from the enterprise, and more towards the webby (and educational!) end of things, the state of semantic technology becomes less clear. There are big web apps such as the Twine social network, where the triples are working very much in the background, or Powerset, where they are much more in your face, but to little apparent effect.

Much less polished, but much, much more interesting, is dbpedia.org: an attempt to put all public domain knowledge in a triple store. Yes, that includes Wikipedia, but also the US census and much more. DBpedia is accessible via a number of interfaces, including SPARQL. The result is the closest thing yet to a live instance of the semantic web as originally conceived, where it really is possible to ask questions like "give me all football players with number 11 shirts from clubs with stadiums with more than 40000 seats, who were born in a country with more than 10M inhabitants". Because of the inherent flexibility of a triple store and the power of the SPARQL interface, dbpedia could end up powering all kinds of other web applications and user interfaces.
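For a flavour of what that looks like in practice, here is a minimal sketch of querying the public dbpedia SPARQL endpoint from Python with the SPARQLWrapper library. The query is in the spirit of the football example above, but the dbo: class and property names are illustrative rather than checked against the current dbpedia ontology, so treat it as a pattern rather than a recipe.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    # Illustrative query: the class and property names are assumptions.
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?player ?club WHERE {
            ?player a dbo:SoccerPlayer ;
                    dbo:team ?club .
            ?club dbo:ground ?stadium .
            ?stadium dbo:seatingCapacity ?seats .
            FILTER (?seats > 40000)
        }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["player"]["value"], row["club"]["value"])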

Nearer to a real semantic web, though, is Yahoo's well publicised move to start supporting relevant standards. While the effect isn't yet as obvious as semantic integration in the enterprise or dbpedia, it could end up being significant, for the simple reason that it focusses on the organisational model. It does that by processing data in various 'semantic web light' formats that are embedded in webpages, and using it in the structuring and presentation of search results. If you want to present a nice set of handles on your site's content in a Yahoo search results page (links to maps, contact info, schedules etc.), it's time to start using RDFa or microformats.

Beyond the specifics of semantic standards or technologies at this point in time, though, lies the increasing demand for such tools. The volume and heterogeneity of data is increasing rapidly, not least because the means of fishing structured data out of unstructured data are improving. At the same time, the format of structured data (its syntax) is much less of an issue than it once was, as is the means of shifting that data around. What remains is making sense of that data, and that requires semantic technology.

Resources

The semantic conference site gives an overview, but not any presentations, alas.

The California Blue Shield architecture was built with BBN’s Asio tool suite

More about the netkernel resource oriented computing platform can be found on the 1060 website

Twine is still in private beta, but will open up in overhauled form in the autumn.

Powerset is wikipedia with added semantic sauce.

DBpedia is the community effort to gather all public domain knowledge in a triple store. There’s a page that outlines all ways of accessing it over the web.

Yahoo’s SearchMonkey is the project to utilise semweb light specs in search results.

The e-framework, social software and mashups
http://blogs.cetis.org.uk/wilbert/2007/10/10/the-e-framework-social-software-and-mashups/
Wed, 10 Oct 2007 09:44:56 +0000

The e-framework has just opened a wiki, and will happily accommodate Web APIs and mashups. We show how the former works with the submission of an example of the latter.

The e-framework is all about sharing service oriented solutions in a way that can be easily replicated by other people in a similar situation. The situation of the Design for Learning programme is simple, and probably quite familiar: a group of very diverse JISC projects want to be able to share resources between each other, but also with a number of other communities. Some of these communities have pretty sophisticated sharing infrastructures built around institutional, national or even global repositories that do not interoperate with each other out of the box.

There are any number of sophisticated solutions to that issue, several of which are also being documented as e-framework Service Usage Models (SUMs). Sometimes, though, a simple, low-cost solution (preferably one that doesn't require agreement between all the relevant infrastructures beforehand) is good enough.

Simplicity and low cost are the essence of the Design for Learning sharing SUM. It achieves that by concentrating on a collection of collaboratively authored bookmarks that point to the Design for Learning resources. That way, the collection can happily sit alongside any number of more sophisticated infrastructures that need to store the actual learning resources. It also makes the solution flexible, future-proof and applicable to any resource that can be addressed with a URL.

[Diagram: The JISC Design for Learning Sharing SUM]

There is, of course, slightly more to it than that, and that’s what the SUM is designed to draw out. The various headings that make up a SUM draw out all the information that’s needed to either find the SUM’s parts (i.e. services), or to replicate, adapt or incorporate the SUM itself.

For example, the Design for Learning SUM shows how to link a bookmark store such as del.icio.us to a community website that can render a newsfeed. That is: it calls for a Syndicate service. This SUM doesn’t say which kind of Syndicate service exactly, but it does show where you have the choice and roughly what the choices are.
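As a minimal sketch of what consuming that Syndicate service could look like on the community website side, the Python snippet below pulls a tagged bookmark feed and turns it into a list of links. The feed URL and tag are placeholders, since the real ones depend on which bookmark store, account and tag the programme settles on.

    import feedparser   # a commonly used Python feed-parsing library

    # Placeholder feed URL: substitute the RSS/Atom feed for your bookmark
    # store, account and agreed tag.
    FEED_URL = "http://example.org/bookmarks/rss/tag/design4learning"

    def bookmarked_resources(feed_url=FEED_URL):
        feed = feedparser.parse(feed_url)
        # Each entry is a pointer to a learning resource held elsewhere.
        return [(entry.title, entry.link) for entry in feed.entries]

    if __name__ == "__main__":
        for title, link in bookmarked_resources():
            print(f'<li><a href="{link}">{title}</a></li>')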

By the same token, the SUM can be taken up and tweaked to meet different needs with a minimum of effort. Accommodating different bookmark stores at the same time, for example, is a matter of adding one of several mash-up builders between the Syndicate service providers and the community website. Or you could simply refer to the whole SUM as one part of a much wider and more ambitious resource sharing strategy.

Fortunately, completing a SUM is a bit easier now that there’s a specialised wiki. Bits and bobs can be added gradually and collaboratively until the point that it can be submitted as the official version. Once that’s done, feedback from the wider e-framework community will follow, and the SUM hosted in the e-framework registry. You should be able to see how that works by watching the Design for Learning SUM page in the coming weeks.

The Design for Learning sharing SUM wiki page.
The Design for Learning support pages, which will have an implementation of the sharing SUM soon.
How you can start your own e-framework page is outlined on the wiki itself.

Recycling webcontent with DITA
http://blogs.cetis.org.uk/wilbert/2007/05/23/recycling-webcontent-with-dita/
Wed, 23 May 2007 12:07:46 +0000

Lots of places and even people have a pile of potentially useful content sitting in a retired CMS or VLE. Or have content that needs to work on a site as much as a pdf or a booklet. Or want to use that great open stuff from the OU, but with a tweak in that paragraph and in the college's colours, please.

The problem is as old as the hills, of course, and the traditional answer in e-learning land has been to use one of the flavours of IMS Content Packaging. Which works well enough, but only at a level above the actual content itself. That is, it’ll happily zip up webcontent, provide a navigation structure to it and allow the content to be exchanged between one VLE and another. But it won’t say anything about what the webcontent itself looks like. Nor does packaging really help with systems that were never designed to be compliant with IMS Content Packaging (or METS, or MPEG 21 DID, or IETF Atom etc, etc.).

In other sectors and among some learning content vendors, another answer has been the use of single source authoring. The big idea behind that one is to separate content from presentation: if every system knows what all parts of a document mean, then the form can be varied at will. Compare the use of styles in MS Word. If you religiously mark everything as either one of three heading levels or one type of text, changing the appearance of even a book length document is a matter of seconds. In single source content systems, that can be scaled up to include not just appearance, but complete information types such as brochures, online help, e-learning courses etc.

The problem with the approach is that you need to agree on the meaning of parts. Beyond a simple core of a handful of elements such as ‘paragraph’ and ‘title’, that quickly leads to heaps of elements with no obvious relevance to what you want to do, but still lacking the two or three elements that you really need. What people think are meaningful content parts simply differs per purpose and community. Hence the fact that a single source mark-up language such as the Text Encoding Initiative (TEI) currently has 125 classes with 487 elements.

The spec

The Darwin Information Typing Architecture (DITA) specification comes out of the same tradition and has a similar application area, but with a twist: it uses specialisation. That means that it starts with a very simple core element set, but stops there. If you need to have any more elements, you can define your own specialisations of existing elements. So if the ‘task’ that you associate with a ‘topic’ is of a particular kind, you can define the particularity relative to the existing ‘task’ and incorporate it into your content.

Normally, just adding an element of your own devising is only useful for your own applications. Anyone else’s applications will at best ignore such an element, or, more likely, reject your document. Not so in DITA land. Even if my application has never heard of your specialised ‘task’, it at least knows about the more general ‘task’, and will happily treat your ‘task’ in those more general terms.
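A toy way to picture that fallback behaviour, purely as an illustration of the idea rather than of how a real DITA processor is implemented: each specialised element declares its more general ancestor, and a processor that has never heard of the specialisation simply walks up the chain until it finds something it does know.

    # Toy illustration of DITA-style specialisation fallback.
    # The element names and ancestry below are made up for the example.
    ANCESTRY = {
        "installation-task": "task",   # hypothetical specialisation of 'task'
        "task": "topic",
        "topic": None,
    }

    # The elements this (imaginary) processor actually knows how to handle.
    KNOWN = {"task", "topic"}

    def resolve(element):
        """Walk up the specialisation chain until a known element is found."""
        while element is not None and element not in KNOWN:
            element = ANCESTRY.get(element)
        return element

    print(resolve("installation-task"))   # -> 'task': treated in more general terms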

Though DITA is an open OASIS specification, it was developed in IBM as a solution for their pretty vast software documentation needs. They’ve also contributed the useful open source toolkit for processing content into and out of DITA (Sourceforge), with comprehensive documentation, of course.

That toolkit demonstrates the immediate advantage of specialisation: it saves an awful lot of time, because you can re-use as much code as possible. This works both in the input and output stage. For example, a number of transforms already exist in the toolkit to take docbook, html or other input, and transform it into DITA. Tweaking those to accept the html from any random content management system is not very difficult, and once that’s done, all the myriad existing output formats immediately become available. What’s more, any future output formats (e.g. for a new Wiki or VLE format) will be immediately useable once someone, somewhere makes a DITA to new format transform available.
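As a small illustration of what wiring such a transform into your own workflow might look like, here is a Python sketch that applies an HTML-to-DITA stylesheet with lxml. The filenames, including the stylesheet name, are placeholders for whichever transform from the toolkit, or your own tweaked copy of it, you actually use.

    from lxml import etree

    # Placeholder filenames: point these at a page exported from your CMS/VLE
    # and at the (possibly tweaked) HTML-to-DITA stylesheet.
    SOURCE_HTML = "legacy_page.html"
    HTML_TO_DITA_XSL = "h2d.xsl"

    def html_to_dita(source_path, stylesheet_path):
        transform = etree.XSLT(etree.parse(stylesheet_path))
        doc = etree.parse(source_path, etree.HTMLParser())
        # The result is a DITA topic that the rest of the toolkit can process
        # into any of its output formats.
        return transform(doc)

    if __name__ == "__main__":
        topic = html_to_dita(SOURCE_HTML, HTML_TO_DITA_XSL)
        print(etree.tostring(topic, pretty_print=True).decode())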

Moreover, later changes and tweaks to your own element specialisations don’t necessarily require re-engineering all tools or transforms. Hence that Darwin moniker. You can evolve datamodels, rather than set them in stone and pray they won’t change.

The catch

All of this means that it quickly becomes more attractive to use DITA than to make a set of custom transforms from scratch. But DITA isn't magic, and there are some catches. One is simply that some assembly is required. Whatever legacy content you have lying around, some tweakery is needed in order to get it into DITA, and out again, without losing too much of the original structural meaning.

Also, the spec itself was designed for software documentation. Though several people are taking a long, hard look at specialising it for educational applications (ADL, Edutech Wiki and OASIS itself), that’s not proven yet. Longer, non-screenful types of information have been done, but might not offer enough for those with, say, an existing docbook workflow.

The technology for the toolkit is of a robust, if pedestrian, variety. All the elements and specialisations are in Document Type Definitions (DTDs), a decidedly retro XML technology, though you can use the hipper XML Schema or RelaxNG as well. The toolkit itself is also rather dependent on extensive path hackery. High volume, real time content transformation is therefore probably best done with a new tool set.

Those tool issues are independent of the architecture itself, though. The one tool that would be difficult to remove is XSL Transforms, and that is pretty fundamental. Though ‘proper’ semantic web technology might have offered a far more powerful means to manipulate content meaning in theory, the more limited, but directly implementable XSLTs give it a distinct practical edge.

Finally, direct content authoring and editing in DITA XML poses the same problem that all structural content systems suffer from. Authors want to use MS Office, and couldn’t care less about consistent meaningful document structuring, while the Word format is a bit of a pig to transform and it is very difficult to extract a meaningful structure from something that was randomly styled.

Three types of solution exist for this issue. One is to use a dedicated XML editor meant for the non-angle bracket crowd. Something like XMLMind's editor is pretty impressive and free to boot, but may only work for dedicated content authors, simply because it is not MS Word. You can also use MS Word with templates, either directly with a plug-in, or with some post-processing via OpenOffice (much like ICE does). Those templates make Word behave differently from normal, though, which authors may not appreciate.

Perhaps it is best, therefore, to go with the web oriented simplicity and transform orientation of DITA, and use a wiki. Wiki formats are so simple that mapping a style to a content structure is pretty safe and robust, and not too alien and complex for most users.
