Wilbert Kraan » assessment (technology)
Cetis blog, http://blogs.cetis.org.uk/wilbert

QTI 2.1 tool tutorial
http://blogs.cetis.org.uk/wilbert/2015/02/25/qti-2-1-tool-tutorial/
25 February 2015

Learning about an interoperability specification such as QTI 2.1 becomes easier when you can see it working in a set of tools. In this post, we’ll create a very simple test using three freely available tools.

Item creation

We’ll start by making an item, using Kingston University’s Uniqurate. As an item and test editor it supports only a limited set of question item types, but that is also its virtue: it’s nice and simple.

Since we’re making just an item, we click “+Question”, and then hit the pencil and paper icon to edit the content of the new item we’ve just started.

[Screenshot: the Uniqurate editor]

The editing window already has boxes for the title of the item and the prompt (the item’s instructions). We’ll make a question about cities in Arizona, asking people to pick the cities that are in that state from a list.

Since we’re doing a multiple choice question, we’ll pick the multiple choice widget from the list of components on the left, and drag it into the free slot underneath the prompt. Then we’ll start filling it with cities. Click “+” to add a slot, and give each correct choice a score of 1 and each incorrect choice a score of 0. maxChoices should be set to at least the number of correct choices.

To see what the item’s code looks like, hit the “Expert Mode” button. Be aware that changes made to the code in expert mode may not be retained when you go back to easy mode.
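For reference, here is a minimal sketch of what the QTI 2.1 XML behind such a multiple response item looks like. The identifiers, choices and scoring values are illustrative assumptions; Uniqurate’s actual output will differ in its details.

    <assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
        identifier="arizonaCities" title="Cities in Arizona"
        adaptive="false" timeDependent="false">
      <!-- Multiple response: more than one choice can be correct -->
      <responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
        <correctResponse>
          <value>choiceA</value>
          <value>choiceB</value>
        </correctResponse>
        <!-- Each correct choice maps to 1 point, each incorrect one to 0 -->
        <mapping lowerBound="0" defaultValue="0">
          <mapEntry mapKey="choiceA" mappedValue="1"/>
          <mapEntry mapKey="choiceB" mappedValue="1"/>
          <mapEntry mapKey="choiceC" mappedValue="0"/>
        </mapping>
      </responseDeclaration>
      <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
      <itemBody>
        <p>Which of these cities are in Arizona?</p>
        <choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="2">
          <simpleChoice identifier="choiceA">Phoenix</simpleChoice>
          <simpleChoice identifier="choiceB">Tucson</simpleChoice>
          <simpleChoice identifier="choiceC">Las Vegas</simpleChoice>
        </choiceInteraction>
      </itemBody>
      <!-- Standard template: sum the mapped values of the selected choices into SCORE -->
      <responseProcessing template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response"/>
    </assessmentItem>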

Hit the save button, and save either the file or a content package to a convenient place on your machine.
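The content package option simply zips the item XML together with an imsmanifest.xml that declares it, so an importing tool knows what it is getting. A minimal sketch, with made-up identifiers and file names:

    <manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" identifier="MANIFEST-1">
      <organizations/>
      <resources>
        <!-- The type attribute marks this resource as a QTI 2.1 item -->
        <resource identifier="RES-1" type="imsqti_item_xmlv2p1" href="arizonaCities.xml">
          <file href="arizonaCities.xml"/>
        </resource>
      </resources>
    </manifest>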

While Uniqurate is perfectly able to make tests itself, seeing whether the QTI item can be integrated into a test by another tool is a nice demonstration of interoperability, so we’ll use BPS Onyx for that.

Test creation

We start by hitting the “Create test” link, and giving the new test a name. The default test gives a good idea of the basic structure of a QTI 2.1 test.
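In QTI 2.1 XML, that basic structure is a test containing one or more test parts, each holding sections that reference the item files. A minimal sketch with illustrative identifiers:

    <assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
        identifier="demoTest" title="Demo test">
      <testPart identifier="part1" navigationMode="nonlinear" submissionMode="individual">
        <assessmentSection identifier="section1" title="Section 1" visible="true">
          <!-- Each item lives in its own XML file, referenced from the section -->
          <assessmentItemRef identifier="item1" href="arizonaCities.xml"/>
        </assessmentSection>
      </testPart>
    </assessmentTest>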

To add the Arizona item to the test, it first needs to be added to the question bank, so we navigate to “Test resources”. In there, hit “Import content”, select the item, and hit upload.

[Screenshot: Onyx]

Then we click on the edit button of the test we made in order to add the item. Highlight the section we want to put the item in, and hit the “Question bank” folder icon. Select the item, and click “Add an element”. The tabs on the item, the section and the test give a good idea of all the configuration possibilities.

Once the test is done, hit “Save test”, then “Test resources”. Select the test in the list, and hit “Export”, and save the test package in a convenient place.

Running a test

To run our test, we’ll use QTIWorks. Click “Demos”, then “Quick Upload and Run”. While you click through the test, be sure to hit “Open Author’s feedback” to view all the state information QTIWorks collects about the test. All of this information is available in standard QTI results XML for each candidate and each test.

[Screenshot: QTIWorks]
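That results XML follows QTI’s results reporting format. A heavily trimmed sketch of the kind of report involved; the identifiers match the example item above, and the exact detail QTIWorks emits will differ:

    <assessmentResult xmlns="http://www.imsglobal.org/xsd/imsqti_result_v2p1">
      <context sourcedId="candidate-1234"/>
      <itemResult identifier="arizonaCities" datestamp="2015-02-25T09:00:00" sessionStatus="final">
        <!-- What the candidate actually selected -->
        <responseVariable identifier="RESPONSE" cardinality="multiple" baseType="identifier">
          <candidateResponse>
            <value>choiceA</value>
            <value>choiceB</value>
          </candidateResponse>
        </responseVariable>
        <!-- The score that response processing computed -->
        <outcomeVariable identifier="SCORE" cardinality="single" baseType="float">
          <value>2.0</value>
        </outcomeVariable>
      </itemResult>
    </assessmentResult>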

QTIWorks also has a number of QTI example items and tests built in. The full collection gives a very good idea of the capabilities of QTI 2.1, and hitting the “Open Author’s feedback” link allows you to inspect the XML as well as see what it does.

Uniqurate, Onyx and QTIWorks are not the only readily available QTI tools. Be sure to have a look at TAO too for a complete open source solution.

When does a book become a web platform?
http://blogs.cetis.org.uk/wilbert/2014/06/24/when-does-a-book-become-a-web-platform/
24 June 2014

During last week’s CETIS conference I ran a session that assessed how ebooks can function as an educational medium beyond the paper textbook.

After reminding ourselves that etextbooks are not yet as widespread as ebook novels, and that paper books generally are still more widely read, we examined what ebook features make a good educational experience.

Though many features could have been mentioned, the majority were about the learning experience itself. Top of the bill: formative assessment at the end of a chapter. Whether online or offline, it needs to be interactive, and plenty of question items need to be readily available. Another notable feature in this area was a desire for contextualised discussion about a text: global discussion is good, but chat limited to the other learners on a course is better. A way of asking a teacher for clarification by highlighting text was another notable request.

These features were then compared to the current state of the art. Colin Smythe presented the latest EDUPUB work from IMS, IDPF and the W3C, which integrates books with VLEs, as well as with analytics and assessment platforms. The solution is slick, and entirely web based. This contrasts with the solution I demoed before the formal EDUPUB work started: unlike IMS’ example, my experiment works in almost any ebook software, but it doesn’t include IMS’ Caliper analytics capability.

But then Mick Chesterman of Flossmanuals and Manchester Metropolitan University reminded us that reading features aren’t the only ones worth considering. The open source Booktype platform allows communities to quickly and easily write books collaboratively, and then clone, share or merge them in a process called ‘federated publishing’.

The editability of standard EPUB format ebooks also raised the core question: what is the difference between an ebook and a web site? The interactivity and media support now possible in ebooks are blurring the distinction, but features such as the possibility of editing could prove a key distinction.

Another distinction, but one that may not persist, is a book’s persistence itself. With more functionality living outside of the book, on servers on the wider internet, how will a book endure? While intermittent connectivity means that offline access is still desirable now, will the ever increasing ubiquity of bandwidth spell the end for self-contained media?

The opening slides.

Colin Smythe’s presentation on EDUPUB.

My own presentation on embedding QTI in EPUB3.

Mick Chesterman’s slides on Booktype.

Using standards to make assessment in e-textbooks scalable, engaging but robust
http://blogs.cetis.org.uk/wilbert/2013/11/06/using-standards-to-make-assessment-in-e-textbooks-scalable-engaging-but-robust/
6 November 2013

During last week’s EDUPUB workshop, I presented a demo of how an IMS QTI 2.1 question item could be embedded in an EPUB3 e-book in a way that is engaging, but also works across many e-book readers. Here’s the why and how.

One of the most immediately obvious differences between a regular book and an e-textbook is the inclusion of little quizzes at the end of a chapter that allow the learner to check their understanding of what they’ve just learned. Formative assessment matters in textbooks.

When moving to electronic textbooks, there is a great opportunity to make that assessment more interactive, provide richer feedback, and connect the learning to a wider view of how a student is doing (i.e. learning analytics). The question is how to do that in a way that works across many e-reading devices and applications, at a scale that works for publishers.

[Screenshot: QTI item in Adobe Editions]

Scalability is where interoperability standards like EPUB3, IMS Learning Tools Interoperability (LTI) and IMS Question and Test Interoperability (QTI) 2.1 come in. People use a large number of different software systems in the authoring, management, and playback of e-books. Connecting each of those to all the others with one-off custom integrations just gets too complex, too expensive and too brittle; that’s why an increasing number of publishers and software vendors agreed on the EPUB specification. As long as you implement that spec, solutions can scale across many e-book applications. IMS QTI does the same job for question and test material, and LTI does it for connecting VLEs to any online learning tool.

Which leaves the question of how to square the circle: make the assessment experience as engaging and effective as possible, while also working on devices with very different capabilities.

Fortunately, EPUB3 files can use a number of techniques that allow an author to adapt the content to the capability of the device it is being read on. I used those techniques to present the same QTI item in three different ways: as a static quiz (much like a printed book), as a simple interactive widget, and as a feedback-rich test run by an online assessment system inside the book. The latter option makes detailed analytics data available, and it should also make it possible to send a grade to a VLE automatically.

The how

[Screenshot: QTI item in Apple iBooks]

For the static representation and the interactive widget, I relied on Steve Lay’s rather brilliant transform from QTI XML to HTML5 (and back again), plus some JavaScript to make the HTML5 interactive. By including this QTI-derived HTML5 in the EPUB, you get all the advantages of standard QTI in a way that still works in a simple, offline reader such as Adobe Editions, as well as in more capable software such as Apple’s iBooks.
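To give a flavour of the idea (this is an illustration of the general approach, not the actual output of Steve Lay’s transform), a QTI choiceInteraction maps quite naturally onto standard XHTML form controls, with data attributes carrying the scoring information for a script to use:

    <fieldset class="choiceInteraction" data-response-identifier="RESPONSE" data-max-choices="2">
      <legend>Which of these cities are in Arizona?</legend>
      <!-- Without JavaScript this is still a perfectly readable, printed-book style quiz -->
      <label><input type="checkbox" value="choiceA" data-mapped-value="1"/> Phoenix</label>
      <label><input type="checkbox" value="choiceB" data-mapped-value="1"/> Tucson</label>
      <label><input type="checkbox" value="choiceC" data-mapped-value="0"/> Las Vegas</label>
    </fieldset>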

For the most capable, online ebook readers such as Readium, the demo e-textbook connects to QTIWorks, an online QTI compliant assessment engine. It does that via IMS LTI 1.1, but in a somewhat unusual way: in LTI terms, the e-book behaves as a tool consumer; that is, like a VLE. Using a hash of an OAuth secret and key, it establishes a connection to QTIWorks, identifies the user, and retrieves the right quiz to show inside the ebook. A place to send the results of the quiz to is also provided, but I’ve not tested that yet. QTIWorks makes a detailed report available of exactly what the learner did with each item, which can be retrieved in a variety of machine readable formats.
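Concretely, an LTI 1.1 launch is an OAuth 1.0 signed HTTP POST of form parameters to the tool’s launch URL. A trimmed sketch of the sort of launch the book has to assemble and sign; the URL and values are made up, and the signature is computed over all the parameters using the shared secret:

    <form action="https://qtiworks.example.org/lti/launch" method="post">
      <!-- Core LTI 1.1 launch parameters -->
      <input type="hidden" name="lti_message_type" value="basic-lti-launch-request"/>
      <input type="hidden" name="lti_version" value="LTI-1p0"/>
      <input type="hidden" name="resource_link_id" value="chapter3-quiz"/>
      <input type="hidden" name="user_id" value="reader-1234"/>
      <!-- Where the tool may post a grade back to; untested in the demo -->
      <input type="hidden" name="lis_outcome_service_url" value="https://example.org/outcomes"/>
      <input type="hidden" name="lis_result_sourcedid" value="reader-1234-chapter3"/>
      <!-- OAuth 1.0 signing; the key (and the secret used for the signature) ship inside the book -->
      <input type="hidden" name="oauth_consumer_key" value="ebook-demo-key"/>
      <input type="hidden" name="oauth_signature_method" value="HMAC-SHA1"/>
      <input type="hidden" name="oauth_timestamp" value="1383750000"/>
      <input type="hidden" name="oauth_nonce" value="c1a2b3"/>
      <input type="hidden" name="oauth_signature" value="..."/>
      <input type="submit" value="Start quiz"/>
    </form>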

[Screenshot: QTI item in Readium]

Because the secret and the key have to be included in the book, the LTI connection the book establishes is not as secure as an LTI connection from a proper VLE. For access to some formative assessment, that may be a price worth paying, though.

The demo EPUB3 uses both scripting and some metadata to determine which version of the QTI item to show. The QTI item, the LTI launch and the EPUB textbook are all valid according to their specifications, and need nothing more than stock readers to work.
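The selection logic follows the usual progressive enhancement pattern; a simplified sketch of the idea (the actual demo also consults package metadata, and launchLti() and makeInteractive() are hypothetical helpers standing in for the real code):

    <!-- The static quiz is in the markup by default, so non-scripted readers show it as-is -->
    <div id="quiz">
      <p>Which of these cities are in Arizona?</p>
      <ol><li>Phoenix</li><li>Tucson</li><li>Las Vegas</li></ol>
    </div>
    <script>
    // Only runs in readers that support scripting
    var quiz = document.getElementById("quiz");
    if (navigator.onLine) {
      launchLti(quiz);        // online: swap the static quiz for an LTI launch into QTIWorks
    } else {
      makeInteractive(quiz);  // offline but scripted: wire up the local interactive widget
    }
    </script>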

Acknowledgements and links

David McKain for making QTIWorks
Steve Lay for the QTI HTML transforms
John Kristian of the OAuth project for the OAuth JavaScript library
Stephen Vickers for the ceLTIc IMS LTI development tools

The (ugly, content-less) demonstration EPUB3 and associated code are available from GitHub.

QTI 2.1 spec release helps spur over £250m of investment worldwide
http://blogs.cetis.org.uk/wilbert/2013/05/03/qti-21-spec-release-helps-spur-over-250m-of-investment-worldwide/
3 May 2013

With the QTI 2.1 specification finalised and released, we’re seeing significant global investment in tools that implement the spec. Tools developed by JISC projects have been central.

It has taken a while, but in March this year IMS Question and Test Interoperability 2.1 was released as a final specification. That means people can implement it secure in the knowledge that it won’t change or disappear, even if there are likely to be future versions.

The release, not coincidentally, comes at a time when there is a lot of activity around the use of the specification worldwide. This level of investment isn’t just due to a set of documents on a website; it is also due to the range of working implementations that demonstrate how QTI 2.1 works, and that’s where a couple of Jisc projects play a crucial role. But let’s first have a look at what people are doing with the spec around the world.

The Netherlands

The biggest assessment project in the Low Countries at the moment is the effort to move all online school exams to the QTI 2.1 format. The multi-million euro effort is led by the Commissie voor Examens and managed by DUO, with the CITO exam body and Trifork as contractors. Because of the specific demands put upon the whole infrastructure, the partners will need an extensive profile.

Accompanying the formal exam profile is the NL-QTI effort led by Kennisnet. This pragmatic but relatively rich profile of the specification is meant to facilitate an ecosystem of material and software for general use in schools. We should see more of that profile in the near future.

Lastly, SURF is currently running the Assessment and Assessment Driven Learning programme in higher education, which will revolve around a sharable infrastructure for online assessment. Part of that programme will be an exploration of the extent to which such sharing can be facilitated by QTI 2.1.

Germany

The main player here is the Onyx suite from BPS. This complete assessment suite of editor, test player, analytics module and converter is built around QTI 2.1, and has been used standalone as well as integrated with the OLAT VLE. One instance of the latter, shared between all 13 universities in Saxony, has about 50,000 users, with about 25,000 log-ins per day. Similar consortia exist in Thuringia and Rhineland-Palatinate, and there are further university-specific installations with a combined total of about 108,000 users. The hosted Onyx test player runs about 300 to 1,000 test runs a day.

France

The work in France is on a smaller scale, but is mature and well targeted. The MOCAH team of UPMC (Paris 6) has developed a system in which QTI 2.1 source is transformed so that it can run on generic Java or PHP based web servers, as well as in specialised QTI players. The focus is on the teaching of math to secondary school students, and the system has been used in 160 classes, for which 400 patterns have been created. Patterns are question item templates that generate large numbers of items for students to practise on; a key requirement.
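QTI 2.1 supports this kind of templating natively: an item can declare template variables whose values are regenerated every time the item is instantiated, so one pattern yields many concrete practice items. A minimal sketch of the mechanism (not MOCAH’s actual code), showing the fragments that would sit inside an assessmentItem for a simple addition drill:

    <!-- Declare two placeholders, filled in anew for each instantiation of the item -->
    <templateDeclaration identifier="A" cardinality="single" baseType="integer"
        mathVariable="false" paramVariable="false"/>
    <templateDeclaration identifier="B" cardinality="single" baseType="integer"
        mathVariable="false" paramVariable="false"/>
    <templateProcessing>
      <setTemplateValue identifier="A"><randomInteger min="2" max="9"/></setTemplateValue>
      <setTemplateValue identifier="B"><randomInteger min="2" max="9"/></setTemplateValue>
    </templateProcessing>
    <!-- In the itemBody, printedVariable shows the generated values to the student, e.g.: -->
    <!-- <p>What is <printedVariable identifier="A"/> + <printedVariable identifier="B"/>?</p> -->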

South Korea

After experiments in the past with, among other tools, QTI 2.1 generated from common word-processing tools, KERIS – the Korea Education and Research Information Service – is now engaging vendors in a project to integrate QTI 2.1 in EPUB 3 ebooks. Various options are being explored at the moment, with results due later this year.

USA

This is where the development-at-scale is taking place at the moment, thanks to the Race To The Top (RTTT) projects that were funded by the Obama administration. There are two state-led consortia – Smarter Balanced and PARCC – with a mission to overhaul the whole assessment infrastructure in schools, base it on open standards and open source software, and provide a tranche of new material to go with it. They had an initial budget of $160-170 million each, with about a third of those budgets intended for tool development. QTI 2.1, along with the Accessible Portable Item Protocol (APIP) extensions, is at the heart of the initiative.

The size of those consortia is having effects elsewhere too. One major educational publisher has already decided to standardise internally on QTI 2.1, and others are looking at the same option. Not that such a thing is new: organisations such as the Northwest Evaluation Association (NWEA) and the world’s largest testing organisation – ETS – have already chosen QTI 2.1 as their internal ‘lingua franca’. Rather than making many point-to-point integrations between their own systems and collections, and then having to do that again with each organisation they partner with, they translate each format to and from QTI.

UK

Meanwhile, back in the UK, JISC has sponsored a small community – most recently via the Assessment & Feedback programme – that has played a vital role in making QTI 2.1 real. ‘Real’ in the sense of checking whether and how the specification would work, as it was being designed, in the case of Jassess. ‘Real’ also in the sense of putting QTI 2.1 material in the hands of a range of teachers and learners, via editing tools such as Uniqurate and playback tools such as QTIWorks. An excellent RSC Scotland post outlines exactly how those outputs of the QTI-DI and Uniqurate projects work.

All of these UK projects’ tools, guidance and assessment materials are known to all the above communities, as well as plenty of others I’ve not even mentioned. In some cases, the JISC sponsored tools have been extended by others; in other cases, the presence and online accessibility of the resources meant that those other communities knew what was possible, what their own tools and materials should look like, and how they could interoperate.

At this point, it’s not clear whether the new Jisc will support future work in this area. What is clear, however, is that JISC’s past investment will continue to have a global effect well beyond the initial outlay.

Assessment & Feedback tool development lessons
http://blogs.cetis.org.uk/wilbert/2013/01/21/assessment-feedback-tool-development-lessons/
21 January 2013

With most software development projects in the JISC Assessment & Feedback programme drawing to a close, it’s a good time to look at some common themes in their findings.

There’s a small but perfectly formed little cluster of four projects in ‘strand C’ of the Assessment & Feedback programme. Strand C is the techy corner, because these are the projects that took existing open source tools and adapted them for use in organisations beyond the ones they were developed in.

Within the strand, the tools that were being developed were:

  • Rogō, a complete assessment authoring, playback and management system, developed by the eponymous project at Nottingham University, and deployed in three other institutions
  • OpenMentor, a system that analyses tutor feedback on assignments, developed at the OU, now deployed in two other institutions by the OMTetra project
  • QTIWorks, a full-featured, QTI compliant assessment and test player, developed at Edinburgh University, now deployed by the QTI-DI project
  • Uniqurate, an online, QTI compliant assessment and test authoring tool developed at Kingston University by the eponymous project, and coupled to QTIWorks

Looking through their development experiences, there’s a couple of themes that seem to recur:

User interface complexity

What to do when one set of users needs something simple, and another set wants full access to all functions? The clearest example of that dilemma was presented to the Uniqurate project: there was an existing assessment item editor called Mathqurate that gave access to all aspects of many different question types, but was only really usable by experts, and an earlier version of Uniqurate that was very friendly, but also very limited. Which is why the current project aimed to become the “Goldilocks editor”: by offering a flexible but easily graspable set of item type modules, and also by offering different modes that are accessible to more intrepid users.

The most advanced of these modes gives the user access to the QTI source code of a question, which is something that is also available in QTIWorks. Another, arguably more important, simple versus complex user interface issue that QTIWorks has to deal with is how to show runtime variables. For authors, this is vital, but for candidates it is rather confusing and would often defeat the assessment. The solution? Like Uniqurate: different modes for different audiences.

In OpenMentor, the audience is broadly the same – tutors – but some wanted to know what’s going on in the ‘black box’ that takes their feedback on assignments and categorises it into a well-known taxonomy, while others were just happy with the results. The likely solution is also to include an advanced mode in a future version of the tool.

Interoperating with other systems

Or: how do I get user information in my tool without asking those users to type it all in?

OpenMentor and Rogō went down the LDAP route, given that it is the most common way to distribute person information inside organisations. It worked for these tools too, though Rogō had to spend quite some time at one of the new sites to adapt the LDAP to Rogō mapping. Some assembly may be required, in other words.

Rogō and QTIWorks also implemented the much newer IMS Learning Tools Interoperability (LTI) specification. This specification is designed to allow more ad hoc connections between a VLE and tools such as those from the Assessment & Feedback programme. LTI is intended primarily to identify users, but it can also be used to move some user information from one system to another, particularly when those systems are in different organisations. This function is still evolving, though, as Rogō found when they looked for an external examiner role within LTI: the role didn’t exist when they implemented the specification, but LTI supports it now.

Fostering a community

Because all four projects are open source, and because they were all meant to facilitate wider adoption, community building with users and other developers was paramount. It’s not easy, though.

Uniqurate noticed this particularly with regard to the use of agile software methodologies, as outlined in their last blogpost. Agile is generally advocated because it makes sure development happens in small steps that track what users actually want. Except that the users in this case were very busy academics who were enthusiastic, but rarely available during term time. And a project is too short to easily work around that. Conclusion: sometimes other methodologies may work better.

The OMTetra project used workshops and surveys to engage its user community, which did work. Developer engagement might be a slightly different matter, however: there are three different public code repositories for OpenMentor, of varying degrees of currency. The branch developed during this project is the slightly, rather than the very, stale one. Whether all the developments have made it through to the latest branch is not clear. It is still actively developed, however, and that’s the main thing.

For QTIWorks, code and documentation are clearer, and with success: the code has been adopted by developers on one of the very large Race To The Top assessment projects in the United States. It has been used there to prototype some potentially revolutionary new functionality in interoperable assessment material, which is likely to become part of the QTI specification itself. Part of the success may also be due to the fact that, like Uniqurate, a demo version of QTIWorks is available online.

Both QTIWorks and Uniqurate have, however, been used for teaching and learning on a relatively limited scale compared to Rogō. As the Rogō project discovered, that can be a mixed blessing. Once courses start to rely on a system, the demand for support of all kinds increases exponentially, and that’s before Rogō is used widely for summative assessment. Sound user and installation documentation helps, but doesn’t resolve all the issues that other organisations may need help with, whether there’s a support business model in place or not. Also, the demands of other organisations inevitably lead to tensions with the priorities of the original developers. That’s manageable, but requires thought and ongoing commitment.

Conclusion

It is a bit difficult to generalise across these four projects, much less across all open source software development at universities. Yet it seems fairly clear that the main issue is community building: once the right number of the right mix of partners are on board, other issues become more tractable. Fostering such communities is difficult, but it is something that an organisation like OSS Watch can help with, as Rogō has already been doing.

Question and Test tools demonstrate interoperability
http://blogs.cetis.org.uk/wilbert/2012/03/16/question-and-test-tools-demonstrate-interoperability/
16 March 2012

As the QTI 2.1 specification gets ready for final release, and new communities start picking it up, conforming tools demonstrated their interoperability at the JISC – CETIS 2012 conference.

The latest version of the world’s only open computer aided assessment interoperability specification, IMS’ QTI 2.1, has been in public beta for some time. That was time well spent, because it allowed groups from at least eight nations on four continents to apply it to their assessment tools and practices, surface shortcomings in the spec, and fix them.

Nine of these groups came together at the JISC – CETIS conference in Nottingham this year to test their tools against a range of QTI packages, from the very simple to the highly specialised. In the event, only three interoperability bugs were uncovered in the tools, and those are being vigorously stamped on right now.

Where it gets more complex is who supports what part of the specification. The simplest profile, provisionally called CC QTI, was supported by all players and some editors in the Nottingham bash. Beyond that, it’s a matter of particular communities matching their needs to particular features of the specification.

In the US, the Accessible Portable Item Protocol (APIP) group brings together a number of major test and tool vendors who are building a profile for summative testing in schools. Their major requirement is the ability to finely adjust the presentation of questions to learners with diverse needs, which they have accomplished by building an extension to QTI 2.1. The material also works in QTI tools that haven’t been built explicitly for APIP yet.

A similar group has sprung up in the Netherlands, where the goal is to define all computer aided high stakes school testing in the country in QTI 2.1. That means that a fairly large infrastructure of authoring tools and players is being built at the moment. Since the testing material covers so many subjects and levels, there will be a series of profiles to cover them all.

An informal effort has also sprung up to define a numerate profile for higher education, which may yet be formalised. In practice, it already works in the tools made by the French MOCAH project, and by the JISC Assessment and Feedback sponsored QTI-DI and Uniqurate projects.

For the rest of us, it’s likely that IMS will publish something very like the already proven CC QTI as the common core profile that comes with the specification.

More details about the tools that were demonstrated are available at the JISC – CETIS conference pages.

IMS Question and Test Interoperability 2.1 tools demonstrate interoperability
http://blogs.cetis.org.uk/wilbert/2010/09/30/ims-question-and-test-interoperability-21-tools-demonstrate-interoperability/
30 September 2010

While most of Europe was on the beach, a dedicated group of QTI vendors gathered in Koblenz, Germany to demo what a standard should do: enable interoperability between a variety of software tools.

A total of twelve tools were demonstrated for the attendees of the IMS quarterly meeting, which was held at the University of Koblenz-Landau. The vendors and projects came from a variety of communities in Poland, Korea, France, Germany and the UK, and brought a mix of editors, players and validators.

All other things being equal, the combination of such a diversity of purposes with the comprehensive expressiveness of QTI means that there is every chance that a set of twelve tools will implement different, non-overlapping subsets of the specification. This is why the QTI working group is currently working on the definition of two profiles: CC (Common Cartridge) QTI and what is provisionally called the Main profile.

The CC QTI profile is very simple and follows the functionality of the QTI 1.2 profile that is currently used in the IMS Common Cartridge educational content exchange format. Nine out of the twelve tools had implemented that profile, and they all happily played, edited or validated the CC QTI reference test.

With that milestone, the group is well on the way to the final, public release of the QTI 2.1 specification. Most of the remaining work is around the definition of the Main profile.

Initial discussion in Koblenz suggested an approach that encompasses most of the specification, with the possible exclusion of some parts that are of interest to some, but not all subjects or communities. To make sure the profile is adequate and implementable, more input is sought from publishers, qualification authorities and others with large collections of question and test items. Fortunately, a number of these have already come forward.
