Understanding large numbers in context: an exercise with Socrative

I came across an exercise that aims to demonstrate that numbers are easier to understand when broken down and put into context. It is one of a number of really useful resources for the general public, journalists and teachers from the Royal Statistical Society. The idea is that the large numbers associated with important government budgets (you know, a few billion here, a few billion there, and pretty soon you’re dealing with large numbers) are difficult to get our heads around, whereas the same number expressed in a more familiar context, e.g. a person’s annual or weekly budget, should be easy to understand. I wondered whether it would work as an in-class exercise using Socrative; it’s the sort of thing that might be a relevant ice breaker for a critical thinking course that I teach.

A brief aside: Socrative is a free online student response system which “lets teachers engage and assess their students with educational activities on tablets, laptops and smartphones”. The teacher writes some multiple choice or short-response questions for students to answer, normally in class. I’ve used it in some classes and students seem to appreciate the opportunity to think and reflect on what they’ve been learning; I find it useful in establishing a dialogue which reflects the response from the class as a whole, not just one or two students.

I put the questions from the Royal Stats. Soc. into Socrative as multiple choice questions, with no feedback on whether the answer was right or wrong (except for the final question), just some linking text to explain what I was asking about. I left it running in “student-paced” mode and asked friends on Facebook to try it out over the next few days. Here’s a run-through of what they saw:

[Six screenshots of the quiz questions and linking text, as respondents saw them in Socrative.]

Socrative lets you download the results as a spreadsheet showing the responses from each person to each question. A useful way to visualise the responses is as a Sankey diagram:
[Sankey diagram of the responses, flowing from answers to the first question through to the last.]

[I created that diagram with sankeymatic. It was quite painless, though I could have been more intelligent in how I got from the raw responses to the input format required.]
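
For what it’s worth, sankeymatic’s input format is just a list of “Source [Amount] Target” lines, so getting there from the downloaded spreadsheet is a small scripting job. Here’s a minimal sketch of the sort of thing I mean, in Python, assuming the export has been saved as a CSV with one row per respondent and one answer column per question (the file and column names are made up):

```python
import csv
from collections import Counter

# Count transitions between answers to consecutive questions, then print
# them in sankeymatic's "Source [Amount] Target" input format.
# Assumes a CSV with one row per respondent and one answer column per
# question; the file and column names below are hypothetical.
QUESTIONS = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6"]

flows = Counter()
with open("socrative_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        for q_from, q_to in zip(QUESTIONS, QUESTIONS[1:]):
            # Prefix each answer with its question so that identical answer
            # texts in different questions become distinct nodes.
            flows[(f"{q_from}: {row[q_from]}", f"{q_to}: {row[q_to]}")] += 1

for (source, target), n in sorted(flows.items()):
    print(f"{source} [{n}] {target}")
```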

So did it work? What I was hoping to see was the initial answers being all over the place but converging on the correct answer; that is, not so many people choosing £10B per annum for Q1 as chose £30 per person per week for the last question. That’s not really what I’m seeing. But I have some strange friends: a few commented that they knew the answer for the big per annum figure but either could or couldn’t do the arithmetic to get to the weekly one. It’s also possible that the question wording misled people into thinking about how much it would cost to treat a person for a week in an NHS hospital. And some of my friends are more interested in educational technology than in answering questions about statistics, and may just have been poking around to see how Socrative worked. So I’m still interested in trying out these questions in class. Certainly Socrative worked well for this, and one thing I learnt (somewhat by accident) is that you can leave a Socrative quiz open for responses for several months.
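
As an aside, the arithmetic that defeated some people is only two divisions: annual budget ÷ population ÷ 52. A quick sketch with round, illustrative figures (roughly £100 billion a year and 64 million people; not the exact numbers used in the quiz):

```python
# Convert a headline annual budget into a per-person, per-week figure.
# The inputs are ballpark illustrations, not the quiz's exact numbers.
annual_budget = 100e9  # about £100 billion per annum
population = 64e6      # about 64 million people in the UK

per_person_per_week = annual_budget / population / 52
print(f"£{per_person_per_week:.0f} per person per week")  # ~£30
```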

 


QAA Scotland Focus On Assessment and Feedback Workshop

Today was spent at a QAA Scotland event which aimed to identify and share good practice in assessment and feedback, and to gather suggestions for feeding into a policy summit for senior institutional managers that will be held on 14 May. I’ve never had much to do with technology for assessment, though I’ve worked with good specialists in the area, so this was a useful event for catching up with what is going on.
"True Humility" by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia, click image for details)
“True Humility” by George du Maurier, originally published in Punch, 9 November 1895. (Via Wikipedia)
 

What is there to learn about standardization?

Cetis (the Centre for Educational Technology, Interoperability and Standards) and the IEC (Institute for Educational Cybernetics) are full of rich knowledge and experience in several overlapping topics. While the IEC has much expertise in learning technologies, it is Cetis in particular that holds a body of knowledge and experience of all kinds of standardization organisations and processes, as well as of approaches to interoperability that are not necessarily based on formal standardization. We have an impressive international profile in the field of learning technology standards.

When does a book become a web platform?

During last week’s CETIS conference I ran a session to assess how ebooks can function as an educational medium beyond the paper textbook.

After reminding ourselves that etextbooks are not yet as widespread as ebook novels, and that paper books generally are still most widely read, we examined what ebook features make a good educational experience.

Though many features could have been mentioned, the majority were still about the learning experience itself. Top of the bill: formative assessment at the end of a chapter. Whether online or offline, it needs to be interactive, and there need to be plenty of items readily available. Other notable features in this area included a desire for contextualised discussion about a text: global discussion is good, but chats limited to the other learners on a course are better. A way of asking a teacher for clarification by highlighting text was another notable request.

Using standards to make assessment in e-textbooks scalable, engaging but robust

During last week’s EDUPUB workshop, I presented a demo of how an IMS QTI 2.1 question item could be embedded in an EPUB3 e-book in a way that is engaging, but also works across many e-book readers. Here’s the why and how.

One of the most immediately obvious differences between a regular book and an e-textbook is the inclusion of little quizzes at the end of a chapter that allow the learner to check their understanding of what they’ve just learned. Formative assessment matters in textbooks.
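
To give a flavour of the format: a QTI 2.1 item is an XML document that declares the correct response separately from the presentation, which is what lets any conforming player both render and score it. Below is a minimal hand-written choice item and a toy scorer in Python; it illustrates the shape of the spec, and is not the item or code from the demo:

```python
import xml.etree.ElementTree as ET

# A minimal QTI 2.1 multiple-choice item, hand-written for illustration.
QTI_ITEM = """\
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="chapter1-q1" title="Check your understanding"
    adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
      baseType="identifier">
    <correctResponse><value>B</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true"
        maxChoices="1">
      <prompt>Which format are QTI 2.1 items expressed in?</prompt>
      <simpleChoice identifier="A">JSON</simpleChoice>
      <simpleChoice identifier="B">XML</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}
item = ET.fromstring(QTI_ITEM)

def score(candidate_choice: str) -> bool:
    """Compare a candidate's choice with the declared correct response."""
    correct = item.find(
        "qti:responseDeclaration/qti:correctResponse/qti:value", NS)
    return candidate_choice == correct.text

print(score("B"))  # True
```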

Inaugural Open Badges (Scotland) Working Group Meeting

Bill Clinton isn’t the only one creating a buzz about the open badges movement at the moment. Yesterday saw the first Scottish open badges working group meeting, albeit with slightly less coverage than the Clinton initiative.

Organised by Grainne Hamilton at RSC Scotland, following the success and interest shown at their recent Open Badges Design Day, the meeting was very well attended with a group of really enthusiastic practitioners from across the Scottish education sector, many of whom are already implementing badges. There was also good representation from key agencies such as the SQA and the Colleges Development Network.

What struck me about the meeting was how much real buy-in and activity there was around badges, from schools to colleges to universities. While there was a lot of diversity in approaches (most people implementing badges are still at the pilot stage), there were also a number of common themes for future development, including badges for staff development purposes and the sharing of implementations of “badging” through VLEs, in particular Moodle and Blackboard.

One of the great selling points of badges is their potential to bridge the gap between achievement and attainment of formal qualifications and give people (and in particular students) more opportunities to present things which aren’t recognised through formal qualifications. This was a prime motivator for many at the working group as they want to be able to allow students more ways to showcase/sell themselves to potential employers, and not have to rely on formal qualifications. This of course links to developments around e-portfolios.

There was also a lot of interest in using badges for staff development within colleges and universities. RSC Scotland is already paving the way in this respect, as they have developed a range of badges for their online courses and events, and a number of colleges are beginning to use badges for staff development activities.

Over the coming months a number of sub-groups will form around some of the key areas identified at yesterday’s meeting, setting up a shared workspace and, most importantly, sharing their work with each other, with the wider working group and with the rest of the community.

If yesterday afternoon was anything to go by, there will be lots more to share around the development and implementation of badges. I’m certainly looking forward to being part of this exciting new group, and thanks again to Grainne and Fionnuala and the RSC for bringing this group together and their commitment to supporting it over the coming year.

QTI 2.1 spec release helps spur over £250m of investment worldwide

With the QTI 2.1 specification finalised and released, we’re seeing significant global investment in tools that implement the spec. Tools developed by JISC projects have been central.

It has taken a while, but since March this year, IMS Question and Test Interoperability 2.1 has been released as a final specification. That means that people can implement it, secure in the knowledge that it won’t change or disappear, even if there are likely to be future versions.

The release, not coincidentally, happens at a time when there is a lot of activity regarding the use of the specification around the world. This level of investment isn’t just due to a set of documents on a website, it is also due to the fact that there is a range of working implementations available that demonstrate how QTI 2.1 works, and that’s where a couple of Jisc projects play a crucial role. But let’s have a look at what people are doing with the spec around the world first.

The Netherlands

The biggest assessment project in the Low Countries at the moment is the effort to move all online school exams to the QTI 2.1 format. The multi-million euro effort is led by the Commissie voor Examens, managed by DUO, with the CITO exam body and Trifork as contractors. Because of the specific demands put upon the whole infrastructure, the partners will need an extensive profile.

Accompanying the formal exam profile is the NL-QTI effort led by Kennisnet. This pragmatic but relatively rich profile of the specification is meant to facilitate an ecosystem of material and software for general use in schools. We should see more of that profile in the near future.

Lastly, SURF is currently running the Assessment and Assessment Driven Learning programme in higher education, which will revolve around a sharable infrastructure for online assessment. Part of that programme will be an exploration of the extent to which such sharing can be facilitated by QTI 2.1.

Germany

The main player here is the Onyx suite from BPS. This complete assessment suite of editor, test player, analytics module and converter is built around QTI 2.1, and has been used standalone as well as integrated with the OLAT VLE. One instance of the latter, shared between all 13 universities in Saxony, has about 50,000 users, with about 25,000 log-ins per day. Similar consortia exist in Thuringia and Rhineland-Palatinate, and there are further university-specific installations with a combined total of about 108,000 users. The hosted Onyx test player runs about 300–1,000 test runs a day.

France

The work in France is on a smaller scale, but is mature and well targeted. The MOCAH team at UPMC (Paris 6) has developed a system in which QTI 2.1 source is transformed so that it can run on generic Java or PHP web servers, as well as in specialised QTI players. The focus is on teaching maths to secondary school students, and it has been used in 160 classes, for which 400 patterns have been created. Patterns are question item templates that generate large numbers of items for students to practise on; a key requirement.
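
The pattern mechanism is easy to picture: a template fixes the structure of a question and randomises its parameters, so one pattern yields an effectively unlimited supply of practice items. A toy sketch of the idea (nothing to do with MOCAH’s actual implementation):

```python
import random

def linear_equation_item(rng: random.Random) -> dict:
    """Generate one practice item from a 'solve ax + b = c' template."""
    a = rng.randint(2, 12)
    x = rng.randint(1, 20)  # choose the answer first so it is always whole
    b = rng.randint(1, 50)
    c = a * x + b
    return {"prompt": f"Solve {a}x + {b} = {c} for x", "answer": x}

rng = random.Random(42)  # fixed seed, so the run is reproducible
for _ in range(3):
    item = linear_equation_item(rng)
    print(item["prompt"], "->", item["answer"])
```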

South Korea

After experiments in the past with, among other tools, QTI 2.1 generated from common word-processing tools, KERIS – the Korea Education and Research Information Service – is now engaging vendors in a project to integrate QTI 2.1 in EPUB 3 ebooks. Various options are being explored at the moment, with results due later this year.

USA

This is where the development-at-scale is taking place at the moment, thanks to the Race To The Top (RTTT) projects that were funded by the Obama administration. There are two state-led consortia – Smarter Balanced and PARCC – with a mission to overhaul the whole assessment infrastructure in schools, base it on open standards and open source software, and provide a tranche of new material to go with it. They had an initial budget of $160-170 million each, with about a third of those budgets intended for tool development. QTI 2.1, along with the Accessible Portable Item Protocol (APIP) extensions, is at the heart of the initiative.

The size of those consortia is having effects elsewhere too. One major educational publisher has already decided to standardise internally on QTI 2.1, and others are looking at the same option. Not that such a thing is new: organisations such as the Northwest Evaluation Association (NWEA) and the world’s largest testing organisation, ETS, have already chosen QTI 2.1 as their internal ‘lingua franca’. Rather than making many point-to-point integrations between their own systems and collections, and then having to do that again with each organisation they partner with, they translate each format to and from QTI.
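
The attraction of a lingua franca is easy to quantify: wiring n formats together pairwise needs a converter for every ordered pair, n × (n − 1) of them, while a hub format such as QTI needs only one converter in and one out per format, 2n in total. A back-of-the-envelope comparison:

```python
# Converters needed to connect n content formats: point-to-point wiring
# versus translating everything via one hub format ('lingua franca').
def point_to_point(n: int) -> int:
    return n * (n - 1)  # one converter per ordered pair of formats

def via_hub(n: int) -> int:
    return 2 * n        # one converter in and one out per format

for n in (4, 8, 16):
    print(f"{n} formats: {point_to_point(n)} pairwise vs {via_hub(n)} via a hub")
```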

UK

Meanwhile, back in the UK, JISC has sponsored a small community – most recently via the Assessment & Feedback programme – that has played a vital role in making QTI 2.1 real. ‘Real’ in the sense of checking whether and how the specification would work, as it was being designed, in the case of Jassess. ‘Real’ also in the sense of putting QTI 2.1 material in the hands of a range of teachers and learners, via editing tools such as Uniqurate and playback tools such as QTIWorks. An excellent RSC Scotland post outlines exactly how those outputs of the QTI-DI and Uniqurate projects work.

All of these UK projects’ tools, guidance and assessment materials are known to all the above communities, as well as plenty of others I’ve not even mentioned. In some cases, the JISC sponsored tools have been extended by others, in other cases, the presence and online accessibility of the resources meant that those other communities knew what was possible, what their own tools and materials should look like, and how they could interoperate.

At this point, it’s not clear whether new Jisc will support future work in this area. What is clear, however, is that JISC’s past investment will continue to have a global effect well beyond the initial outlay.

Assessment & Feedback tool development lessons

With most software development projects in the JISC Assessment & Feedback programme drawing to a close, it’s a good time to look at some common themes in their findings.

There’s a small but perfectly formed cluster of four projects in ‘strand C’ of the Assessment & Feedback programme. Strand C is the techy corner: these are the projects that took existing open source tools and adapted them for use in organisations beyond the ones they were developed in.

Within the strand, the tools that were being developed were:

  • Rogō, a complete assessment authoring, playback and management system, developed by the eponymous project at Nottingham University, and deployed in three other institutions
  • OpenMentor, a system that analyses tutor feedback on assignments, developed at the OU, now deployed in two other institutions by the OMTetra project
  • QTIWorks, a full-featured, QTI compliant assessment and test player, developed at Edinburgh University, now deployed by the QTI-DI project
  • Uniqurate, an online, QTI compliant assessment and test authoring tool developed at Kingston University by the eponymous project, and coupled to QTIWorks

Looking through their development experiences, a couple of themes seem to recur:

User interface complexity

What to do when one set of users needs something simple, and another set wants full access to all functions? The clearest example of that dilemma was presented to the Uniqurate project: there was an existing assessment item editor called Mathqurate that gave access to all aspects of many different question types, but was only really usable by experts, and an earlier version of Uniqurate that was very friendly, but also very limited. Which is why the current project aimed to become the “goldilocks editor”: offering a flexible but easily graspable set of item type modules, but also offering different modes that are accessible to more intrepid users.

The most advanced of these modes gives the user access to the QTI source code of a question, something that is also available in QTIWorks. Another, arguably more important, simple-versus-complex user interface issue that QTIWorks has to deal with is how to show runtime variables. For authors this is vital, but for candidates it is rather confusing, and often defeats the point of the assessment. The solution? As in Uniqurate: different modes for different audiences.

In OpenMentor the audience is broadly the same (tutors), but some wanted to know what goes on in the ‘black box’ that takes their feedback on assignments and categorises it into a well-known taxonomy, while others were just happy with the results. The likely solution is also to include an advanced mode in a future version of the tool.

Interoperating with other systems

Or: how do I get user information into my tool without asking those users to type it all in?

OpenMentor and Rogō went down the LDAP route, given that it is the most common way to distribute person information inside organisations. It worked for these tools too, though Rogō had to spend quite some time at one of the new sites to adapt the LDAP-to-Rogō mapping. Some assembly may be required, in other words.
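
For a flavour of what the LDAP route involves, here is a sketch using the Python ldap3 library. The host, credentials, base DN and attribute names are all invented, and that mapping from directory attributes to application fields is exactly the part that needed adjusting site by site:

```python
from ldap3 import ALL, Connection, Server

# Look up basic person details for a user, roughly as a tool might on
# first login. Every name here (host, bind DN, base DN, attributes) is
# hypothetical; real directories differ, which is where the effort goes.
server = Server("ldap.example.ac.uk", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=ac,dc=uk",
                  password="secret", auto_bind=True)

conn.search("ou=people,dc=example,dc=ac,dc=uk", "(uid=jbloggs)",
            attributes=["cn", "mail", "eduPersonAffiliation"])

for entry in conn.entries:
    print(entry.cn, entry.mail, entry.eduPersonAffiliation)
```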

Rogō and QTIWorks also implemented the much newer IMS Learning Tools Interoperability (LTI) specification. This specification is designed to allow more ad hoc connections between a VLE and tools such as those from the Assessment & Feedback programme. LTI is intended primarily to identify users, but it can also be used to move some user information from one system to another, particularly when those systems are in different organisations. This function is still evolving, though, as Rogō found when they looked for an external examiner role within LTI. They couldn’t find one when they implemented it, but LTI supports it now.
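
To give a sense of what an LTI (1.x) launch actually carries: it is an OAuth-signed form POST from the VLE to the tool, and once the signature has been verified the user information is available as ordinary form parameters. Here is a sketch of picking those out on the tool side; the parameter names come from the LTI 1.1 specification, while the values, and everything around the request handling, are invented:

```python
# Extract user details from the form parameters of a verified LTI 1.x
# launch. The parameter names are defined by the LTI 1.1 specification;
# most of the lis_* fields are optional and VLE-dependent.
def user_from_launch(params: dict) -> dict:
    return {
        "id": params["user_id"],  # opaque identifier, scoped to the VLE
        "name": params.get("lis_person_name_full"),
        "email": params.get("lis_person_contact_email_primary"),
        "roles": params.get("roles", "").split(","),  # e.g. Instructor
        "course": params.get("context_title"),
    }

launch = {  # illustrative values only
    "user_id": "4f1c-22ab",
    "roles": "Instructor",
    "lis_person_name_full": "Jo Bloggs",
    "lis_person_contact_email_primary": "j.bloggs@example.ac.uk",
    "context_title": "Critical Thinking 101",
}
print(user_from_launch(launch))
```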

Fostering a community

Because all four projects are open source, and because they were all meant to facilitate wider adoption, community building with users and other developers was paramount. It’s not easy, though.

Uniqurate noticed this particularly with regard to the use of agile software development methodologies, as outlined in their last blog post. Agile is generally advocated because it makes sure development happens in small steps that track what users actually want. Except that the users in this case were very busy academics: enthusiastic, but rarely available during term time. And a project is too short to easily work around that. Conclusion: sometimes other methodologies may work better.

The OMtetra project used workshops and surveys to engage their user community, which did work. Developer engagement might be a slightly different matter, however: there are three different public code repositories for OpenMentor, of differing degrees of currency. The branch developed during this project is the slightly, rather than the very, stale one. Whether all the developments have made it through to the latest branch is not clear. OpenMentor is still actively developed, however, and that’s the main thing.

For QTIWorks, the code and documentation are clearer, and with success: the code has been adopted by developers on one of the very large Race To The Top assessment projects in the United States. It has been used there to prototype some potentially revolutionary new functionality in interoperable assessment material, which is likely to become part of the QTI specification itself. Part of the success may also be due to the fact that, like Uniqurate, a demo version of QTIWorks is available online.

Both QTIWorks and Uniqurate have, however, been used for teaching and learning on a relatively limited scale compared to Rogō. As the Rogō project discovered, that can be a mixed blessing. Once courses start to rely on a system, the demand for support of all kinds increases exponentially, and that’s before Rogō is used widely for summative assessment. Sound user and installation documentation helps, but doesn’t resolve all the issues that other organisations may need help with, whether there’s a support business model in place or not. Also, the demands of other organisations inevitably lead to tensions with the priorities of the original developers. That’s manageable, but it requires thought and ongoing commitment.

Conclusion

It is a bit difficult to generalise across these four projects, much less across all open source software developments at universities. Yet it seems fairly clear that the main issue is community building: once the right number and mix of partners are on board, the other issues become more tractable. Fostering such communities is difficult, but it is something that an organisation like OSS Watch can help with, as Rogō has already been doing.

eTextBooks Europe

I went to a meeting for stakeholders interested in the eTernity (European textbook reusability networking and interoperability) initiative. The hope is that eTernity will become a project of the CEN Workshop on Learning Technologies, with the objective of gathering requirements and proposing a framework to provide European input to ongoing work on eTextBooks by ISO/IEC JTC 1/SC36 WG6 & WG4 (work which is currently based around Chinese and Korean specifications). Incidentally, as part of the ISO work there is a questionnaire asking for information that will be used to help decide what that standard should include. I would encourage anyone interested to fill it in.

The stakeholders present represented many perspectives from throughout Europe: publishers, publishing industry specification bodies (e.g. the IDPF, who own EPUB3, and DAISY), national bodies with some sort of remit for educational technology, and elearning specification and standardisation organisations. I gave a short presentation on the OER perspective.

Many issues were raised through the course of the day, including (in no particular order):

  • Interactive and multimedia content in eTextbooks
  • Accessibility of eTextbooks
  • eTextbooks shouldn’t be monolithic and immutable chunks of content; it should be possible to link directly to specific locations within them or to disaggregate the content
  • The lifecycle of an eTextbook. This goes beyond initial authoring and publishing
  • Quality assurance (of content and pedagogic approach)
  • Alignment with specific curricula
  • Personalization and adaptation to individual needs and requirements
  • The ability to describe the learning pathway embodied in an eTextbook, and either to vary the content used on that pathway or to provide different pathways through the same content
  • The ability to describe a range of IPR and licensing arrangements for the whole eTextbook and for its specific components
  • The ability to interact with learning systems with data flowing in both directions

If you’re thinking that sounds like a list of the educational technology issues that we have been busy with for the last decade or two, then I would agree with you. Furthermore, there is a decade or two’s worth of educational technology specs and standards that address these issues. Of course not all of those specs and standards are necessarily the right ones for now, and there are others that have more traction within digital publishing. EPUB3 was well represented in the meeting (DITA is the other publishing standard mentioned in the eTernity documentation, but no one was at the meeting to talk about that) and it doesn’t seem impossible to meet the educational requirements outlined in the meeting within the general EPUB3 framework. The question is which issues should be prioritised and how should they be addressed.

Of course a technical standard is only an enabler: it doesn’t in itself change teaching and learning; change will only happen if developers create tools and authors create resources that exploit the standard. For various reasons that hasn’t happened with some of the existing specs and standards. A technical standard can facilitate change, but there needs to be a will or a necessity to change in the first place. One thing that made me hopeful was a point made by Owen White of Pearson: he does not think of the business he is in as centred on content creation and publishing, but on education and learning, and that leads away from the view of eBooks as isolated, static aggregations.

For more information, keep an eye on the eTernity website.