Games animals play

Play is an important part of animal development, as with child development: animals learn to hunt and fight just as children learn to perform tasks and socialise.  And as with humans, animal play isn’t just limited to learning for future survival, but is a valuable part of day-to-day wellbeing.  Providing adequate mental stimulation and engagement is particularly important for captive animals, confined in relatively small environments where normal behaviour such as hunting is very limited, and with feeding and other activities subject to external schedules.

The TOUCH (Technology to Orangutans for Understanding and Communicating cross-species for greater Harmony) project, based at the Hong Kong Polytechnic University’s School of Design, is working on the design of digital systems to enable humans and orangutans to play games together – and, in particular, games where orangutans will almost certainly beat their human competitors.  Orangutans perform particularly well on games that, like pelmanism, rely on visual memory, and will almost invariably outperform any human challenger.

The Hong Kong orangutans aren’t the first to engage with computer games: Sumatran orangutans at Zoo Atlanta have been using them for several years as researchers attempt to understand their cognitive processes in order to help plan interventions to increase the survivability of the species in the wild.  Where the TOUCH project differs is in looking at games primarily as entertainment for non-humans, and as a focal point for enhancing cross-species communication and interaction.

In both projects, as in others, tangible rewards such as food or ‘social praise’ from their human playmates are provided to help train the animals to play within the rules or framework of the game, but many are content to continue playing even without such rewards: game play itself is ‘inherently rewarding’ for them.  Playing within the rules, or consciously transgressing them, is fundamental to a ludological view of games: constructing the fourth wall, and accepting that you can only go up ladders and down snakes, never up snakes or down ladders, is what gives play structure and meaning.  YouTube is full of wonderful clips of all kinds of animals interacting with digital games, but not playing in the sense of following rules; how much pleasure they actually get from them is also debatable.

Engaging cats in digital games, either solo or with a human partner, is the focus of Cat Cat Revolution, which is exploring the development of iPad games to enable this.  The project’s video, below, shows some varying results, but it’s clear that the game captures the attention and curiosity of the cats, in particular the youngest kitten in the study.  Similarly, iPad Game for Cats, a free game with additional paid-for levels, clearly provides great entertainment for cats of all sizes.  Unlike in TOUCH, where many of the orangutans were very happy to play purely for praise and interaction, the extent of the engagement between feline and human participants isn’t clear: while the humans are obviously getting a great deal of pleasure from playing with and watching their pets, the cats seem interested purely in the game, with the human interaction being incidental (but then, they are cats ;) ).


These studies are fascinating.  Positioning animals as digital gamers, and as knowing participants within multiplayer, multi-species games, can teach us so much more about them, about ourselves, and about the nature and universals of play.  Most of all, improving the welfare of captive animals, and potentially increasing their chances of survival in the wild through skills learned through digital play, would be the greatest outcomes of all.

Of course, like kids everywhere, sometimes it’s not the game but the box it came in that provides the most entertainment ;)

Technologies in use in the JISC Assessment and Feedback programme Strand B (evidence and evaluation)

The JISC Assessment and Feedback Programme is now in its fifth month, looking at a wide range of technological innovations around assessment and feedback in HE and FE.  Strand B is focused on the evaluation of earlier work, gathering and evaluating evidence on the impact of these innovations and producing guidelines and supporting material to facilitate their adoption in other subject areas and institutions.  These projects cover a broad range of technologies, but are not themselves involved in technological development; rather, they are examining and reporting on the impact of such developments.

The information here was gathered through fairly informal conversations with the projects, building on the information initially provided in their funding applications.  Information from these calls is added to our project database (PROD) – you can see some of the amazing uses this information can be put to in the series of blog posts Martin Hawksey has produced as part of his work on visualisations of the OER programme, as well as some of the work by my colleagues David, Sheila and Wilbert.

This blog post is rather less ambitious than their work (!), and is intended to provide a quick snapshot of technologies that projects in this specific programme strand are finding valuable for their work.  For more information on the projects in general you can find all my posts on this programme linked from here.

Underlying technologies

Although the underlying technologies – that is, the technologies used by the innovations they’re evaluating – aren’t the direct focus of these projects, I’ve included them as they’re obviously of interest.  They also show the very broad range of approaches and methods being evaluated across these projects.

Several of the projects expressed a strong desire to reuse existing tools and resources, such as MS Office and other commercial software, rather than reinvent the wheel by developing new software.  There were also very compelling reasons to do so: the cost of staff training for new systems, staff familiarity and comfort with existing systems, and strong pressure from staff, students and management to work within institutional VLEs.



The purposes and technologies mentioned were:

- Feedback delivery: MS Word (annotated markup documents); eTMA (electronic tutor marked assignment) system; assignment timetables (diaries); MS Access
- Online marking
- Student generation of assessment content, with social network functionality supported
- Plagiarism detection: Blackboard SafeAssign
- Bug reporting: Pivotal Tracker
- Surface tables to improve the online marking process
- Pen devices to improve the online marking process
- Managing self-reflection workflow: online learning diary
- Automated writing technique evaluation tool: Turnitin eRater
- Communication with students, course news, deadline reminders
- Peer assessment tool
- Centralised email account, blog and microblog for managing assignment submissions and communicating with students and staff
- Communication with students: blog for discussion of common Q&As and general assignment feedback; Adobe Connect


Evidence gathering

As these projects are about collecting and evaluating evidence, the approaches taken to this are of obvious interest.

There was a strong emphasis on interviewing as the main approach, with audio and video interviews being recorded for subsequent analysis and dissemination where appropriate approval has been given.  Jing was the main recording system cited for this.  Surveys (which can be considered a kind of asynchronous interview) were also mentioned, with SurveyMonkey being the tool of choice for these.

Less structured impressions were also sought, with Jing again being cited as a valuable tool for capturing staff and student feedback.  Twitter was also mentioned for this purpose.

Evidence analysis

The emphasis of this strand is on qualitative rather than quantitative outcomes, with users’ experiences, case studies and the development of guidance documents and staff development resources being the main focus.

NVivo was cited as the tool of choice for the transcription and coding of audio and written feedback for subsequent analysis.  Collaborative writing, analysis and version control are the main concerns for this part of the projects, and are being addressed through the use of Google Docs and SharePoint.

Standards referenced

The standards used by projects in this programme are fairly generic.  Most of these projects are not using standards such as those produced by IMS, which were felt to be not really relevant to this level of work.  One project, however, was looking at IMS Learning Tools Interoperability as an approach to integrating its software development with the different VLEs in use by institutions within its consortium.  Beyond this, the standards referenced were unremarkable: primarily MP3 and HTML.


Dissemination

All the projects have thorough dissemination plans in place to ensure that their findings are shared as widely as possible, and a wide range of technologies is being used to reach as broad an audience as possible.  It was great to see that all the projects referenced the JISC Design Studio, a fantastic resource that is well worth digging around in.  Again, there is a clear mix of established proprietary software and free services, reflecting the range of technologies in use within institutions and the different institutional contexts of these projects.



The dissemination approaches and technologies mentioned were:

- Recording seminars
- Publishing videos
- JISC Design Studio
- Guidance documents
- Peer-reviewed publications
- Project website
- Elluminate Live: dissemination and community building
- Case studies
- MS Office Communicator (now Lync): dissemination and community building
- Google Docs: sharing stable versions
- Screen capture: staff development
- Project blog
- Conference attendance