My CETIS posts now live on my self-hosted blog; if you are interested in education posts, I have grouped them together here.
Those who subscribe to the CETIS newsletter receive the top posts of the month, ranked by the total number of views they’ve had. While it’s nice to see what our audience finds interesting, sometimes we have our own personal favourites that we’d like to share. I asked CETIS staff if they wanted to share any of the blog posts they had written in 2012 that they were fond of, and why.
My favourite post of 2012 is “The MOOC just got even better“, which mentions some of my reflections on taking Stanford’s HCI MOOC over the autumn semester. There has been a lot of MOOC-bashing lately and whilst they’re not perfect (in fact Coursera has only been around for a year), from a student’s point of view they’re a great way to access free education from reputable institutions. Sometimes I think we’re so busy looking at the technology or the process, that we forget about the student. To be involved in something at the beginning that will undoubtedly mature and change is exciting and I look forward to seeing how MOOCs will evolve.
For this year, I choose not a post that has received comments, but one that has not: Follower guidance idea.
I had first used the term “follower guidance” in a CETIS e-mail in May 2011, so it had brewed in my mind for over a year before this exposition. I think we (in CETIS/IEC) should be doing this kind of exposition of vision, whether or not it is immediately recognised or responded to. In this case, I have to accept that people have not yet digested the idea enough to comment on it, though when I explain it face-to-face it seems to be understood and appreciated at some level. So I offer this post as a hope for the future — maybe it will be referred to by others as the ideas come to make more sense.
The first one is “a conversation around what it means to be a digital university”. This was a personal favourite as it was also a staff development activity for me, allowing me to co-author some thoughts with my Strathclyde Uni colleague Bill Johnston around strategic aspects of becoming a digital university. We’ve had very positive feedback, a conference presentation and a couple of papers in the pipeline from this. We’ve also been approached by Napier University to be critical friends over the next year as they develop their digital strategy.
The second one is one of those posts that I kind of wrote off the cuff: “learning analytics, where do you stand?”. It was really useful to reflect on a presentation from Gardner Campbell about learning analytics, and I got quite a few comments, which is always good. The post also helped to set the scene for our work on analytics this year, which culminated in the CETIS Analytics Series.
Notes on technology behind cMOOCs: Show me your aggregation architecture and I’ll show you mine
This post started as a simple analysis of the infrastructure around MOOCs, but as I wrestled with the text a couple of revelations emerged.
Focusing on the connectivist style of courses, it’s evident that instructors are picking up the tools around them to manage courses. Because, as Downes commented, ‘users are assumed to be outside the system for the most part, inhabiting their own spaces’, aggregation of data becomes key. Even when you deal with well-established technologies like blogs, there are interoperability issues with extracting data like categories/tags and user comments. One of the key challenges in moving connectivist-style MOOCs forward is developing tools that can effectively aggregate data from a range of sources and provide actionable insights for both tutors and students. This post highlights some current work and possible future directions.
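To make the aggregation challenge concrete, here is a minimal sketch of extracting the fields an aggregator typically wants (title, link, category/tags) from an RSS feed, using only Python’s standard library. The sample feed and its element names are illustrative assumptions; real platforms vary in how, or whether, they expose tags and comments:

```python
import xml.etree.ElementTree as ET

# Illustrative sample feed; real blog platforms differ in structure.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example course blog</title>
    <item>
      <title>Week 1 reflections</title>
      <link>http://example.org/week-1</link>
      <category>mooc</category>
      <category>week1</category>
    </item>
  </channel>
</rss>"""

def extract_items(rss_text):
    """Pull title, link and category/tag data out of each feed item."""
    root = ET.fromstring(rss_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            # Tag markup is one of the interoperability pain points:
            # some platforms emit <category>, others use namespaced
            # elements or omit tags (and comments) entirely.
            "tags": [c.text for c in item.findall("category")],
        })
    return items
```

An aggregator for a cMOOC would run something like this over each participant’s feed and merge the results; the hard part, as argued above, is that every platform needs its own handling.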
Do you git it?: Open educational resources/practices meets software version control
Software version control tools like Git have long provided software developers with a space to work collaboratively on projects, providing an easy way to track, contribute to and modify code even when offline. Given features like remixing and branching existing material, you’d think it would make the ideal repository for open educational resources (OER). This solution is not without its issues, such as confusing terminology and very structured workflows, but it’s interesting to see non-coders adopt Git as a place to host their content. This post highlights some existing examples, like open bid writing, music and course content, and asks whether we should be Gitting OER.
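The remix-by-branching idea can be sketched in a few lines. This is an illustration only, assuming the git command is on your PATH; lesson.md is a stand-in for any OER file, and the throwaway user.name/user.email config is just so the commits succeed:

```python
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the repo, with a throwaway commit identity."""
    subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.org",
         *args],
        cwd=cwd, check=True, capture_output=True,
    )

repo = pathlib.Path(tempfile.mkdtemp())
git("init", cwd=repo)

lesson = repo / "lesson.md"
lesson.write_text("# Week 1\n\nOriginal course material.\n")
git("add", "lesson.md", cwd=repo)
git("commit", "-m", "Publish original OER", cwd=repo)

# "Remixing": branch, adapt the material, commit. The original is untouched
# and the history of who changed what is kept automatically.
git("checkout", "-b", "remix", cwd=repo)
lesson.write_text("# Week 1 (remixed)\n\nAdapted for my own course.\n")
git("commit", "-am", "Remix for local context", cwd=repo)

git("checkout", "-", cwd=repo)  # back to the original branch
```

This is exactly the workflow that trips up non-coders: the mechanics are simple, but the terminology (branch, checkout, commit) is anything but obvious.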
Not so much because of what is in the blog post, but because of the work it represents, which illustrates how CETIS can spot an innovation that looks interesting and work with Jisc and Jisc services to trial it in a UK F&HE context.
Since Phil has already chosen his JLeRN Experiment blog post, I’m going to choose this: OER Booksprints Reflections
I’ve chosen this post as undertaking a booksprint to synthesise, record and disseminate the technical outputs and issues surfaced by Jisc programmes was an entirely new approach for both Jisc and CETIS. A booksprint is essentially an accelerated, facilitated writing retreat, and our aim was to draw together the significant technical outputs of three years of the JISC / HEA Open Educational Resources Programmes, reflect on issues that arose and identify future directions. I think it’s fair to say that we all approached the task with some trepidation and perhaps even a little scepticism, but we were all greatly surprised and encouraged by the result: “Technology for open educational resources – Into the Wild”, which will be available to download as a free ebook, or to print on demand, in the new year. On reflection, we all agreed that this was a very effective way to synthesise the complex outputs of the programmes and I would certainly recommend this approach to others. And who knows, we might even plan another sprint in the new year!
Whether you’re trying to flog wares, advertise consultancy skills or simply have a big ego, it seems that we all want to abuse social media for a cause. I find it interesting that we all choose to use different social media services, and I guess that our strategy depends on many factors, such as what it is we are sharing, the audiences we are trying to reach and, ultimately, the size of the social media service’s user base.
Yesterday Obama claimed the most popular Twitter and Facebook posts of all time, with over 640,000 retweets and 3 million likes of a picture of him and his wife, and it occurred to me that the two huge social media strategies of the U.S. presidential candidates would now be winding down.
With my social media strategy restricted to posting on Google Sites and this very blog, I thought it would be interesting to poke about and reflect on how the candidates ran theirs. I found that much analysis of both Obama’s and Romney’s campaigns already exists; here are some things I found interesting:
Email still plays a huge role
Ed Hallen did an analysis of both candidates’ email campaigns. The strategies of both candidates are quite complex, but it is clear that email plays a huge role, and there seem to be some important themes to the strategy.
• It matters who in the organisation sends the message.
Both campaigns reserved emails that appeared to come from the candidates themselves for the more urgent messages. Other emails came from the VP candidate or spouse.
• Subject matters
Emails from the Democrat camp often had punchy subjects containing a semicolon, which Ed claims was a tested way of making people more likely to read the message. The Republican camp, on the other hand, used relaxed one-word subjects, ‘Hey’ being the most common.
• Know your audience
Having signed up to the lists, both camps knew that they were preaching to the converted. It seems the lists were more likely to be used for issues such as fundraising than for trying to win new votes.
Obama on Reddit
Both sides had the obvious online presence: Twitter, Facebook, YouTube, Google+ and LinkedIn, but I was shocked to see Obama do an ‘Ask Me Anything’ on my favourite news aggregation site two months ago; in retrospect, though, Reddit is the perfect choice.
• Targeting an audience on the edge
Reddit often comes under attack for suffering from groupthink: popular opinions are voted to the top while disagreements are voted down and often deleted. The demographic of Reddit in the U.S. is young males who lean towards the Democrats. This meant Obama could essentially reach out to an audience who already supported him but were in a demographic unlikely to turn out and vote. The groupthink mentality would mean that arguments against his replies would not float to the top (plus his reference to internet memes gave him geek cred – Not Bad!).
Social sites create their own analysis dashboards
While Twitter was an obvious choice for both candidates, I found it interesting that Twitter worked with Topsy to create a dashboard to mine itself for data on the candidates and the topics surrounding them. I felt this was a sign that the service knows just how important the data it holds is, and a clear message to the world that if you want to win your cause you have to try and play its game. Twitter wasn’t the only service doing this; Microsoft’s example can be found here.
Mitt Romney on 4chan
When I purchase an item I like having a physical thing to hold and show for my purchase. There is something about my physical CD collection that my digital one does not capture; is it all those colourful cases, the fancy sleeve artwork, the smell of a new CD, or am I just a hoarder? My addiction to the physical means I am often late to the party where purchasing digital versions of media content is concerned. When I do finally cave in and opt for my first digital taste of something, I remember what it was and the exact reason I opted for digital over physical. My first digital CD was a limited print, only sold in physical form at U.S. gigs. My first digital games came from a ‘pay what you want’ indie charity bundle.
Yesterday I purchased a physical book called Getting Started with Dwarf Fortress. For those that don’t know, Dwarf Fortress is the 2nd greatest game ever (fact) and is free to download and play. It’s a very complex city-building game where you manage a bunch of fortress-building dwarfs while coping with many dangers such as goblins, vampire dwarfs, lack of beer and ‘the occasional rampant megabeast’. It’s a hard game with a steep learning curve and ASCII graphics. My favourite thing about it is the sheer number of things in the game. The flowchart below is a community creation showing a beginner what should be done in order to get started:
The game is constantly updated and as a result this flowchart gets bigger and bigger!
The problem with a physical book on a subject like Dwarf Fortress is that it can date quickly. While the book will be a great help for me in turning Atolkol into a successful dwarf fortress, I wonder how useful it will be two or three game updates down the line. Will the book have a short shelf life, with its subject being constantly updated? Reading the back of the book I spotted this about free updates:
I headed over to forums and the author had this to say on the matter:
“Yes – it isn’t well explained on O’Reilly’s site, but O’R ebook customers will be alerted when the book is updated and able to grab a new copy. Other owners can, if they wish, “register” their book with O’R for $5 and also get the updates.
Print book will always be current to the version bought at the time of purchase. The current version is, basically the May releases of DF – so, exceptionally current“
This is a new concept to me; it could be common practice that I haven’t noticed, as I haven’t purchased an ebook before. My shelf is full of textbooks on the same subject, not because I particularly love Java but because updates to it render old books useless. I like the idea of ‘patching’ books: that a book can evolve along with its subject matter.
I checked out the ebook, and not only does it get updates, it is full of beautiful colourful pictures that don’t work very well in the black-and-white book. So now I have another digital media content first: my first ebook.
For as long as I have been a web developer with CETIS, we have relied on analysing server logs to give an indication of traffic sources and visitor trends. This approach existed long before I joined CETIS and seemed like a logical way of doing things: CETIS has had many web servers, and many different developers have installed different tools and resources, but since they were all using the same servers and producing the same style of logs, it has been a reasonable method of producing comparable stats.
While this method of collecting stats has stayed the same over the life of CETIS, the direction of CETIS and the environment that it finds itself in has changed over time and a need for a new strategy has become apparent.
Challenges from JISC CETIS and the environment
- JISC CETIS is more distributed from a technical point of view
Historically CETIS has had access to physical servers that sat in a server room somewhere in a university. A recession and shifts in university policies later, that abundance of resources is no longer available. While lots of external providers are happy to help you produce a flexible service and tie you into their hosting packages, this does raise issues. Do we have access to server logs? Are the logs the same? If not, are the stats produced similar to those from the stats package we use? Can we even produce stats?
Similarly, JISC CETIS is moving away from bespoke code when there are popular services that do the same thing, and this raises similar questions. What stats do the services produce? Are they comparable with other services? Is there an API, and will we have to pay to access what we’ve collected down the line?
- JISC CETIS is more distributed from a people point of view
Staff in JISC CETIS are technologically savvy and we all have our own opinions on the services and techniques we like. While I think it is a good thing to have such a technically diverse organisation trying new and exciting things, it is also a problem from a stats analysis point of view. Are staff hosting their blogs, events and resources on cloud services, and if so, how do we measure the use of these resources?
- A call for more sophisticated analytics
- Google Analytics is more intelligent when it comes to what is and isn’t a visitor or a bot
- The hacks for Google Analytics to track binary files and RSS are not very good.
A hybrid solution
Finally, I think that as organisations become more distributed and stats become more personal, a web analytics strategy becomes more of an individual responsibility. I’m not quite sure what an effective strategy would look like, in which analysis of trends in individuals’ resources helps to steer the organisation as a whole.
More to come…
Over the past few months I have developed an interest in agent-based modelling using tools such as NetLogo, RePast or Swarm. These tools, combined with increases in processing power, make it incredibly easy to get started, and they soon sucked me in.
Agent-based models are computational models used to make predictions about the interactions of agents in a system and how these interactions may affect the system as a whole. Quite often the systems being modelled are ones where simple, small interactions at a low level have a huge effect on the overall system at a higher level, such as how greenhouse gases blocking infrared light might affect global temperature (have a play with the model by Lisa Schultz here!).
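The low-level-interactions-to-high-level-behaviour idea is easy to demonstrate in a few lines of code. Here is a minimal hand-rolled sketch, nothing to do with any particular NetLogo model; the threshold rule and ring topology are my own illustrative assumptions. Agents adopt an innovation once enough of their neighbours have:

```python
import random

def step(agents, threshold=0.3):
    """One tick: a non-adopter adopts if enough neighbours (ring topology) have."""
    new = agents[:]
    n = len(agents)
    for i in range(n):
        if not agents[i]:
            neighbours = [agents[(i - 1) % n], agents[(i + 1) % n]]
            if sum(neighbours) / len(neighbours) >= threshold:
                new[i] = True
    return new

def run(n=50, seeds=3, ticks=25, rng=None):
    """Seed a few adopters at random, then watch adoption spread."""
    rng = rng or random.Random(42)  # fixed seed so runs are repeatable
    agents = [False] * n
    for i in rng.sample(range(n), seeds):
        agents[i] = True
    history = [sum(agents)]
    for _ in range(ticks):
        agents = step(agents)
        history.append(sum(agents))
    return history
```

Even this toy model shows emergence: the system-level adoption curve falls out of a purely local rule, with no global coordination anywhere in the code, which is the same kind of effect the greenhouse-gas model illustrates on a grander scale.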
Playing with these models got me wondering about the possibilities of modelling the interactions of agents within educational institutions, and how we could use these techniques to explain the emergence of behaviour; at the same time, I worried about how we would validate these models without ‘hard data’.
Here at the University of Bolton we have recently switched VLE to Moodle and it appeared to me that what could seem like a simple process of ‘changing the VLE’ was actually made up of very complex communications and interactions between the staff based here. Using this as a starting point I got together with a colleague and started to create a model that explained how we thought the communications within the University might look and how these communications could be disrupted or improved using technology.
At the 2011 CAL Conference in Manchester my colleague Mark Johnson presented the model as a way of explaining how we thought technological interventions could be used to change communications, and how this might affect how the institution works as a whole.
The model showed the different types of communications between certain groups of people and how these communications could change when people were placed in different social situations or when technological interventions were made.
The response from the audience was great: they did not worry about the validity of the model itself, but seemed to find the visual representation of how we thought technical intervention might change communication useful. The reaction at the session made me realise that a powerful aspect of agent-based modelling might simply be the ability to demonstrate what your view of a problem is.
The Village Pump is an information hub for activities relating to the Flexible Service Design programme. The hub is a go-to place for FSD-related articles, events and contact information. Articles in the hub are aggregated from sources using their RSS feeds; currently we gather information from sources such as blogs, JISCMAIL lists and forums.
We would like to populate the hub with as much useful information as possible, so if you have any suggestions on sources you would like to see aggregated into the Village Pump, or if you are running a project blog and would like its posts to be aggregated, please comment on this entry with the information and RSS feed.
The Cloudworks team have been developing an API for Cloudworks to allow developers to create their own visualisations, programs and mashups. The API currently supports calls to get data, and I found this a great opportunity to test different ways of visualising and organising the resources within Cloudworks for the Design Bash 2010. Sheila MacNeill has created a cloud to store and discuss Cloudworks API tests; my test demos can be found there, and the Cloudworks team are encouraging developers to get stuck in and have a go.
If you would like to play with the API yourself and add your own demos, you will need to get an API key by signing up to Cloudworks and contacting Nick Freear. There is some excellent documentation on using the API already made available by the Cloudworks team, and Nick was happy to provide example code; his PHP example can be seen here (don’t forget to change the user agent/API key/cloud id):
Edit: Removed the code since the best place to grab a PHP example is Nick’s snippler.
This should get you up and running and is pretty self-explanatory. I hope to turn my demos into something more useful and post some specific examples of things that can be achieved using the Cloudworks API.
Digital distribution is everywhere; applications such as iTunes provide the ability for digital products such as MP3s, movies and computer software to be delivered to audiences over the Internet instead of on physical media such as CDs, DVDs or Blu-ray. They provide easy and direct sales to a global market. With iTunes and the App Store, Apple may be the company that comes to mind when digital distribution is discussed, but it shouldn’t be forgotten that plenty of video game consumers have been using these systems for years, and recent announcements at this year’s Game Developers Conference 2009 have really shown that there are plenty more exciting developments to come.
Over the past 5 years gaming has seen a massive rise in digital distribution systems; many customers have been more than willing to make the switch from obtaining a physical copy of computer game software from a ‘bricks and mortar’ shop to downloading it through distribution systems such as Steam, Impulse, Xbox Live Arcade and PSN. Some of the advantages distribution systems have over their conventional counterparts include:
- Instant user feedback
- Anti-cheating Systems for online games
- Auto patching
- Downloading purchased-content from any location
For myself the big draw was (and still is!) the last bullet point. The idea that once I bought a game I could download it as many times as I wanted; even if I buy my product from a ‘real’ shop, the first thing I do is enter the serial code into a system as a backup, just in case it gets lost/snapped/broken.
Since I started using such systems when Steam first launched in 2003, there have always been two questions for me. The first is how long it will be before we no longer need to download the game. When can we stop buying into the expensive CPUs, GPUs and PPUs that games require and let all the processing be done server side? A recent announcement this week and ongoing work by Valve suggest it might be closer than we think.
The second was how digital distribution systems could move into different markets. In the UK most gamers will have a broadband connection and a 7th-generation console (or PC), since we are constantly after that new game and are an easy target for publishers; but what about markets that don’t buy into the latest consoles and games? Brazil has a massive gaming market, but one quite different from the situation in the UK, with older consoles such as the Master System still seeing re-releases as late as 2006. Another exciting announcement at GDC 2009 saw a console designed exactly for such markets, pushing digital distribution as its method of selling games.
Moving into the Cloud
This week at the Game Developers Conference 2009, OnLive was announced: a game distribution system that promises to take the load away from your computer and into the cloud, allowing resource-hungry games to play on modest hardware. The OnLive developers maintain that the main bottleneck is bandwidth, with lower-bandwidth users simply being met with a smaller screen resolution. This is really exciting news for gamers; does it mean that digital distribution and cloud computing will kill the console/PC spec war? Will we no longer go through generation after generation of video game consoles?
Although it would seem that Valve don’t think we are ready for such a radical shift, they are still moving in a similar direction with their product ‘Steam Cloud’. Although Steam Cloud still delivers the game to the end user via a full download, the idea is that variable data such as save games and settings are stored in the cloud, meaning users can log on from any terminal with the game installed and carry on from where they left off.
Expanding the Market
Tectoy announced they would be attempting to push digital distribution into ‘The Next Billion’ market by creating a console that will sell and distribute games through 3G or EDGE networks, using a virtual currency not unlike Microsoft Points or Wii Points. There seems to be no shortage of publishers wanting their games on the system, and a quick scan of the games that will be available (Crash Nitro Kart, Quake and Sonic Adventure, to name a few) makes it seem that these publishers are eager to bring their old games to new markets, with digital distribution being the ideal means to do so. I find it fascinating that the company has decided to enter these markets via the digital distribution route.
It is no secret that there is a huge amount of money in games, and this is the driving force behind these incredible innovations. As always, though, the technology will filter down and hopefully we will see it in other areas. Could application processing in the cloud mean an end to the PC CPU/GPU spec wars, forcing PC manufacturers to focus their initiative in other areas? Will it mean that high-end programs will be able to run on your mobile phone, with a small client purchase simply being made over 3G/WiMAX etc.? I’m sure there are plenty of exciting discussions to be had at future CETIS Cloud Computing and Institutions Working Group meetings!