Real-Time Communication through your Browser

This is a nice follow-on to my previous post regarding the web and the work of the W3C. As we’ve seen, the web and its technologies have been evolving and growing more powerful, and while some will still dismiss the growing relevance of the web (and its friendly neighbourhood viewing window, the browser) in a world of apps apps apps, the W3C continues to push its capabilities forward.

So step forward the newly formed Web Real-Time Communications Working Group. The mission of the group is to define a set of client-side APIs to enable real-time communications through the browser…video, audio, no plug-ins or downloads. The Charter page also states “supplementary real-time communication” so we’re also looking at screen sharing – or at least ‘browser window sharing’ – I think I’d be safe in saying.

One of the great things about this – imho – is that the working group will be looking closely at device APIs and pushing that work forward, which, along with the DAP (Device APIs & Policy WG), should hopefully propel the development of APIs for device capabilities such as camera and microphone access and the whole area of media capture and streaming. I then automatically think of the mobile space…mobile web apps for video chat, anyone? :)
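Nothing is standardised yet, of course, but to give a flavour of the kind of client-side API the group is chartered to define, a camera-access call might end up looking something like this. All names here are hypothetical – the WG hasn’t published any API drafts at the time of writing, so treat this purely as a sketch:

```html
<!-- Hypothetical sketch only: no API names have been standardised yet -->
<video id="self-view" autoplay></video>
<script>
  // Ask the browser for camera and microphone access (API shape assumed)
  navigator.getUserMedia(
    { video: true, audio: true },
    function (stream) {
      // Show the local camera feed in the <video> element above;
      // a full app would also hand the stream to a peer connection
      document.getElementById('self-view').src = stream;
    },
    function (error) {
      alert('Media access denied or unavailable');
    }
  );
</script>
```

The interesting bit is what’s *not* there: no plug-in object, no download, just a script asking the browser for the device’s media and wiring it into the page.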

The working group’s timescale looks at getting its first Recommendations out toward the end of next year.

Want to see it in action? Well, Ericsson Labs (who are co-chairing the working group) rather kindly produced a video demo – Beyond HTML5: Peer-2-peer conversational video in HTML5. It is below…for your viewing pleasure. You can also read their accompanying blog post at https://labs.ericsson.com/developer-community/blog/beyond-html5-peer-peer-conversational-video

[youtube]http://www.youtube.com/watch?v=kM2EFWpTWc8&feature=player_embedded[/youtube]

Mobile Web Apps: A Briefing Paper

I’ve recently written a JISC CETIS briefing paper on the topic of Mobile Web Apps.

Mobile Web Apps: A Briefing Paper

With the growth and constant shift in the mobile space, institutions could be forgiven for feeling a little lost as to how best to tackle the issue of delivering content and/or services that are optimised for mobile devices. Apple, Android, Blackberry, Windows Phone…app ecosystems seemingly everywhere you turn, and each requiring different development approaches: SDKs, programming languages, approval processes and terms & conditions. I think it’s fair to say that for institutions looking to deliver to mobile devices while being as inclusive as possible, this area is something of a minefield.

A viable alternative approach is developing mobile apps using open web technologies and standards; technologies that continue to improve in performance and offer more powerful functionality – as is now being talked about quite a bit under the banner of HTML5.

The briefing paper is intended to give an overview of this space and cover some of the key talking points, with a collection of useful resources with which to delve deeper into the subject for those who decide that mobile web apps are indeed a workable solution for them. I’m hoping that an interested audience would consist of institutional web staff, student services, learning technologists, maybe even an IT services manager here and there :)

It’s in PDF format but I’ll also be looking to get it in web form on the CETIS website over the next few days and, of course, I’d welcome any feedback and questions on it here.

If you’re interested, get it at http://wiki.cetis.org.uk/images/7/76/Mobile_Web_Apps.pdf

Georgia Tech releases open standards mobile AR browser

Argon is a mobile Augmented Reality (AR) Browser for the iPhone. From the website:

Argon is the completely open standards augmented reality browser that allows rapid development and deployment of Web 2.0 style augmented reality content.

Argon renders a standards-compliant combination of KML, HTML, CSS and JavaScript served via typical HTTP servers.

Multiple simultaneous channels, analogous to browser tabs on the desktop, let authors create dynamic and interactive AR content using existing web development toolsets.

The browser is stated as being the reference implementation of Georgia Tech’s work on the KHARMA Mobile AR Architecture, which combines HTML for content with KML for defining geographical co-ordinates (as used by Google Maps, Google Earth & Yahoo Maps).

Argon Mobile AR Browser

One thing that seems to counter-balance this standards flag-bearing though (for me, at least) is the fact that Argon is only available on the iPhone – in fact, the developers go so far as to specify that it is best run on the latest version, the iPhone 4. Hopefully that will change over time and we’ll see versions for the other popular mobile platforms too: the ever-growing Android and the recently adrenaline-injected Windows Phone 7. After all, it would seem a little odd lauding the open standards route while being restricted to a single delivery platform.

But there’s plenty of growing room in the still young AR space. With the technology making a significant appearance in this year’s Horizon Report – given a ‘Time-to-adoption’ period of 2-3 years, and us already seeing mobile augmented reality being implemented at Exeter Uni on their JISC LTIG Project: Unlocking the Hidden Curriculum, it’s good to see a new offering in this area to possibly compete with the current big players: Layar, Wikitude & Junaio.

My wish? My wish is that we could see something like Argon develop into a platform for AR developers, built on open standards, that would be supported by those players and open up the AR space to easily create interactive and immersive mobile AR experiences & content that you could then deploy cross-browser. Like I say though…early days yet. Hopefully we’ll see it happen.

Oh..one more thing…I have installed Argon on my (now lowly) iPhone 3GS and while the browser looks pretty standard fare – channel view, map, search, etc – unfortunately it seems there are absolutely no POIs (Points of Interest) nearby and the search for local channels isn’t yet implemented. So, as yet, it’s a bit difficult to get a handle on whether Argon would float my boat. Next up I shall go and check out the developer’s area and have a look at creating my own POIs and content. I’ll let you know how I get on…

The Argon browser can be found at http://argon.gatech.edu/

*** Update ***

There are POIs available nearby – I just hadn’t looked at the getting started tutorial properly (I know…I’m one of those blokes that doesn’t read the manual). I’m liking the search box in the realview but the POI icon itself is a bit flaky and judders about a bit too much – I suspect their recommendation of using the iPhone 4 is down to its gyroscope (which the 3GS doesn’t have) helping with that. But as you can see from the screenshot, it does the basics and I would imagine one can customise the look with your own CSS. Now…let’s hope their documentation is clear and helpful and not simply written by some Tefal-headed genii in a Georgia Tech lab…

Screenshot of Argon AR Browser

Mobile Web Roundup

The Mobile Web

Well…I’ve been travelling around the interweb, reading – or simply adding to Instapaper for later and trying to get round to reading – lots of lovely articles, blog posts and suchlike on the current happenings around the Mobile Web. As you’ll well know (seeing as you’re reading this) the Mobile Web is a hot topic at the moment, so I thought I’d highlight some of the things I’m reading up on right now.

The Opera Mobile Web Optimisation Guide

The guys at Opera are superb when it comes to talking and teaching about web development techniques and the current state of the web. I’ve enjoyed listening and talking to both Patrick Lauke and Bruce Lawson in the last few months, and Bruce has now built his talk on this into a handy guide, available on the Opera Developers website. Bruce covers the options available when looking to deliver your content to mobile devices and gives loads of really useful advice and tips on stuff to do and stuff to avoid. He also delivers a really nice outline of why CSS Media Queries are so powerful: they can help you build mobile-aware, adaptive websites that don’t have to sniff which browser the content is being delivered to, but instead query the characteristics of the device itself (think “display resolution”). I strongly encourage you to check this guide out if you haven’t before.

http://dev.opera.com/articles/view/the-mobile-web-optimization-guide/

Combining meta viewport and media queries

Following on with the CSS Media Queries angle, this article on quirksmode gives you a full walkthrough of how to combine <meta name="viewport" content="width=device-width"> with media queries to enable your website to resize to fit any display. It tells you what these do, why you should use them and gives the whole technique, with helpful screenshots. Excellent.

http://www.quirksmode.org/blog/archives/2010/09/combining_meta.html
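Putting the viewport meta element and a media query together, a minimal page might look like this (the 480px breakpoint and the element names are just example values):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Tell mobile browsers to use the device width rather than a zoomed-out desktop canvas -->
  <meta name="viewport" content="width=device-width">
  <style>
    /* Default (desktop) layout: two columns */
    #sidebar { float: left; width: 25%; }
    #content { float: left; width: 75%; }

    /* On narrow displays, stack the columns instead */
    @media screen and (max-width: 480px) {
      #sidebar, #content { float: none; width: 100%; }
    }
  </style>
</head>
<body>
  <div id="sidebar">Navigation</div>
  <div id="content">Main content</div>
</body>
</html>
```

Same URL, same markup, but the layout adapts to the device it lands on – no browser sniffing required.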

Rethinking the Mobile Web

This is a truly great set of slides that Bryan Rieger of Yiibu recently delivered at the Over The Air event at Imperial College, London. Here Bryan gives us an outline of device usage, mobile browsers and – echoing Patrick’s & Bruce’s message – the options available to us when it comes to making mobile-friendly websites (and apps – we can’t ignore those). Bryan puts a damn good point across that maybe we should design our sites for mobile devices FIRST, then add in the capability for the site to adapt for desktop. It’s a different way of approaching the whole creation process and I’m really into that way of thinking.

http://www.slideshare.net/bryanrieger/rethinking-the-mobile-web-by-yiibu

The Mobile Web is NOT The Next Big Thing

Haha…now to end with something more leftfield :) This article, written by web designer John O’Nolan, takes a playful swipe at those people who trot out the whole “Next Big Thing” line. John gives us an entertaining look at the evolution of the web on mobile and puts a nice perspective on things. What’s even better is that some of the thoughtful comments round out the whole thing, making it a nicely smart piece on viewing the state of the mobile web.

http://john.onolan.org/the-mobile-web-is-not-the-next-big-thing/

So, a few things there for you to have a look at and digest. We’ll be seeing this talked about more and more I suspect. Cheers! M

Mobile Tech meeting raises issues

I recently ran a JISC CETIS event on mobile technology at the University of Bolton and it seemed, to me at least, to be rather successful. Of course the day was packed, we ran over time and my session on AR at the end of the day was rushed and sketchy…but it nicely lines up some more focused future events.

First of all, the presentations from the day are available on our wiki at http://wiki.cetis.org.uk/Mobile_Tech_Meeting_15th_June_2010

Throughout the day we highlighted some of the key challenges, issues and general questions that attendees shared in this space…

Feasibility of supporting massive variety of devices, software, etc

With a huge variety (around 350) of combinations of devices, manufacturers, families and platforms, how does an institution deliver to mobile while having to focus on the all-important issue of inclusion? Apps are the flavour of the day right now, with the runaway success of Apple’s App Store leading competing providers to follow suit and push development along the native app path. However, HTML5 and CSS3 now give web developers far more power to create engaging and powerful web applications, along with new frameworks that harness these, and JavaScript APIs that can tap into native device functionality such as geolocation – so we can now have a fairer and more balanced discussion about “Apps v. Web”. You can read more about these frameworks at http://www.webmonkey.com/2010/06/new-frameworks-give-mobile-web-apps-a-boost/
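As a quick taste of those device APIs, the W3C Geolocation API is already available from plain JavaScript in many mobile browsers. A minimal sketch – the formatting helper is my own, not part of the API:

```javascript
// Turn a coordinates object into a human-readable string (helper is illustrative)
function formatCoords(coords) {
  return coords.latitude.toFixed(4) + ', ' + coords.longitude.toFixed(4);
}

// In a browser, ask the device for its position (W3C Geolocation API)
if (typeof navigator !== 'undefined' && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (position) {
      console.log('You are at: ' + formatCoords(position.coords));
    },
    function (error) {
      console.log('Could not get a fix: ' + error.message);
    }
  );
}
```

The same call works whether the fix comes from GPS, wi-fi or cell towers – the web app never needs to know which, which is exactly the sort of capability that makes “Apps v. Web” a real debate.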

Who supports the use of mobile in institutions?

There are two main parties to think about here – preparing staff within institutions and supporting students (perhaps through induction processes). Now, assuming this would involve different departments, and that these should (ideally) have a dialogue with each other…who supports the supporters?

Integration with existing systems: VLE, PLE, eP…

This ties in – for me at least – with the discussion around the Distributed Learning Environment and the widgets work that CETIS is heavily engaged in. The mobile device seems such an obvious part of a learner’s “PLE” (as in, it’s personal) that this area is ideal for focusing on the overlap and connectivity between institutionally controlled systems and the tools and services that learners use. There’s also the provision of data from institutional services to mobile devices: can I get a map of where I am on the campus? Can I see if there’s an available room nearby and book it, check my timetable or search the library?

Personal & Professional

This is an interesting one for me and it also links to the PLE area (in the way I think about it, anyway). Increasingly, the ubiquity and all-round saturation of technology in so many parts of all our lives is leading to a blurring between work and private/personal life. As professionals we face these questions, and for some of us our whole use of technology has almost completely broken down the lines between the two. The things I do at work are the things I am interested in outside of work too, so I’ll find myself twittering and posting facebook links at any time, anywhere. But is this the same for learners? Also, context and location are hugely important. The use of mobile devices enables you to capture photographs, video, blog, twitter…whatever…from wherever you are (yes, assuming connection, etc), so what are the ethical issues?

Business Case

Now, this seemed to get the most nodding of heads. How do we make the business case to our institutions for the need to engage with mobile technology and focus some development? Do we assume it is what the learners want, or is it something that we think is important and growing and soon-to-be all-pervasive? How can mobile learning improve learning in general? Is there a case for it? Where does the focus get placed and (!!) where does the money go?

Can the pedagogy map to the affordances given to us by the technology available? Two of the presentations on the day covered work in geography field studies and assessment in healthcare practices. I think it’s easy to see how these areas are prime for the enabling and enhancing of in-the-workplace/field activities that mobile devices and their functionality provide, but…is mobile tech, from an institutional, learning-delivery sense, not really applicable or practical for all?

Lots and lots of questions.

One thing I’m sure of is that mobile tech is currently one of the most fast-moving (almost dizzyingly so) and exciting areas in educational technology. The opportunities that such increasingly affordable and powerful technology – always on, always connected – is handing to so many of us are changing the shape of the learning landscape. Institutions need to get a handle on this, otherwise they’ll be quickly left behind…but I know, it’s not a simple issue.

Oh and yes, I know I said above that this tech is with “many of us”. I’ve not forgotten the very important aspect of inclusion, in all its forms. But I think I’ll leave you with this blog post from one of our speakers at the event, Dr. Richard Hall (DMU) - Inclusion, social relations and theory: issues in mobile learning

http://www.learnex.dmu.ac.uk/2010/06/inclusion-social-relations-and-theory-issues-in-mobile-learning/

Mobile tech, web-apps & frameworks

One of the big questions around institutions throwing themselves into the mobile learning world is how do you cater for such a huge variety of handsets and operating systems? Tom Hume, Managing Director of Future Platforms (http://www.futureplatforms.com/) recently presented at the excellent Eduserv Symposium: The Mobile University. Tom pointed out that to reach 70% of UK mobile owners, you need to be available on 375 different devices, 70 different families from 8 manufacturers.

But anyway, go and check out Tom’s talk, along with all the others from that day, on the Eduserv website: http://www.eduserv.org.uk/events/esym10/presentations

The following resource relates to a debate that is building in some quarters: if different providers are each channelling development into their own application platforms, and you look at it all and think, “Argh! How do we manage to cover THAT lot??”…do we arrive at the question of Apps v. Web?

Sencha Touch: Mobile-Web Framework

Up step a number of JavaScript frameworks, harnessing the cutting-edge web features afforded by HTML5 and CSS3 while also enabling developers to take advantage of device capabilities such as geolocation (rather an important aspect given we’re talking mobile, eh!). This way we mix the two: being able to develop web-apps that can run across a variety of devices while still having that very nice look we see in native apps.

So…here’s the more in-depth link on the Webmonkey site that covers a few of these frameworks. Check it out..it’s very interesting :)

http://www.webmonkey.com/2010/06/new-frameworks-give-mobile-web-apps-a-boost/

This is, of course, closely related to CETIS’s work in the widgets space and the Distributed Learning Environment.

XCRI Support Project wraps up

March sees the end of the JISC funded XCRI Support Project as it signs off leaving the development of the XCRI (eXchanging Course Related Information) specification for sharing (and advertising) course information looking very healthy indeed.

The support project picked up where the original XCRI Reference Model project left off. That project had identified the marketing and syndication of course descriptions as a significant opportunity for innovation – general practice in this area involved huge efforts re-typing information to accommodate various different systems, sites and services, with that information then maintained separately in various places. The XCRI Reference Model project mapped out the spaces of course management, curriculum development and course marketing and provided the community with a common standard for exchanging course-related information. This streamlines approaches to the syndication of such information, brings cost savings when it comes to collecting and managing the data, and opens up opportunities for a more sustainable approach to lifelong learning services that rely on course information from learning providers.

Over the course of the next three years the XCRI Support project developed the XCRI Course Advertising Profile (XCRI-CAP), an XML specification designed to enable course offerings to be shared by providers (rather like an RSS feed) and by other services such as lifelong learning sites, course search sites and other services that support learners in trying to find the right courses for them. Through the supervision and support of several institutional implementation projects the support project – a partnership between JISC CETIS at the University of Bolton (http://bit.ly/PZdKw), Mark Stubbs of Manchester Metropolitan University (http://bit.ly/PZdKw) and Alan Paull of APS Limited (http://bit.ly/cF6Fhd) – promoted the uptake and sustainability of XCRI through engagement with the European standards process and endorsement by the UK Information Standards Board. Through this work the value of XCRI-CAP was demonstrated so successfully as to ensure it was placed on the strategic agenda of national agencies.
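To give a flavour of what an XCRI-CAP feed describes, here’s an indicative sketch of a course advertising document. Element names and namespaces below are approximate rather than schema-exact, so do check the spec at xcri.org before implementing anything against it:

```xml
<!-- Indicative sketch of an XCRI-CAP style course feed (not schema-exact) -->
<catalog xmlns="http://xcri.org/profiles/catalog"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <provider>
    <dc:title>Example University</dc:title>
    <course>
      <dc:title>Introduction to Web Development</dc:title>
      <dc:description>A first course in HTML, CSS and JavaScript.</dc:description>
      <!-- A presentation is one concrete offering of the course -->
      <presentation>
        <start>2011-09-26</start>
        <duration>12 weeks</duration>
      </presentation>
    </course>
  </provider>
</catalog>
```

The point is the shape: a provider publishes one feed of its offerings (rather like RSS) and any aggregator, portal or advice service can consume it, instead of each service having the same details re-typed into it by hand.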

Hotcourses manages the National Learning Directory under contract from the Learning and Skills Council (LSC). With over 900,000 course records and 10,000 learning providers the NLD is possibly the largest source of information about learning opportunities in the UK, which learners and advisers can access through dozens of national, regional and local web portals. Working with a number of Further Education colleges Hotcourses is now developing and piloting ‘bulk upload’ facilities using XCRI to ease the burden on learning providers supplying and maintaining their information on the NLD. UCAS also continues to make progress towards XCRI adoption. Most recently, at the ISB Portfolio Learning Opportunities and Transcripts Special Interest Group on January 27, 2010, UCAS colleagues described a major data consolidation project that should pave the way for a data transfer initiative using XCRI, and cited growing demand from UK HEIs for data transfer rather than course-by-course data entry through UCAS web-link. The project is a two-phase one, with XCRI implementation in phase II, which is due to deliver sometime in 2011.

Having ensured that the specification gained traction and uptake, the project worked extensively on developing the core information used by XCRI into a European Norm, harmonised with other standards addressing this space that had been developed elsewhere in Europe. It is this process which has seen the evolution of XCRI from a standalone specification into a UK application profile of a recognised international standard. This could now be transitioned into an actual British Standard through BSI IST 43 (the committee of the British Standards Institution which looks at technical standards for learning, education and training). At the same time, adoption of the specification continued to be supported through engagement with policymakers and suppliers, while the technical tools developed for adopters continued to be updated and maintained.

A couple of key tools were developed by the support project to assist implementers of XCRI. An aggregator engine was set up and maintained by the project and is demonstrated at http://www.xcri.org/aggregator/. This shows how it’s possible to deploy an aggregator that pulls in courses from several providers and offers a user interface with basic features such as searching, browsing, bookmarking, tags and so on. It also demonstrates some value-added aspects such as geocoding the course venues and displaying them on Google Maps. Once you’ve had a look at the demonstrator you can get hold of the code for it at http://bit.ly/9eViM2
The project also developed an XCRI Validator to help implementers check their data structure and content. This goes beyond structural validation to also analyse content and provide advice on common issues such as missing information. Currently the development of this is very much at a beta stage but implementers can try out this early proof-of-concept at http://bit.ly/aeLArY. Accompanying this is a blog post describing how to use the validator at http://bit.ly/aHoJtH

Up to now there have been around 15-20 “mini-projects” funded to pilot implementation of XCRI within institutions. These looked at developing course databases using the specification, extending existing systems and methods to map to XCRI, and the general business of generating the information and exporting it via web services. That’s not to say this was the only project activity around XCRI: various other Lifelong Learning projects have had an XCRI element to them along the way, and all of these have contributed to forming an active community around the development and promotion of the spec.

This community’s online activity is centred around a wiki and discussion forum on the XCRI Support Project website at http://xcri.org and, while the support project is now officially at an end, the website will stay around as long as there is a community using it – currently it’s maintained by CETIS. Some XCRI.org content may move to JISC Advance as XCRI moves from innovation into mainstream adoption. However, as long as people are trying out new things with XCRI – whether that’s vocabularies and extensions or new exchange protocols – XCRI.org provides a place to talk about it, with the LLL & WFD project at Liverpool (CCLiP – http://www.liv.ac.uk/cll/cclip/) currently looking at how to improve the site and provide more information for non-technical audiences.

More information on the XCRI projects can be found at the JISC website, specifically at http://bit.ly/awevwQ

Augmented Reality – A Game Changer in Mobile Learning?

I think we’re getting to the point where, by now, many of you will have found it hard to avoid hearing about the latest technology buzz. No, not Cloud Computing. This technology is, shall we say, more ‘tangible’. It is – quite literally – technology that you can hold in your hand.

Yes folks, by now many of you will be aware of this growing buzz around the rather snazzy and futuristic-sounding ‘Augmented Reality’ (AR). The headlines are growing, the clamour is getting more excitable by the day and, even though it’s only really hit the public consciousness relatively recently, I don’t think we’re far away from that glorious, early-doors hype bubble popping to the sounds of “well…there’s not many apps!”, “is it just for restaurants and tube stations?” and “it’s not a game-changer, it’s a fad!”. For that last one just look at the James Cameron Avatar 3D story (and nobody’s even seen it yet!) ;)

Still. It’s an exciting technology and one that – like Cameron’s 3D in cinema – will be a game-changer (imo), not simply a fad, given the opportunities it will open up to enable and enhance the immersive delivery of rich content to mobile platforms. As with anything though, it’s not going to happen overnight. Right now the bugbear with 3D is the lack of supporting cinema screens, and something similar applies to AR-capable devices. However, that will inevitably change.

There are two types of AR application at the moment – mobile and desktop. Desktop AR uses marker-based images to create animations (both 2D and 3D) and – in some cases – also includes interactive controls. I’ll cover this in a future post, I think. For this brief post though I’m talking about Mobile AR. This is where an application uses your phone’s GPS to know where you are and its magnetometer – or more simply, the digital compass – to know which way you’re facing. Couple those together, then add in the live video feed coming through your phone’s camera and bingo! We have location-aware data overlaid on your image of the world around you.
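The maths behind that overlay is fairly straightforward: given your position, a POI’s position and your compass heading, you can work out where the POI should sit relative to the centre of the screen. A rough sketch in JavaScript (function names are my own):

```javascript
// Great-circle bearing from (lat1, lon1) to (lat2, lon2),
// in degrees clockwise from north
function bearingTo(lat1, lon1, lat2, lon2) {
  var toRad = Math.PI / 180;
  var dLon = (lon2 - lon1) * toRad;
  var y = Math.sin(dLon) * Math.cos(lat2 * toRad);
  var x = Math.cos(lat1 * toRad) * Math.sin(lat2 * toRad) -
          Math.sin(lat1 * toRad) * Math.cos(lat2 * toRad) * Math.cos(dLon);
  return (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;
}

// Angle of the POI relative to where the camera is pointing:
// 0 = dead ahead, positive = to the right, range [-180, 180)
function relativeAngle(bearing, heading) {
  return (bearing - heading + 540) % 360 - 180;
}
```

GPS supplies your lat/lon, the magnetometer supplies the heading, and the app just draws the POI’s icon offset horizontally by the relative angle (scaled to the camera’s field of view).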

At the moment I’ve got 3 AR apps on my iPhone – Robotvision (http://robotvision-ar.com/), Wikitude (http://www.wikitude.org/) & Layar (http://layar.com/). I’ll write a post in which I cover these soon but for now the obvious question is simply, “well, how could they be used in a learning activity?” – Oh and yes, I’m excluding ‘learning where the nearest Costa Coffee is’ from my criteria. Now I’m no teacher but I can see this…

Imagine you’re studying Local History, looking at the changing architecture and layout of the town centre. You stand on a street, point your phone* at a scene, and overlaid on the live image are archive photographs of the location spanning the decades. Touch the image of the Town Hall and you’re given the option of viewing a Flickr pool, visiting the town hall’s website, or going to the Wikipedia page and reading up on the history of the building. Buildings, street views, whole town layouts perhaps..

[youtube]http://www.youtube.com/watch?v=gwtmk1ZjhY0[/youtube]

* I must point out that I use the term ‘phone’ very loosely here. What I really mean is the “smartphone”, the pocket sized computer. The gadget in my pocket that is already more powerful than the old PC I have in the back bedroom.

Or let’s say you’re on a Geology field trip, trekking around the Isle of Wight. Point your mobile device (see, we’re evolving already!) at a nearby outcrop and there, on top of the scene coming through your camera, are some controls that will display detailed information about what you’re looking at: that it’s made of sandstone perhaps, or that it’s river-formed as opposed to wind-formed.

So…AR meets Social Media meets The Cloud you could say. Like I say, there’ll be further posts around this from me – where I’ll attempt to look a bit closer at the applications out there and my thoughts on them.

Cheers! M