Wednesday, 14 October 2009

Lab, ethnography, remote: How should we test usability/UX?

This post follows from talks and discussions at UXBrighton on October 14th. Thanks to the speakers from Ethnolabs, Pidoco, Feralabs and Flow; the discussions on the validity and utility of remote testing inspired this post.

Any UX expert will tell you that testing with real users is the key to successful usability research, but there are a number of methodologies that can be applied in UX research, and choosing the right one depends on the particular context. The researcher obviously wants the most accurate data they can obtain, but they are constrained by time, budget and resources.

Planned lab testing can be useful for measuring efficiency because session tasks can be set up in very specific ways. It's also much easier to make use of sophisticated biometric techniques such as eye tracking to measure attention, and facial expression analysis and galvanic skin response to track emotion. However, it may also create a pseudo-context that obscures the reality of natural user-system interaction.

In many instances ethnography may be more appropriate. This may be done by holding face-to-face sessions in users' natural contexts (e.g. at work or at home), encouraging the user to demonstrate their natural interactions with the system under test, and recording as much data as possible for proper analysis later. This is a research technique I applied in my MSc thesis research with Ubuntu users. It has the advantage of flexibility, because the researcher can control the flow and focus of the session depending on the kinds of interactions and problems that naturally come up.

However, I believe it also risks creating unnatural behaviour through the observer effect, i.e. people change their behaviour when they feel they are being watched. They may begin to interact with the system in ways intended to meet the researcher's expectations or to impress them. This can lead to inaccurate usage data and ultimately a false picture of user-system interaction. The researcher's ability to avoid this by recognising unnatural behaviour is crucial to solving genuine usability problems and developing accurate user profiles.

On the other hand, remote testing of users' screen activity may be a better alternative. We should not forget that in many cases (particularly with PCs and websites) users interact with systems on their own. The context is that of user and computer, not user, computer and observer. Remote data collection may therefore lead to more accurate usage data. By letting users interact with their system in their natural context and remotely recording data, perhaps at random times of the day, I believe the observer effect is reduced.
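
As a rough illustration of what I mean by remote data collection, here is a minimal sketch of a client-side usage logger (entirely hypothetical, not any particular tool): interaction events get buffered locally and handed over later without a researcher present, so the user is observed only by their own machine.

```python
# Hypothetical sketch of remote usage logging: events are recorded locally and
# flushed to the researcher later, with no observer in the room. The file name
# and endpoint are made-up placeholders for illustration only.
import json
import random
import time
from datetime import datetime, timezone

LOG_FILE = "usage_log.jsonl"                 # local buffer of interaction events
UPLOAD_URL = "https://example.org/collect"   # placeholder collection endpoint


def record_event(event_type, details):
    """Append one timestamped interaction event to the local log."""
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "type": event_type,
        "details": details,
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")


def upload_log():
    """Hand the buffered events over to the researcher (stubbed out here)."""
    try:
        with open(LOG_FILE) as f:
            events = [json.loads(line) for line in f]
    except FileNotFoundError:
        return
    # A real study would POST `events` to UPLOAD_URL over HTTPS, with the
    # user's informed consent; here we just print a summary.
    print(f"Would upload {len(events)} events to {UPLOAD_URL}")


if __name__ == "__main__":
    # Simulate a couple of interaction events, then upload after a random delay
    # (standing in for "random times of the day").
    record_event("menu_open", {"menu": "Applications"})
    record_event("error_dialog", {"message": "Could not mount drive"})
    time.sleep(random.uniform(0, 3))
    upload_log()
```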

Of course something is missed by using this technique: facial expressions, tone of voice and an explanation of why the user is behaving the way they do. Furthermore, it will often produce a large amount of raw data, which can be difficult to analyse properly. However, in terms of actual usage data I believe it may often be an improvement on face-to-face ethnography. Combining remote observation with other techniques, such as getting users to keep learning diaries as they encounter problems and holding face-to-face feedback sessions, may lead to an accurate picture of user-system interaction and a product's usability.

Ultimately there is no single best technique. Researchers must look at the context of the product under analysis to assess suitability. This context includes things like who the users are, what sort of interactive product is being (re)designed, what stage of development the product is at, and so on.

Likewise, which technique is most appropriate depends on the research question. Do we just want to collect usage data? Is there a particular usability problem we want to solve? Do we want to specifically understand the needs and contexts of the users? Do we want to access diverse user groups? Other factors, such as time, resource and budget availability, inevitably affect the technique that is chosen.

Knowing how to answer these questions is as much a part of being a UX researcher as performing the research itself.

Tuesday, 1 September 2009

True AI and immortality through the law of accelerating change: Fact or fiction?

Several years ago I read a book called 'The Singularity is Near' by Ray Kurzweil, which both shocked and amazed me. Kurzweil, a transhumanist, predicts that well within our lifetimes the law of accelerating change (a generalisation of Moore's Law) will deliver true AI as well as other far-fetched phenomena such as immortality through reverse biomedical engineering and downloading our consciousness into a fully realistic shared computer simulation. At the time I was heavily engrossed in the philosophy of AI, cognition and consciousness, and the book left me with the belief that many of Kurzweil's predictions could be true, because I saw nothing philosophically impossible about such rapidly evolving phenomena and his arguments were so convincing.

Now I'm not saying that this guy is definitely full of ####, nor that he is a nutter (though he probably is), but something smelt fishy about his predictions. See, Kurzweil, as well as having some ridiculous number of degrees (I think it's about 20) and a history of solid predictions about the future, is also a very successful entrepreneur. One of his ventures is a range of medicinal long-life products, which he sells at a pretty steep price. Could he have written three best sellers telling people they could live to see immortality through nano-technological reverse bio-engineering while hinting at his own endeavours to stay alive through a range of daily supplements? Maybe. Maybe not. We can't rule out his ideas purely on that possibility.

Yesterday I ended up re-igniting the debate with some friends over a few drinks, and I found out a few things I had probably been overlooking in my previous ignorance. I would consider both of the guys I was chatting to far better computer and AI experts than I am.

First it was pointed out that the law of accelerating returns, i.e. the idea that CPU power doubles every 18 months, leading to exponential growth and huge improvements in surprisingly short spaces of time, was grinding to a halt in the current state of technological growth. Apparently they just can't fit any more transistors (or whatever they are called) on the chips without them overheating. This burst the 'AI in 50 years' bubble to some extent for me, though the debate got far more philosophical than just technological possibilities: what is intelligence? Do we need embodiment of an agent? Turing tests, etc.

I'm not going to go into all of that (I'm done with philosophy), but if we give a loose definition of true AI as being functionally equivalent to an average human being in all respects, with functions isolated so that you couldn't tell the difference between the two, is it technologically possible in 50 years? This is similar to what Steve Harnad calls T3 Turing machines, I believe. So there might be a language Turing machine, the classic test where an AI converses with you over a text interface to convince you it's real. Another might be an AI with visual processing that can recognise patterns and objects as well as we can. Another might be to produce a piece of music or art that evokes genuine emotion as creative humans can, and so on and so forth.

I think it might be possible for many of those tests to be passed soon. For the original Turing test, for example, I would predict a pass in 20-30 years, and perhaps similar for visual and audio pattern recognition. Producing sophisticated music and art, maybe 10 or so years after that. But true intelligence links all of those functions together, arguably in an embodied system, to produce one multimodal integrated intelligent entity. Whilst we might be able to produce a load of functional T3 Turing machines, since the human brain is about a billion times more sophisticated than the biggest supercomputers we have now, linking the functions together in one AI to replicate true intelligence will be impossible until we make massive breakthroughs in processing capacity. One possibility could be some giant parallel processing cloud computer, made possible through the internet or similar, but it would be difficult for us to conceive of that as one entity, let alone an intelligent one.
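
As a back-of-the-envelope check on that gap, taking the figures above at face value (a billion-fold shortfall, capacity doubling every 18 months), the arithmetic actually lands just inside the 50-year window, but only if the doubling keeps going:

```python
import math

# Back-of-the-envelope check, taking the post's figures at face value:
# a ~1e9-fold shortfall in processing capacity, doubling every 18 months.
gap = 1e9                      # claimed gap between brain and today's supercomputers
doubling_period_years = 1.5    # "doubles every 18 months"

doublings_needed = math.log2(gap)                        # ~29.9 doublings
years_needed = doublings_needed * doubling_period_years

print(f"{doublings_needed:.1f} doublings -> ~{years_needed:.0f} years")
# ~30 doublings -> ~45 years, i.e. only just within 50 years, and only if the
# doubling really does keep going, which is exactly what's in doubt above.
```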

However, 50 years really is a long time, so we agreed it wasn't impossible. Think how far we have come in the past 50 years, and it's surely only going to speed up. There are so many new possibilities in processing technologies on the horizon: 3D chips, DNA processing, nano-technology, quantum computing. I just don't think processing power will be an issue in 50 years. It's going to be more of an issue of understanding how to make a truly intelligent system and then somehow programming it so that it is sophisticated enough to pass for intelligence. That is where Kurzweil may be over-optimistic. The task is absurdly complex.

What about immortality through reverse bioengineering? We quickly dismissed this idea. OK, so we might be able to slow down ageing processes by eating the perfect balance of nutrients through supplements. We might be able to develop antibodies and medicines to stop us getting diseases. We might even be able to build nano-bots that live in our bloodstream, protecting us from any internal threat and communicating to act as a swarm intelligence (another one of Kurzweil's eccentric ideas). But sooner or later something will get us, be that a rapidly mutating virus that outdoes any nano-bot immune system (natural selection always finds a way), a car crash, a heart attack or a million other things.

So.... Can we live forever? No, it's nonsense. Can true AI happen in 50 years? Maybe. But there isn't enough evidence right now to suggest it will happen. And what about one of Kurzweil's other ideas... living in the Matrix by plugging us all up to a giant supercomputer that stimulates our senses to produce a shared experience indistinguishable from real life... I'm not even going to go there...



Monday, 29 June 2009

Why would you pay money for OSX - when you can recreate it in Ubuntu for free!?


A little tweaking and you can have a Mac OSX clone on Ubuntu. You can even have the global menu system on the top panel (see above). Plus the visual effects in Ubuntu are already top notch (see below).

Thursday, 21 May 2009

Ubuntu Usability Project

Over the summer I will be working with Canonical, the sponsor of the Linux operating system Ubuntu. My project will focus on the usability of Ubuntu 9.04's information architecture.

Until now I had not used Ubuntu. I had always been happy to put up with Vista. Once you have tweaked Vista enough, I find it pretty usable, even though I have heard countless usability gurus ranting about how much they hate it. OK, so there are a few problems, but it does the job, and like I said, if you know what you're doing you can tweak it.

The Ubuntu operating system is built (in part) on usability, and on first impressions it appears pretty intuitive. It's a little ugly and cramped, I find, but I like the way they replace the Start taskbar with the Applications, Places and System drop-down menus.

I find you need to live with most software (operating systems in particular) for a few weeks before you really get a grasp of its usability, so I aim to do exactly that for the duration of the project. But to be honest it's not just my experience of its usability that's important here, so it's going to be real everyday users that need to be researched.

The aim of the project is to come up with some grounded re-design proposals for the next version of Ubuntu. I will be taking an iterative approach to evaluation, starting by delving into the open source community via forums, blogs and IRC to gain some feedback from stakeholders and real users. This will lead on to some contextual inquiries, in which I will meet some real Ubuntu users (for whom Canonical have kindly agreed to provide a payment incentive) and conduct some open-ended interviews with the participants. These sessions will be mainly about getting to know the users and their context of use (e.g. what applications they use Ubuntu for). They will also involve the participants demonstrating their use of the system.

Fingers crossed, this should uncover some usability issues and point towards where the research should go next. At this stage I don't know what will happen, but it could lead to some card sorting tasks, icon recognition tasks or refined goal-orientated usability tasks in the follow-up session. And this should then lead to some re-design proposals.

One thing that's really only just grabbed my attention is the sheer volume of work involved in a project like this. It's huge! I thought 3 or 4 months would be far more time than I needed, but it's probably not half enough time to do a complete job on the entire OS. At best my research will provide a snapshot of Ubuntu usability. Software like operating systems needs a lifetime of research and a whole lot of resources to get things just right (especially when releases happen every 6 months, as with Ubuntu). Nonetheless, I am looking forward to the opportunity to develop my UCD skills and contribute towards a real software project.

Just to finish, if any Ubuntu users (or people in the open source community) read this and have any interesting comments on their experiences, please let me know. The more real data there is, the better the result will be, and as I'm starting to find out, the open source community is all about... well... community participation.

Tuesday, 7 April 2009

Awwww... Hope the tooth fairy comes for you...

I had a conversation with someone a few weeks ago about belief in the supernatural and belief in science. I was told I was being closed-minded for denying ghosts but blindly believing in science. Firstly, the whole blind faith thing was nonsense, as people such as myself certainly don't blindly follow anything, instead adopting a sceptical eye until substantial evidence is demonstrated. Secondly, if you believe in ghosts, for which there is no proper evidence that can be verified beyond personal testimony, you may as well believe in Santa and the tooth fairy. Loads of people believe in them, so they must exist! I was even told it was sweet that I had this nice little idea of science. What the hell... After much frustration I gave up and resorted to mocking that the tooth fairy might come tonight. If you can't beat them, join them. Anyway, before I bother to rant any more, I found this video, which makes my point nicely.


Tuesday, 24 March 2009

Global Tennis – “Brighton to Bristol… Cardiff to Calais… Cork to New York… The globe is in your court!”


This is a blog post I wrote for Brighton-based GPS games developer 'Locomatrix' about some project work I have been doing at Sussex - http://www.locomatrix.com/wordpress/


One of the all-time greatest computer games has to be Atari's old school classic, 'Pong'. For those of you unfamiliar with it, the aim of the game is to bounce a ball past your opponent by moving a bat left and right across a baseline. The skill is in finding the right angles off the bat, often using the side walls, to outdo your opponent. The beauty of Pong is its simplicity, and the game-play is highly addictive.

Over the past couple of months I have been working with a team in collaboration with Brighton-based GPS gaming company Locomatrix. We have been designing and prototyping a game we have titled 'Global Tennis', which takes the Pong game metaphor and combines it with real-world game-play. Instead of using a clunky keyboard to control the bat, players run in the real world whilst their movements are tracked by GPS. By co-ordinating ball movements on the on-screen interface with bat movements through short sprints, players can now have the same fun the old school classic brought, whilst outside exercising and enjoying the weather. Plus they can compete against other players from all over the world. Pretty neat, eh?!

We applied a traditional tennis scoring system to the game, with players competing for game points to win sets, and a choice of 1, 3 or 5 set matches. We were also well aware that we needed to suit the game to more or less hyperactive users, so players can adjust the difficulty through baseline length and ball speed. Want to run energetically back and forth like shuttle runs? Set the ball speed high and the baseline length low. Fancy a slower jog over distance? Set the ball speed low and the baseline length higher. The choice is left up to the players.
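
To make those mechanics concrete, here is a rough sketch of the core idea (my own illustration, not Locomatrix's actual code): the player's GPS offset along their baseline becomes the bat position, and baseline length and ball speed are the two difficulty settings described above.

```python
# Illustrative sketch only, not Locomatrix's implementation: map a player's GPS
# offset along their real-world baseline to a bat position, and check whether
# the bat covers the point where the ball crosses the baseline.
from dataclasses import dataclass


@dataclass
class CourtSettings:
    baseline_length_m: float = 20.0   # longer baseline -> more of a jog than a sprint
    ball_speed_mps: float = 3.0       # higher speed -> less time to reach the ball
    bat_width_m: float = 2.0          # how much of the baseline the bat covers


def bat_position(player_offset_m: float, court: CourtSettings) -> float:
    """Map the player's GPS offset along the baseline to a 0..1 bat position."""
    clamped = max(0.0, min(player_offset_m, court.baseline_length_m))
    return clamped / court.baseline_length_m


def ball_returned(ball_x: float, player_offset_m: float, court: CourtSettings) -> bool:
    """True if the bat covers the ball's crossing point (both in 0..1 coordinates)."""
    bat = bat_position(player_offset_m, court)
    half_width = (court.bat_width_m / court.baseline_length_m) / 2
    return abs(ball_x - bat) <= half_width


if __name__ == "__main__":
    # "Shuttle run" difficulty: short baseline, fast ball.
    court = CourtSettings(baseline_length_m=10.0, ball_speed_mps=5.0)
    # Ball crosses at 30% of the baseline; the player has sprinted 2.5 m from the left end.
    print(ball_returned(ball_x=0.3, player_offset_m=2.5, court=court))  # True
```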

In designing the game, we followed an iterative development process. Our initial user inquiry showed interest in the game concept and a positive attitude towards combining gaming and exercise. Our lo-fidelity prototypes of the game's UI also got positive feedback from users. We kept the interface neat and simple to give it the retro character the original had. We also went down to the seafront to do some real-world tests of the game's speed and distance settings. The footage taken also allowed us to put together a rather amusing illustrative hi-fidelity prototype. It was a good laugh, and we can definitely see kids and families having hours of fun on the beach, in the park, or even in the back garden for those that don't have the guts to run around in public holding a GPS unit. Over the next few weeks our team will be pulling together final prototypes of the in-game menu system and developing UML diagrams and pseudo code to describe the back-end game mechanics in full.

So if Global Tennis is produced and takes off, expect to see lots of people running back and forth - looking very odd indeed - in public at a beach or park near you soon!

Friday, 6 March 2009

Mindflex - The telekinetic super kids of the future

I recently remembered how absolutely incredible direct brain interfacing is. In particular the Emotiv headset really blew my mind - it was supposed to come out last Xmas but got delayed. It should retail at about £150, which seems like a complete bargain when you consider how cutting edge it is.

The other day, a friend pointed me towards the Mindflex from the makers of Barbie, retailing at $79.99 (see the video at the bottom). It allows kids to control a hovering ball through a maze using the power of their mind. When I was a kid all we had was Lego. Later generations got computer games, which my generation were just as keen on. The jump from Lego and action figures to immersive computer games was a big one. But today and in years to come kids are going to have toys that allow them to manipulate matter with their minds.

What I do rate is that they are going back to a physical product rather than a virtual one, taking advantage of technology and the real affordances that are naturally found in the real world. Lego was great because it got kids thinking creatively. I recently had a session in Interactive Learning Environments at Sussex University, where some research students brought out a new Lego-type toy. It had movable parts and kinetic memory that allowed kids to build robotic creatures and input movement sequences through real movement. Then, when hooked up to a computer, the toys could replicate the movements from memory. This got kids thinking about locomotion in a very engaging way. Give a kid a textbook or write stuff on a blackboard and half of them switch off, totally uninterested. I'm a firm believer that getting kids engaged in learning by getting them involved in creative practice is the way forward. Especially if we don't want a future of dull robotic kids with no creative drive; "Computaa saz noo!"

The potential of DBI toys for creative learning is surely enormous, and highly rewarding for a kid as well. Mind you, if kids' obsessions with games like Guitar Hero and the stupidity of certain kids taking Grand Theft Auto to the streets are anything to go by, we could end up with an army of obsessed 10-year-olds terrorising the streets with their new-found super power, throwing home-made weapons at people with the power of their mind. But that aside, combining DBI technologies with Lego-type creative toys could be really engaging for kids. Obviously the technology needs to get a bit more sophisticated than Mindflex, but if DBI entertainment takes off properly, competitive markets will drive some very innovative products.

In fact screw giving the kids these toys. I want to try it!