Being a big fan of the motto “Today is the first day of the rest of your life” and an avid believer that change can (and should) be applied at any moment, the start of a new year still marks an important point for me. It’s the time for me to look back at what I’ve achieved in the past year, and to set personal goals for the new year. So here it goes…
Or, That fateful night in Bastilla.
This story starts roughly ten weeks ago. After leaving the Bastilla Club – definitely not a classy spot, but let’s call it a characterful music club – the hearing in my right ear was reduced by about 20%. This is a normal occurrence for me after being exposed to loud music for a couple of hours. But it was different this time: my hearing didn’t return to 100% over the next couple of days. Quite the contrary: it got much worse as all kinds of beeps and buzzing noises started to develop. In the weeks that followed, while I kept hoping for my ear to ‘just fix itself’, the noises spread to my left (good) ear as well. A period of panicking followed. “Would I ever be able to experience silence again? Why didn’t I wear earplugs? Is this going to get worse?!”.
Recently I did a talk on the importance of contextual user testing at the WebExpo 2013 conference in Prague. During the talk I shared my experiences and lessons learned from the user tests conducted for SalesChamp.
Fortunately, user testing is nothing unusual these days, but I wanted to show the importance of doing it in the realm of the user instead of in a laboratory set-up. I showed the assumptions that were busted during the user testing sessions in the field and shared best practices.
Interview on NL-CZ differences and education
After my talk I was interviewed by Honza Sládek on the differences between working for Dutch and Czech clients and education options for would-be UX designers.
Having a beer with Jesse James Garrett
Although sharing my love for contextual user testing and meeting fellow passionate web geeks was great, maybe my conference highlight was something else: having a beer with Jesse James Garrett.
It’s not often that one gets to meet someone who made such an impact on the industry and can genuinely be called a pioneer in the UX-field, let alone having a chat and a beer with them.
All in all, it was a great experience!
After my talk at WebExpo 2013 I was approached by Czech designer Adam Hrubý who handed me his business card: Smart! This “reverse” business card doesn’t ask the receiver to please contact the one who gave it out. Instead it tells the recipient that he’s lucky to get one. And it’s personal at the same time. I love it!
I didn’t show the front of the card, but mine said: “I love your style”. Apparently, Adam has made a whole set of different fronts:
Check out his shot on Dribbble.
Recently I was invited to give an in-company talk on a hotly debated subject: HTML5 vs. native apps. The company that invited me was struggling to decide which path to take. What made it interesting is that the apps to be developed would all be used exclusively in-house, and that, this being a fully Microsoft-oriented company, all employees would be given Lumia phones running Windows Phone.
I wanted to give a balanced overview of the current state of HTML5, and although it may come as no surprise that I’m in the HTML5-camp I am not blind to the challenges that we are facing. So instead of only showing off all the glorious delights I also shared the sometimes harsh reality of things we as HTML5 developers have to deal with.
My slides may come in handy for other people in the same situation, so I’ve decided to share them on SlideShare. All feedback and comments are welcome.
Final decision and conclusion
Eventually the company chose to pursue both strategies. Given the short timespan in which they had to deliver the mobile apps (and with the cross-platform argument out of the window), they decided to start with native apps on the phone platform. However, their intranet runs on HTML5, which may eventually be extended to other areas as well.
In the end, it all comes down to choosing the right tool for the right job. Sometimes it will be HTML5, sometimes it will be native. Although I’m definitely betting my money on the web, I’m curious what the future will bring. These are interesting times indeed.
On the 19th of September the yearly WebExpo conference will kick off again in Prague. With great talks on topics such as service design, front-end and back-end development, product development and life hacking, this is definitely the place to be for all web aficionados!
Contextual user testing
This year I’ve been invited to do my talk about contextual user testing. During this talk I’ll share my experiences with contextual user testing for SalesChamp and the best practices I’ve learnt.
Check out the WebExpo 2013 program for all talks and topics.
Get a 20% discount off your ticket
So if you would like to see my talk — or any of the other cool talks (did you see that Jesse James Garrett will be keynoting?) — get your tickets now and profit from the 20% discount (first three tickets only). Be there or be ■.
The other day I was installing Chrome on my new iPhone 5 when I stumbled upon a very questionable screen:
At first glance it seems to be a regular Terms of Service page requiring checkbox approval from the user. However, the approval is given by clicking the “Accept & Continue”-button. The checkbox is in fact an opt-in to approve the sending of usage statistics to Google. Given that most screens like this one require the user to tick the checkbox before submitting, this is a deceptive trick to fool the user into opting in.
We know that Google left its “Don’t be evil” roots quite a few years ago, but this still surprised me. I actually had already ticked the checkbox and was an instant away from hitting the submit button before I realized I was being fooled.
A dark grey Dolphin
A couple of years back it was cool to have a browser on your phone. Nowadays you need to have at least three to be cool. So after installing Chrome I returned to the App Store to fetch Dolphin, the mobile browser. And again, I was being fooled:
Admittedly this is not as bad as Chrome’s example, for two reasons:
- The “Accept”-button is at the top of the screen, far away from the opt-in checkbox.
- The text is more concise, making it more probable that users will read it and realize what’s up.
Still, I can’t imagine that the designers thought this was the most logical place for such an opt-in element, unless it was to trick people into opting in.
Is this a bad thing?
In the end usage statistics should be beneficial to the users. But I still feel very much tricked by these tactics. There are better ways to get opt-in for this, for example by applying progressive engagement.
Google could ask for opt-in on start-up after the browser has been used at least two times. It should do so in a non-intrusive fashion of course, like in the (“empty”) start-up tab. This also gives Google a much bigger canvas to actually explain why it is collecting all this data and how it benefits the user.
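Such a progressive-engagement flow is simple to sketch. Here is a minimal example; the two-session threshold and the state shape are my own assumptions for illustration, not anything Chrome actually implements:

```typescript
// Decide whether to show the usage-statistics opt-in prompt.
// Assumption: we only ask after the browser has been used at
// least twice, and we never ask more than once.

interface OptInState {
  sessionCount: number; // completed browsing sessions so far
  promptShown: boolean; // have we already asked?
}

const SESSION_THRESHOLD = 2; // hypothetical threshold

function shouldShowOptInPrompt(state: OptInState): boolean {
  return !state.promptShown && state.sessionCount >= SESSION_THRESHOLD;
}

function recordSession(state: OptInState): OptInState {
  return { ...state, sessionCount: state.sessionCount + 1 };
}
```

The point of the sketch is that the prompt becomes an earned moment in the flow instead of a pre-ticked hurdle on first launch.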
But Google being Google, they’ve probably tested this very well and found out this generates the highest opt-in rate. From both a user’s and a UX perspective, though, I don’t like this at all.
Recently I asked the question “Should you optimize mobile experiences based on individual handedness?” on the UX Stack Exchange board. This question came to mind while looking at this mock-up that UX.SE user abbood created:
My immediate thought was that the up-vote control was on the wrong side. Why? Because my feeling is that up-votes happen more often than down-votes — something not interesting is not worth spending a click on — and on mobile the primary action (up vote) should be on the left. But I realized this view was quite self-centric: I’m a lefty. For a right-handed person having the primary control on the right probably makes much more sense, I thought.
Applying Fitts’s Law to touch devices
Justin Smith wrote an excellent article on the application of Fitts’s Law to touch devices. In his article he argues that the applicability of Fitts’s Law to mobile experiences depends on the way the user is holding the device. If she doesn’t need to change the way she holds the device, the law applies. For example, when using the device single-handedly in portrait mode. It also applies when holding the device with both hands in landscape mode (assuming you are able to reach all parts of the interface, which depends on the size of the device). However, when holding the device with one hand and controlling it with the other, Fitts’s Law doesn’t apply, as you now need to take other variables into account.
In my experience the way a user holds a device depends on how she interacts with it. When drafting an e-mail it makes sense to use two hands. However, when doing a check-in on Foursquare it makes sense to use only one hand. The interaction time is very limited and shouldn’t require a lot of movement. On top of that we should also consider the context: the user is probably entering (or has just entered) a space and is doing multiple things at the same time. Looking around for familiar faces, grabbing a seat or shaking someone’s hand (regardless of how rude that is).
Foursquare’s primary action: the check-in
Foursquare offers a lot more functionality than merely doing a check-in, but checking in is still the primary action. What’s more: it’s also the action that usually takes place in the context I outlined above (while being in the middle of some other activity).
Therefore the most important action for a Foursquare user is to reach and tap the check-in button in the interface. In the image on the right I’ve highlighted the target area. As you can see it’s in the top-right corner.
This shouldn’t present a problem for right-handed users holding their iPhone in their primary hand. Although it’s not the easiest target to reach with just a thumb, it’s certainly possible.
However, when holding the device in your left hand and using your left thumb it’s simply impossible to reach the target. Let’s take a look at the thumb reach of the left hand drawn on top of the Foursquare interface.
Left-handed thumb reach in Foursquare
In the image on the right I’ve used green to illustrate the primary thumb reach area, the area that is reachable without requiring a lot of stretching of the thumb.
I’ve highlighted the secondary thumb reach area in orange. This part is still within thumb’s reach, but it requires some stretching to cover it. Anything outside these areas is simply impossible to reach without moving the position of the device within the hand or using a second hand (or having absurdly long, flexible thumbs).
What becomes clear when looking at this illustration is that it’s impossible to reach the check-in target single-handedly with your left thumb. I can confirm this from my own experience: doing a Foursquare check-in requires me to either control the device with my right hand or use two hands: one for holding the device and one for tapping.
Note: these are just approximate illustrations, actual thumb reach may differ from person to person and also depends on the form factor of the device.
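The two reach zones can be modelled crudely as arcs around the base of the thumb, the bottom corner on the holding side. A sketch, with radii that are illustrative guesses rather than measured values:

```typescript
// Rough thumb-reach check for a phone screen. The thumb is assumed
// to pivot near the bottom corner on the holding side; the radii
// below are made-up numbers for illustration, not ergonomic data.

type Hand = "left" | "right";
type Reach = "primary" | "secondary" | "out-of-reach";

const PRIMARY_RADIUS = 380;   // px, comfortable reach (assumed)
const SECONDARY_RADIUS = 520; // px, reachable with stretching (assumed)

function thumbReach(
  x: number,
  y: number,
  screenWidth: number,
  screenHeight: number,
  hand: Hand
): Reach {
  // Pivot sits at the bottom-left corner for the left hand,
  // bottom-right for the right hand.
  const pivotX = hand === "left" ? 0 : screenWidth;
  const distance = Math.hypot(x - pivotX, y - screenHeight);
  if (distance <= PRIMARY_RADIUS) return "primary";
  if (distance <= SECONDARY_RADIUS) return "secondary";
  return "out-of-reach";
}
```

With these assumed radii, a button near the top-right corner of a 640×1136 screen falls well outside both arcs for the left thumb, which matches the Foursquare check-in problem described above.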
Is this a problem? Partially, yes.
Given the fact that 10% of the world population is left-handed my initial thought was that this would be a problem. Ignoring 10% of 7 billion people doesn’t seem to make sense. But that’s not necessarily the case.
In another answer to my UX.SE question adrianh linked to a very interesting article published on UXmatters: “How Do Users Really Hold Mobile Devices?”. In the article Steven Hoober shares the results of 1,333 observed interactions with mobile devices. The finding that surprised me the most? That in 33% of all single-hand interactions the left thumb was on the screen. This doesn’t match the left-vs-right handedness distribution at all.
One other thing that stood out was that users continuously switch the way they hold their phone. This suggests that the order of an interface has less impact on lefties than I expected. However, we can only guess if the observed behaviour comes naturally or if users behave this way because they are adapting to the interface. Does the interface drive behaviour or is it the other way around?
In either case, the opportunity to optimize an interface for single-handed usage or any other configuration — regardless of handedness — should not be ignored.
I initially asked my question based on the premise that most device interactions happen single-handedly and with one’s primary hand. Steven Hoober’s findings debunk this, at least partially. But that doesn’t mean that you should accept this at face value and ignore handedness in your interface design.
Adapt to individual handedness
My first idea was to detect the handedness of the user and dynamically adapt the interface to this. Detecting handedness is probably very hard to do, so maybe it should be a manual setting (no pun intended) within the app’s or platform’s configuration. However, if we look at Steven Hoober’s findings that would hardly be necessary: users continually switch the way they hold their device and a lot of the single-handed interaction isn’t performed with the primary hand. On top of that this method would present all kinds of difficulties like interface recognition problems.
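For completeness, here is what such a detection heuristic might look like. Everything about it is an assumption: it guesses handedness from where recent taps land, on the untested premise that single-handed thumb use skews taps toward the holding side. Hoober’s data on constantly shifting grips is exactly why the result should be treated as a weak signal at best:

```typescript
// Crude, hypothetical handedness guess from recent tap x-positions.
// Premise (unvalidated): thumb taps cluster on the holding side.

type HandGuess = "left" | "right" | "unknown";

function guessHandedness(tapXs: number[], screenWidth: number): HandGuess {
  if (tapXs.length < 20) return "unknown"; // not enough evidence yet
  const leftTaps = tapXs.filter((x) => x < screenWidth / 2).length;
  const leftRatio = leftTaps / tapXs.length;
  if (leftRatio > 0.7) return "left";
  if (leftRatio < 0.3) return "right";
  return "unknown"; // mixed usage, e.g. two-handed or switching grips
}
```

Even if this worked reliably, the interface would still have to cope gracefully with “unknown”, which brings us back to the neutral-layout approach below.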
Choose a neutral interface order
UX.SE user Benjamin Malley presented a different suggestion. Instead of choosing between optimization for either left-handed or right-handed usage you could also opt for a neutral solution. Compare the following two interfaces:
The interface on the left shows the iPhone’s Lock screen. In this interface the user needs to slide the bar from left to right to unlock the phone.
The interface on the right shows the “Power down” screen on Windows Phone. The user needs to slide the bar downwards to confirm powering down.
In both situations a slide action is required to confirm the action and to prevent accidentally executing the action. However, Apple’s interface seems to have a bias towards right-handed usage. Sliding from left to right is harder to do when holding the phone in the left hand than it is when holding the phone in the right hand.
Microsoft decided on a neutral solution: the slide down isn’t harder when executed with the left hand instead of the right.
Prototype, test and refine your interface
There’s one recommendation that can safely be given: prototype, test and refine your interface. This advice applies to almost all UX challenges that we confront, and this one is no exception.
There are various ways to test your interface. User testing in a controlled setting by recording the mobile interaction is an obvious way to do it. But you could also consider quantitatively A/B testing your interface. Design an alternate version of the interface and serve it to a sample of your user base. You could consider KPIs like how long it takes for users to reach and tap the moved button, or how often mis-taps happen, etc. Doing A/B testing of interface lay-out may be hard when you’re doing native apps, but when you’re doing web apps it’s certainly possible (another win for the web :).
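The two KPIs mentioned above are easy to compute once you log taps. A sketch, where the event shape is made up for illustration:

```typescript
// Computing two suggested KPIs from a simple tap log:
// median time-to-tap on the target, and the mis-tap rate.
// The TapEvent shape is a hypothetical logging format.

interface TapEvent {
  variant: "A" | "B";      // which interface variant was served
  msToTarget: number;      // ms from screen shown to tapping the target
  hitTarget: boolean;      // false = user tapped something else first
}

function medianTimeToTap(events: TapEvent[], variant: "A" | "B"): number {
  const times = events
    .filter((e) => e.variant === variant && e.hitTarget)
    .map((e) => e.msToTarget)
    .sort((a, b) => a - b);
  const mid = Math.floor(times.length / 2);
  // Median: middle value, or mean of the two middle values.
  return times.length % 2 ? times[mid] : (times[mid - 1] + times[mid]) / 2;
}

function misTapRate(events: TapEvent[], variant: "A" | "B"): number {
  const inVariant = events.filter((e) => e.variant === variant);
  return inVariant.filter((e) => !e.hitTarget).length / inVariant.length;
}
```

If the moved button lowers the median time-to-tap and the mis-tap rate for the test variant, that’s a strong signal the new position is easier to reach.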
Regardless of the used testing method, one thing is clear: you can’t replace testing on actual devices. It’s impossible to reliably test element reachability on paper sketches. And apart from the reachability of the controls in your interface there’s also the issue of fingers overlapping the screen. You want to make sure that users don’t hide essential information on the screen just because they are interacting with your interface (especially in interaction-rich contexts). So, you absolutely need to prototype and test on real devices.
We’ve learnt that the hand in which users hold their devices doesn’t need to match with their individual handedness. But, interfaces can definitely be much harder to control when using the left hand instead of the right (or vice versa). You should not ignore this. Aim for neutral solutions and make sure to prototype, test and refine your interfaces.
Many thanks to the fantastic people at UX.SE who share their knowledge and experience with the UX community for nothing but reputation and badges. Special thanks to abbood, adrianh and Benjamin Malley for sharing their insights.
A very common UX myth — one that clients will “remind” me of in about every project — is that everything should be accessible within 3 clicks. Numerous research studies and practical implementations have debunked this: a higher number of clicks doesn’t detract from the user experience, and can even improve it! As part of Adaptive Path’s advice to improve Twitter’s user engagement, an extra step was introduced in the sign-up process. The result? 29% more first-time tweeps completed the on-boarding process than before the re-design.
What’s become apparent is that users don’t mind clicking, as long as every click brings them closer to their goal. Having a good navigation that’s clear to the user at every point is what matters. Of course, if every page takes 5 seconds to load an extra click is bad for the overall experience, but that has much more to do with your website’s performance than with the number of clicks. It’s simply a by-product of the click.
Save clicks where you can
However, if you can save a click you should. Your users don’t mind clicking if it leads to what they want, but if you can remove intermediary steps you should go for it. Making things easier for your users improves the user experience. You should always A/B test modifications like these of course, especially in e-commerce websites. But generally removing unnecessary clicks, particularly in web apps with repetitive tasks, works.
A real-world example: Toggl
Let’s look at a real-world example: Toggl’s time-tracking control. Toggl is a great time tracking application (well, as “great” as a time tracking app can get; it still sucks to do time tracking, of course) that puts a lot of focus on improving the usability of the app. They need to, of course: time tracking is annoying, as it’s basically a meta-action. So making their app as simple as possible to use directly improves their competitiveness (and thereby their bottom line).
Now, let’s take a look at how Toggl can save a click. In Toggl there are two different time entry modes: manual time entry and automatic time tracking. The first mode lets you enter a specific start and stop time; the second tracks your time automatically, starting now. Because tracking time perfectly is impossible if you work on more than one task, you’re bound to leave the timer running for too long or forget to start it. Because of this I switch between the two modes about 10 times a day. And this is where Toggl can drop a click.
Step 1: Manual time input mode
When you are in manual time input mode, the time tracking widget looks like this:
You can enter the task, pick a project, enter a start and stop time and log it using the “Save”-button. In this case I already wrote down the name of the task, when I realized that I should use the automatic time tracker. To access the automatic mode I need to click the “Use timer”-link above the “Save”-button.
Step 2: Switching to timer mode
Now that I’ve switched to automatic time tracking the widget looks like this:
The most visible change is the background color. It’s a great way to indicate to the user that the mode has changed, and it makes clear at all times which mode he’s in. Two other changes in the widget: the button’s label has changed from “Save” to “Start” and the label of the link to switch to the other mode is now “Add manually”.
However, the timer is not running yet.
Step 3: Starting the timer
Now to actually start the timer I need to press the “Start”-button.
Again the background color has changed, this time to green to indicate that the timer is running. The primary button’s color and label have also changed to provide a clear visual clue to the user where to click to stop tracking time.
This is all great, but why didn’t the timer start immediately when I clicked “Use timer”? Most of the time I’ve written the title of the task down already, and if not: I would consider the actions related to time tracking to be part of the task itself.
Should I complain about a click? Yes!
Now you may interject that I’m complaining about nothing. What’s the impact of saving a single click on all the clicks you do in a day? The answer is simple: it depends. If tracking time was something I did maybe once or twice a day (or even less) it would be no biggie. But I track about 20 tasks daily, and on top of that: time tracking sucks, remember? The annoyance of having to do an unnecessary click when you’re already doing something that you don’t really like can have a significant impact on your mood while using the app.
When doing a repetitive task such as time tracking, every click counts. So make sure that you remove unnecessary steps where you can.