Self-Checkout at Wal-mart
Having worked as a design consultant on several point-of-sale systems for national retailers, I find myself evaluating self-checkout stations as I come across them in my everyday routine–even when I don’t particularly want to.
It’s a sickness I can’t escape these days.
One system that I find particularly offensive–and I say that with my “user” hat on, not necessarily as a designer, although it can be fairly difficult to separate the two–is the implementation I’ve seen and used at Wal-mart.
As an entity serving nearly 90% of household shoppers across the U.S., Wal-mart owes its customers the best possible experience–not just the best possible prices. And for a corporate leader in maximizing efficiencies, it’s somewhat surprising to see the ball dropped so close to the end of an otherwise successful conversion funnel.
I should disclaim here that this evaluation may only apply to the Wal-mart in my area. I know it’s fairly common for a national chain to pilot systems at different locations across the country, so if you’ve had a different experience, I only hope that yours has been better.
So, with that, here are my main areas of concern.
What’s That Throbbing Pain?
Large, shiny, sculpted buttons are practically a requirement for any touch-screen point-of-sale system, especially one that interfaces with the consumer during checkout. Why is this, you ask? Because–and forgive the umpteenth explanation of this word on this site–they need to have the right affordance. That is, in this context anyway, they should ideally look like physical objects that exist in space and time. So much so, that they should visually seduce one into use.
In a word, tangibility.
Actually, in the past I’ve designed touch-screen systems with buttons so shiny and so full of color contrast, that I thought they might actually induce seizures. Well, maybe not so much, but you get the idea.
“Big shiny object. Must press.”
And in its original state, Wal-mart gets the button right. But they end up losing points on two counts for their buttons–one I’ll get to in a minute. The part I’d like to point out now is the inexplicable throbbing gradient animation which eventually takes over the entire area of the main button components. This cell phone picture was taken as the pulsating action reached its fullest point of gradient saturation.
As you can see, the light end of the gradient practically makes the white text contained within it disappear. In this state, the buttons are no longer readable. And considering that the timing of the animation is slow and lumbering, the resulting effect is that the primary means of interaction is effectively usable only half of the time. This is really unacceptable, as the device should have been designed to remain legible for users with vision impairments. Serving a highly segmented customer base with demographics all across the board, why isn’t Wal-mart thinking about its users with disabilities?
But even as a user with normal vision, I found myself standing in line, possibly holding up other shoppers, waiting for the gradient to cycle back into a readable state. This drawback creates obvious inefficiencies for a checkout system that’s supposed to be fast and easy, and it begs the world’s largest company to re-evaluate the design. Why does this animation need to take place at all?
In the absence of any real reason, one only wishes that the buttons were simply given enough visual contrast to carry through the task at hand–that is, reading the labels to understand the possible actions and moving on with the rest of the checkout.
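For what it’s worth, the legibility problem here isn’t just a matter of taste–it can be quantified with the WCAG 2.x contrast-ratio formula. The sketch below does exactly that; the two gradient stop colors are my own guesses at what the screen shows, not sampled values.

```typescript
// Contrast check per the WCAG 2.x definition of relative luminance.
// The gradient stop colors below are hypothetical, chosen only to
// illustrate how a light stop can push white text below readability.

// Convert one sRGB channel (0-255) to its linearized value
function linear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color
function luminance([r, g, b]: number[]): number {
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// Contrast ratio between two colors, from 1:1 up to 21:1.
// WCAG AA requires at least 4.5:1 for normal-size text.
function contrast(a: number[], b: number[]): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

const white = [255, 255, 255];
const darkStop = [0, 51, 153];    // guessed dark end of the gradient
const paleStop = [200, 220, 245]; // guessed light end of the gradient

console.log(contrast(white, darkStop)); // roughly 10.9:1 – passes AA
console.log(contrast(white, paleStop)); // roughly 1.4:1 – fails badly
```

In other words, the same white label swings from comfortably readable to nearly invisible over the course of one animation cycle–which is the whole complaint in a single number.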
Clap Your Hands and Say Yeah!
Unfortunately, the buttons aren’t only limited by their visual appearance. Perhaps my biggest pet peeve of all is the fact that the buttons do not return any feedback once they’re pushed. I don’t know if I’ve hit a button on the screen because, other than the screen eventually changing, nothing indicates that I’ve operated a button successfully. This seemingly small point is so important to the design of a touch-screen point-of-sale system that it cannot be stressed enough.
Because touch-screen displays rely on a very specific type of spatial interaction, the overhead for stimulus-response is inordinately more necessary than if one were simply using a keyboard. Some ideas to repair the response path–which I consider imperative to any good touch-screen design–are actually quite simple:
- Give the interface an audible sound once a button is clicked. Any simple sound, even a chicken squawk, is better than no sound at all. It should be brief and immediate and leave no question that the screen was actually touched.
- Create a “pressed” state for the button after it is clicked. Render it so it looks like it’s been pushed into the interface. This hasn’t actually happened, of course, but the perception that it has will certainly reinforce that an action’s taken place.
These suggestions together create an optimal feedback loop for the user. Unfortunately, both are missing from Wal-mart’s implementation. Audibly and visually, the interface should demonstrate that it’s actively responding to the user’s activity. Failing to do so creates the impression that things may not be working, or not working in the way that I, the user, intend.
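The wiring for those two feedback channels is genuinely trivial, which is what makes their absence so galling. Here’s a minimal sketch of the idea; the `TouchButton` class and its `Feedback` interface are my own illustrative names, and in a real kiosk `playClick` would trigger a short Web Audio beep while `render` would swap in an inset button style.

```typescript
// Sketch of the two feedback channels suggested above: every touch
// immediately flips the button into a "pressed" visual state and fires
// a sound cue, before and independent of any business logic.

type ButtonState = "idle" | "pressed";

// Hypothetical seam for the kiosk's actual audio/visual output
interface Feedback {
  playClick(): void;               // e.g. a brief beep via Web Audio
  render(state: ButtonState): void; // e.g. swap to a pushed-in style
}

class TouchButton {
  state: ButtonState = "idle";
  constructor(private feedback: Feedback) {}

  // Called on touchstart: respond to the user first, act second
  press(): void {
    this.state = "pressed";
    this.feedback.render(this.state);
    this.feedback.playClick();
  }

  // Called on touchend, after the action has been dispatched
  release(): void {
    this.state = "idle";
    this.feedback.render(this.state);
  }
}

// Usage: a fake Feedback records what the user would see and hear
const log: string[] = [];
const button = new TouchButton({
  playClick: () => log.push("click"),
  render: (s) => log.push(`render:${s}`),
});
button.press();
button.release();
// log is now ["render:pressed", "click", "render:idle"]
```

The point of the sketch is the ordering: the pressed state and the click sound fire on the touch itself, so the user gets confirmation even when the screen takes its time advancing.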
The Devil’s In the Details
Slightly less offensive, but no less a wonder, is the attention to detail the system’s designers have put into 3D animation. This kind of unsolicited help appears to be unique to Wal-mart’s self-checkout, and it comes with pitfalls of its own. While I’m certain this on-the-fly training was begun with the best of intentions, I have to wonder if it’s actually more of a hindrance to the user than anything else.
It seems to be trying to communicate spatial relationships. After I was done checking out my items, I decided to pay using my debit card, which requires me to use the credit card terminal located at the far right end of the checkout station (not ideal in itself, but that’s beyond the scope of this post). While the animation appears to be telling me to swipe my card, it’s implicitly also trying to orient me into where to perform this step on the station.
The problem is twofold:
- Showing me how to swipe my card is putting the cart before the horse. Those instructions are already displayed on the credit card terminal as a separate subtask. By focusing on the how, and only indirectly instructing the where, the sequence of events falls out of order and doesn’t necessarily match my goal. I, as a user, now have to deduce the next step from an illustration with layered meaning. That can be time-consuming and frustrating.
- There’s another problem in that the 3D images on the screen don’t quite match the station I’m using. There are similarities, yes, but the rendered station and the actual station look different enough that a one-to-one comparison of the machines seems a necessary adjunct to understanding the animation’s message. It isn’t. All that’s needed is a directional cue toward the credit card terminal–perhaps a color-coded sign on the screen that could be repeated on the physical terminal itself.
Actually, there’s yet another problem: the 3D simulation uses a slow panning effect to suggest areas of focus. This invariably increases the cognitive burden on the user, as the picture moves around suggesting a different location at each turn, while simultaneously increasing the wait time for the overall process. So not only do the 3D animations fail to do what they’ve more than likely been designed to do, they increase the frustration level for both the user and everyone else waiting at the checkout.
I can’t say that this is the worst self-service experience I’ve ever encountered, or ever will encounter as a customer, but I do see it as having the most jaw-dropping impact. Wal-mart, both as America’s #1 retailer and as an international model for business, should be setting the bar much, much higher. Self-checkout could be made so much more efficient and less burdensome to the user, which would ultimately streamline the process as a whole and increase ease-of-use, among other benefits, for the consumer.
Eventually, I’m certain Wal-mart and the other stores will get it right. But until then, a lot of design analysis, observation, and empirical research needs to be collected for self-checkout systems to make their way down the express lane.