Usability testing, the lean way
September 25, 2017
Agile and Scrum methodologies have changed the way software is built, allowing us to plan, develop, and release small chunks of software in continuous cycles, achieving twice as much in half the time. Development teams can finally improve their speed through better internal communication, breaking down silos and delivering application updates more quickly and efficiently.
It sounds amazing, but where is the time for user research if UX practitioners are rushing to feed the development team with new mockups? Where is the 6-months-up-front-pixel-perfect-design-masterpiece if the scope is changing every two weeks and new features and improvements are always re-prioritised by the Product Owner? How can design teams avoid expensive rework because of missing user feedback on their interfaces before going to development?
As designers, product owners, business analysts, and digital product enthusiasts, we are not our users, so we shouldn’t rely on our own assumptions to evaluate whether our app interfaces are ready to be signed off, developed, and presented to our customers.
An incredible number of new products still fail when they’re presented to customers, sometimes even before launch. One of the key reasons is that they don’t address users’ needs and pain points, and this failure is often due to a lack of research and customer feedback.
Since agile/scrum methodologies have always been development-driven, designers are still struggling to find their place in these ecosystems, but the opportunities to deliver more up-to-date, delightful, and easy-to-use digital experiences are there.
Lean versus traditional testing
What are the main differences between traditional and lean user testing? Both techniques have pros and cons, but the main differences are probably related to time effort, what to test, and the deliverables. Think about the resources you have, the outcomes you want to gain from your testing sessions and what you are expected to present to your stakeholders. Only then can you decide which approach best fits your project.
Planning for testing
If you already have an existing app, you would ideally run a complete (not lean) usability testing activity together with other research methods. This will help you benchmark your existing product and identify current pain points, needs, and opportunities to dive into in future iterations.
Ideally this piece of research would come before iteration 0, and should guide the definition of the new information architecture of the app, add and prioritise the first items in the backlog, and flesh out the first design ideas.
Once you define a first design draft based on your initial user research and you want to verify your assumptions, it’s time to start planning your next lean user testing sessions.
A generic schedule could look like this:
- Planning: ½ to 1 full day (script, prototypes, setting up environment)
- Running 6 x 1/2h sessions: 1 full day
- Analysis and implementation: 1 day
You want to make sure to keep the whole testing process under four days, otherwise, it loses its leanness and affects what the design team is able to deliver during the sprint. You want to strike the right balance between uncovering insights from users and actually preparing the final designs for implementation.
How often should user testing sessions be run? Whenever there is something to test! Depending on the duration and schedule of your project, you might want to have fortnightly or monthly lean testing sessions. This approach gives you enough time to design the first drafts and always be ready with a couple of scenarios that you want to evaluate with users.
TIP: If you want to validate your ideas and initial concepts with a design sprint, before even starting to design and develop your product, the last day of your sprint is the best moment to run a lean user testing session.
Who's involved in lean testing
Usability testing revolves around users, but many other stakeholders can get involved and benefit from the results. Through this activity, you can engage your client and team with users and drive your whole environment towards being more user-centred.
From your team
Having a partner in crime to take notes and help with the setup is highly recommended, but you can definitely test by yourself, too.
If you have recruited participants, always run a pilot test with a colleague to make sure the prototype is working, the scenarios are clear, and that you’re prepared for interviewing. You’ll only have 20–30 minutes, so make sure you feel confident about what you’re going to ask beforehand.
Try to involve as many people from your team as possible to act as external observers: this keeps the rest of your team in the loop and minimises the effort of reporting back to them. You can use screen-sharing tools like Hangouts or Lookback to let external people see what’s happening without an awkward crowd in the room.
Participants
It’s always better to recruit real or potential customers. Given the time constraints, use a specialised market research recruitment agency to speed the process up.
Sometimes, though, there is no dedicated budget to professionally recruit people, or participants are hard to find. As Steve Krug says in his bestseller Don’t Make Me Think:
Testing one user is 100 percent better than testing none, even if that’s not the right user.
Find people within your company (but not part of your project team) to act as participants. Looking outside of design and development is also a good idea: HR and Sales are great departments for finding potential candidates, and they’ll probably be keen to get involved and learn more about what your team, and their own company, is working on.
What to lean test
When it comes to what you should test, there is not a huge difference between lean and traditional testing, but you will need to consider how much context you’re giving to the participant before jumping into the test. In lean, you’ll have to shorten the warm-up questions and jump to the real testing as soon as possible.
Since you don’t have time to cover too many topics, you should focus on small chunks of your user interface, similar to a scrum sprint. Your primary goal should be to answer the questions affecting the following month of development.
If the person in front of you knows the service or is a customer already, you can discuss scenarios specific to that product. Otherwise, you might cover more generic areas of your interface, like sign up, payments, or account management, since they’re usually features that people are already familiar with.
Copy is also a great area to validate in these sessions since its goal is to be meaningful straight away. If it’s not easy to understand in a few seconds, then there is something to change.
Architecture and navigation
If you are at the beginning of your project, you can flesh out your information architecture with card sorting. When designing apps, which typically involve many actions the user can perform, card sorting is extremely useful not only to organise sections and categories, but also to define where people expect to find the initial trigger for an activity. Running card sorting in a lab allows you to gather more information on why users organise content in a specific way.
After you’ve run an initial sorting exercise you can then evaluate the architecture of your app by testing the navigation before actually designing the screens. You can show one single screen with the main navigation (or multiple variations of it), and ask the user ‘What do you think this area of the app would allow you to do?’
With qualitative feedback you can better evaluate whether you’re going in the right direction, and also figure out users’ expectations of a specific call to action by asking direct questions such as ‘When you tap this button, what would you expect to see?’
You don’t necessarily need to build what’s behind that button — users can actually guide you before you design!
In the middle of a project, you might want to evaluate one new flow to make sure that every step makes sense and is easy enough for the user to accomplish. You can test with paper prototypes, printed screens, or digital prototypes.
In this case, to maximise the outcome of the test, a prototype with mockups is probably your best option, since it minimises testing friction with people that are not used to interacting with paper sketches or do not fully understand what a wireframe is. Furthermore, if you use final mockups, you will get feedback for visual design, UI interactions, and copy as well. These elements are all structural parts of the experience and should not be neglected during testing.
Set up a prototype with your favourite software (I tend to like InVision) and try to include all the steps of a scenario, from the beginning to the end.
Testing concepts can be extremely valuable for the Product Owner to evaluate and prioritise new features or improvements to the existing product. Simply present one screen to the participant and use it as a trigger for an open conversation. You will be surprised to see how much feedback you can get from a single screen!
TIP: Always start your prototype from the device home screen, letting the users launch the app themselves and navigate through the main screens to find the action you want them to perform. This will make the experience less biased, allowing you to figure out if people can reach your flow in the first place.
Running the test
As mentioned before, try to minimise the introductory part of the session by either sending out a brief to the participants or making sure you recruit people that are already familiar with the product.
If you are running the test together with a peer, define who’s facilitating the conversation and who’s taking notes.
To make sure that the note taker records only the key points of the session, define in advance which outcomes and feedback you want to get out of the testing. It’s always good to take notes in a digital format, to avoid the burden of rewriting or trying to decipher somebody else’s handwriting.
Reframer is a pretty effective web tool to take notes for research purposes. If working with external clients, set up an account for them, so they can go back to the collected raw notes in a single repository.
On the other hand, having a laptop between the facilitator and the participant sometimes creates a barrier that adds friction to the conversation. If you’re managing the session by yourself, consider instead using post-it notes for the most important quotes and annotations.
Rapid Iterative Testing and Evaluation (RITE)
Notes are important for keeping track of what happens during the test, but the goal of lean usability testing is usually to improve design concepts as quickly as possible. That’s why we often test using the RITE method: test with two participants, then iterate on the fly before the following sessions. This lets you evaluate multiple options based on users’ feedback, and potentially finish the sessions with a final version of the prototype to show stakeholders as the outcome of the activity, without lengthy reports and documentation that would take up precious time.
Working with Sketch, InVision and the Craft plugin fits perfectly with this kind of flow. Make sure to include at least a half-hour break between sessions so you can update prototypes or draft the key outcomes while they’re still fresh.
Outcomes and actions
When multiple stakeholders take part in the design process, you want to bring them into the conversation as soon as possible. This means that as soon as you’ve finished the lean usability testing, get back in touch with your stakeholders to present the outcomes.
There are multiple lean ways of presenting back to your stakeholders that you can combine based on your environment.
Face-to-face presentation
This works best when you’re familiar with your stakeholders and they’re already across the product and the testing process. Get everyone in a room, ideally the day after the sessions while the feedback is still fresh in your mind, and present the initial prototype alongside the new designs from the RITE process. No slide decks, no frills, but it’s incredibly effective at showing speed and reactivity to managers. In the end, what they want is the product, and you’re giving it to them straight away.
Email with bullet points
When you can’t arrange a meeting straight away, it’s still good to get people involved with a quick email. Include one summary list of the outcomes and another for the recommendations and changes you applied to the interface, attaching both the initial prototype and the updated one. Most of the time managers are completely fine with this and don’t ask for more.
Slide deck
If your audience is not 100% familiar with your approach and/or the product you’re working on, provide some more documentation in a slide deck. Annotated screenshots are pretty effective since they provide both context and testing feedback at the same time. Always get straight to the point without using jargon, and try to include recommendations on the same slide. Add quotes to provide more insight and create more empathy with users.
Testing video footage
Videos are extremely powerful to introduce the rationale behind certain design choices and to provide explanations for stakeholders who are not familiar with usability testing. Keep the clips short, not more than 2 minutes, and make sure not to give any clues as to who the participants are (especially when testing with internal staff).
Maximise outcomes while minimising effort
Usability testing is one of the most valuable activities during digital product development, both in terms of making the right design choices and keeping an active conversation with all the parties involved during the process.
Every project is different, and every session can be different from the previous one. The more you experiment with it, the more creative ways of testing you are going to find.
We want to maximise outcomes and minimise effort. Lean testing is a great option if you work in an agile environment without much time or budget for long usability testing activities, but you still want to validate your assumptions.
Try to always present and implement designs only after you’ve tested them, even with just one person. As mentioned before, testing with one is still better than none.
This will give you a true rationale and the confidence that you’re leading your product in the right direction.
Author: Mario Carabotta, User Experience Designer