The purpose of benefits love is to change the way people use benefits, helping them focus on the things they care about in life. That goal reinforces our commitment to human-centered design in product development. It’s not just about seeing what our technology can do; it’s about the value that technology brings to people’s lives, and about delivering that value in the most useful way.
As our CEO Prashant Srivastava has described, we approach this by walking in the shoes of the end user to determine which products will truly help them achieve their goals. But even the right products won’t accomplish much if the experience doesn’t make sense. If the experience isn’t intuitive and doesn’t align with how the person thinks, meaningful outcomes disappear, and so does benefits love.
That’s why we find it critical to pilot our product experiences with unbiased testers. There are many ways to go about this, but we’ve chosen a specific method to ensure better experiences for everyone involved.
Focusing on the why
Before we did anything in our initial round of user testing, we reviewed the overall goal of the product. What exactly were we trying to achieve? What specific problem were we trying to solve? The product we wanted to test was Evive Plan Choice, whose intent is to help people choose the optimal health plan for themselves and their families, and to help them avoid misinformed decisions in the typically complicated process that is open enrollment.
The next step in user testing is to decide whether you’re seeking qualitative or quantitative research. Quantitative research focuses on objective numbers that ultimately test a particular idea or theory; qualitative research is about the subjective interpretations and reactions that shape the experience. For this product pilot, we were interested in qualitative research, since this was a new application that relied heavily on users’ willingness to complete three quick steps. So we chose a user testing platform that could not only give us that data, but also recruit participants and show results within an hour.
It was important to learn which details of the user interface were moving people forward in the process, and which were confusing them. In short, we needed to understand why people reacted, or didn’t react, in certain ways. This would ultimately help us learn which tweaks to make so people could get to the finish line (and reap the benefits of Plan Choice).
Prioritizing what to test
Unfortunately, you can’t just have a group of users aimlessly explore the app and see what they find, hoping they share helpful insights—because that’s all it will be: hoping. User testing needs to have intentional structure so you can make sure you walk away with the insights you want.
When we user tested Plan Choice, we carefully designed the experience with non-leading questions around measurable actions that would help us assess:
- Discoverability of key features: Were people finding important features easily?
- Clarity of content hierarchy: Was it clear which features were more important than others?
We assessed these things through a series of pass/fail tasks, written and spoken answers, and a post-test survey about the overall product experience. These findings led us to key design and text updates, including how we visualize in-network doctors and how we phrase the “choose plan” button (which became “enroll in this plan” for more clarity). One observation in particular that informed some changes was that users were often just reading headlines and not all of the text under them, which isn’t surprising given that users read at most about 28% of the words on an average page.
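To make the pass/fail approach concrete, here’s a minimal sketch, in Python, of how results like these could be tallied across test sessions. The task names, session data, and tally logic are purely illustrative assumptions for this post, not Evive’s actual test plan or tooling.

```python
from collections import defaultdict

# Hypothetical pass/fail results from five test sessions.
# Task names are illustrative; they mirror the two questions above.
sessions = [
    {"find_in_network_doctors": True,  "identify_recommended_plan": True},
    {"find_in_network_doctors": False, "identify_recommended_plan": True},
    {"find_in_network_doctors": True,  "identify_recommended_plan": False},
    {"find_in_network_doctors": True,  "identify_recommended_plan": True},
    {"find_in_network_doctors": False, "identify_recommended_plan": True},
]

# Tally passes per task to spot discoverability and hierarchy problems.
passes = defaultdict(int)
for session in sessions:
    for task, passed in session.items():
        passes[task] += passed  # True counts as 1, False as 0

for task, count in sorted(passes.items()):
    rate = count / len(sessions)
    print(f"{task}: {count}/{len(sessions)} passed ({rate:.0%})")
```

A low pass rate on a task like this is a prompt for the “why” questions above, not an answer in itself; the qualitative follow-ups tell you what to change.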
Some of the questions we asked helped us understand the users’ thought processes. For instance, we asked how they currently select their health plan: 8 out of 10 said their decisions were based on cost. Later, when one large client used the product for its open enrollment, 72% of users gave the same answer.
The right mindset for interpretation
We went into user testing knowing we couldn’t act on every piece of feedback in isolation. It was critical to review everything with the big picture in mind, and to see where some comments might conflict with one another.
On that note, we knew it was important to avoid writing something off right away. For example, just because someone reports confusion on one feature doesn’t necessarily mean there’s something wrong with that feature. The source of the problem might, in fact, be the previous stage of the experience that brought the person to this element. Maybe that stage set up poor context, and they’re already frustrated with the experience by the time they arrive at the next stage.
We also tapped into our research on other consumer experiences to help assess our findings. We recognize the danger of staying in our own bubble, so we made a point of studying the aesthetics, messaging, and journeys of other consumer experiences, even (and especially) those in other industries. Some questions we asked users were: What sites or apps make an experience feel intuitive to you? Why has that become the common way to do it? Are there alternative approaches? After all, the target users of Plan Choice are consumers of other products, too! In the spirit of making healthcare consumerism easy and seamless like other consumer experiences, we kept this perspective top of mind. Clients and users alike have told us they want an intuitive experience, and we believe intuitive experiences are familiar ones.
It’s just the beginning
We’re proud of the results we’ve gleaned from user testing so far, and plan to continue it as we create—and enhance—other applications.
As eOS keeps fueling new solutions, we’ll keep testing them with people who can give unbiased views into the experience. Ultimately, it’s about making the right moves in product development that will bring every person closer to benefits love.