Nevro Rapid Prototyping
Using Rapid Prototypes to Explore and Test New & Unique Interaction Models with Patients
Redefining Patient Control in Pain Management
This case study details the user experience design for the Nevro Patient Therapy App (PTA) and Pathfinder applications. The goal was to empower patients with an intuitive, data-driven tool to manage their HF10 implant therapy, transforming a complex medical device into an accessible digital companion.
The Challenge
Design two distinct mobile apps (a manual PTA and an automated Pathfinder) to give patients seamless control, track their progress, and optimize their pain relief therapy.
My Role
As the UX Designer, I was responsible for the entire UX process, including creating the information architecture, user flows, wireframes, and detailed interaction documentation.
The Process
An iterative, sprint-based approach focused on IA definition, user flow mapping, wireframing, and refinement based on stakeholder feedback and user assessments.
The First-Time Experience: Onboarding & Pairing
A user's first interaction is critical. The onboarding process was designed to be a secure, reassuring, and step-by-step guide to get patients set up successfully. This flow covers everything from legal agreements and device compatibility checks to personalization, ensuring the user feels confident and in control from the very beginning.
EULA & Phone Compatibility
The flow begins with the necessary legal acceptance. The design ensures the user can't proceed without agreeing, using a clear modal to reinforce this requirement. Immediately following, a background check confirms the user's phone is compatible, preventing future technical frustrations.
EULA Screen
Decline Confirmation
Phone Compatibility Test
A background check runs to ensure the phone supports the required version of Bluetooth Low Energy (BLE) and is not on a known blacklist. This preemptive step is crucial for a stable user experience. If a phone is incompatible, a clear alert is shown.
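The gate can be sketched as a simple pre-flight check. The model names, blacklist contents, and minimum BLE version below are illustrative assumptions for the sketch, not Nevro's actual values:

```python
# Hypothetical compatibility gate; REQUIRED_BLE_VERSION and the
# blacklist entries are illustrative, not the app's real values.
REQUIRED_BLE_VERSION = 4.2              # assumed minimum BLE version
BLACKLISTED_MODELS = {"PhoneX-2015"}    # known-incompatible handsets (example)

def is_phone_compatible(model: str, ble_version: float) -> tuple[bool, str]:
    """Return (compatible, reason) so the UI can show a specific alert."""
    if model in BLACKLISTED_MODELS:
        return False, "This phone model is not supported."
    if ble_version < REQUIRED_BLE_VERSION:
        return False, "Bluetooth Low Energy 4.2 or later is required."
    return True, ""
```

Returning a reason string alongside the boolean lets the alert explain *why* the phone failed, rather than showing a generic error.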
Discover & Pair
This multi-step process guides the user in making their implant discoverable and pairing it securely with the app. The flow accounts for success and multiple failure scenarios, ensuring the user is never left without guidance.
Discover Mode Instructions
Device Found
Pairing Success
Pairing Error
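The branching across these screens amounts to a small state machine. The state and event names below are assumptions made for the sketch, not the shipped implementation:

```python
# Illustrative pairing state machine covering the success path plus
# the failure/retry scenarios described above. Names are hypothetical.
TRANSITIONS = {
    ("discovering", "device_found"): "confirming",
    ("discovering", "timeout"):      "error",
    ("confirming",  "pair_ok"):      "paired",
    ("confirming",  "pair_failed"):  "error",
    ("error",       "retry"):        "discovering",
}

def next_state(state: str, event: str) -> str:
    # Unknown events leave the user where they are rather than dead-ending,
    # so the flow never strands the user without guidance.
    return TRANSITIONS.get((state, event), state)
```

Modeling the flow this way makes it easy to verify that every failure state has a path back to discovery.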
App Personalization
To make the app feel like a personal health companion, the user is prompted to provide their name, confirm the pain areas their therapy targets, and select aspirational goals—activities they want to get back to doing.
Notification Settings
Notifications are essential for reminders and alerts. This step uses a clear value proposition before triggering the native OS prompt, increasing the likelihood of user acceptance.
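This two-step "pre-prompt" pattern can be sketched as follows; the function and callback names are illustrative, not the app's actual API:

```python
# Sketch of the pre-prompt pattern: the native OS permission dialog is
# only triggered after the user opts in on the in-app value-proposition
# screen. Names here are assumptions for illustration.
def request_notifications(user_accepts_preprompt: bool, trigger_os_prompt) -> str:
    if not user_accepts_preprompt:
        # Declining in-app costs nothing: the one-shot OS prompt is never
        # consumed, so the app can make its case again later.
        return "deferred"
    return trigger_os_prompt()  # assumed callback into the native layer
```

The key property is that a soft "no" is recoverable, whereas a "no" on the native prompt is hard to undo without a trip to system settings.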
App Security
To prevent accidental therapy changes, the final onboarding step is to secure the app. Users can choose between a simple PIN or their device's native biometric authentication (Face/Touch ID).
Daily Use: Core Functionality
The core of the application revolves around the home screen, where patients can easily view their status, turn stimulation on or off, and adjust their therapy settings. The design prioritizes clarity and immediate access to the most frequent tasks.
Stimulation Control
A simple, accessible power button opens a drawer for turning therapy on or off, a deliberate action to prevent accidental changes.
Home Screen States
The home screen uses a dynamic status card system to show the most relevant information, such as the current program being tested, an active favorite, or a prompt to take a survey.
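The card-selection logic can be illustrated as a simple priority ladder; the ordering and state names here are assumptions for the sketch:

```python
# Hypothetical priority ladder for the dynamic status card: the home
# screen shows the single most relevant card for the current state.
def home_status_card(state: dict) -> str:
    if state.get("program_under_test"):   # a program is currently being tested
        return "program_test"
    if state.get("survey_due"):           # patient has a survey to take
        return "survey_prompt"
    if state.get("active_favorite"):      # a saved favorite is running
        return "active_favorite"
    return "default"
```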
Battery & Connectivity
Critical system states like low battery or a lost Bluetooth connection are communicated through clear in-app alerts and system notifications.
Feedback Loop: Interactive Surveys
Patient feedback is the cornerstone of therapy optimization. The survey module was designed to be quick, intuitive, and engaging. It captures key metrics on pain relief, specific pain scores, activity levels, and medication use, which directly fuel the Pathfinder algorithm or inform the patient's progress in the PTA app.
Static Pain Relief Scale Representation
A static representation of the Pain Relief survey question (sample response: “Very Good”).
Visualizing Success: Progress & History
Empowering patients means giving them access to their data in a way that's easy to understand. The "My Progress" section provides multiple views—a graph, a body map, and a program history list—to help patients see trends and understand the effectiveness of their therapy over time.
Pain Relief Over Time (Static)
Body View
Maps daily pain scores onto a body diagram for at-a-glance insights.
Program History
A detailed log of every program used and its duration, which can be saved to Favorites.
Advanced Control: Settings & More
The Menu section houses all secondary features and settings. This includes critical, less-frequently-used functions like MRI Mode, as well as options to re-personalize the app, manage notifications, and access help resources.
MRI Mode
A safety-critical feature with clear warnings.
Notification Controls
Fine-tune alert frequency and timing.
App & Device Info
Access technical details and sync status.
Contact & Support
Direct access to support teams.
Introduction
A Case Study on the complete UX Redesign of EP SmartStart’s core features — leveraging data-driven insights learned from the UX Research phase that wrapped up just one month prior.
Goals
We wanted to reduce friction and completely redesign the two most heavily used screens in SmartStart: the Roster List and the Crew Drill Down. The Roster List contains Crew Member Offer Cards with top-level details, actions, and calls-to-action for each offer; the Crew Drill Down details every single term within that offer. Based on the research, I focused most of my efforts on the following friction points:
A. Search, Filter and Sort UI + Microinteractions
B. Iconography and Approval Chain Status
C. CTA (Call-to-Action) Buttons
D. Inconsistent Information States
Summary of Research Efforts
The research methodology used to gather user feedback and pain points on the existing SmartStart platform included moderated interviews and the creation of an experience map. EP contracted with a 3rd party — Philosophie — to run these activities.
Moderated Interviews: Researchers conducted a total of 39 moderated interviews with EP stakeholders, client users, and crew users to understand the issues with the customer experience with SmartStart, SmartTime Classic, SmartTime Mobile, and Support.
EP Stakeholders (12)
Client Users (19)
Crew Users (9)
Experience Map: Based on the interviews and product walkthroughs, an experience map was created to document the end-to-end customer experience, which helped to identify key pain points and opportunities for improvement.
Please note: While I didn’t run the research phase of this effort, I sat in on many of the interviews and used the findings prepared by Philosophie and our UX Researcher to prioritize issues and ultimately inform my design decisions.
My Responsibilities
Redesign Roster List and Crew Drill Down
Leverage research insights to resolve friction points outlined above and inform design decisions
Apply UX best practices
Re-think the overarching interaction model and navigation within and between these two screens
Optimize underlying information architecture
My Team
Sr. Product & UX Designer (me)
Product Manager
Product Owner
Developers & Engineers
QA Engineer
Stakeholders
Current Design
Figure 2: (Left) Roster List. (Right) Crew Drill Down - View Full Page
UX Priorities for Redesign
Figure 3: UX Priorities for (Left) Roster List and (Right) Crew Drill Down.
A. Search, Filter and Sort UI + Microinteractions - Placing the Filter bar above the Search field breaks typical information hierarchy; the two should be reversed. Additionally, the search, filter and sort functionality can be optimized to take up less space, and its microinteractions should be streamlined.
B. Iconography and Approval Chain Status - The icon collections used in different places within SmartStart negatively impact usability because:
They are ambiguous and lack static labels.
They rely upon tooltips to convey meaning — which takes time and is ineffective because the user has to hover, wait, read, and repeat for each icon.
Color alone is being used to convey different statuses for some of the steps in the approval chain, which also carries a cognitive cost and runs counter to accessibility guidelines (which advise never relying on color as the only means of conveying information).
C. CTA (Call-to-Action) Buttons - The number and variety of CTA buttons (some shown above in the screenshot, and more to the right) is bad for usability because:
Their dark, saturated colors give each of them a lot of visual weight.
There is often more than one on a screen, which means they’re all competing for attention and drawing the user’s gaze to different sections of the screen all at once.
Their lack of proximity to one another makes it more difficult to compare, contrast and evaluate the full list of options for any given context.
D. Inconsistent Information States - Information is presented differently between the Roster List and Crew Drill Down screens, which means the user has to re-orient themselves to what information is displayed — and where — every time they click/tap back and forth between them.
Figure 3a. Variety of CTAs
The Process
The process for this and similar projects typically followed these steps:
Sprint 0: UX Research and Requirements Gathering
Sprints 1 – X Iterations:
Whiteboard & Sketching
Wireframe
Prototype
Weekly or Bi-weekly Reviews for Feedback
Frequency depends on how far along in the process a given feature is.
Generally involves product owner, product manager, other stakeholders, developers, and QA engineers.
Repeat
Milestones: User Testing takes place periodically throughout the design process, with frequency dependent on budget, user availability, and other factors.
Figure 4. Crew Offer Cards
Created a card header that’s now visually distinct from the rest of the card. It contains the status flag selector (with a caret icon added to convey that it’s actionable) and the crew member’s name in the top-left, and all primary/secondary CTAs (filled for primary, outlined for secondary) plus the actions dropdown in the top-right.
Divided information within the card into three columns, each with a subheader and created a grid structure to improve alignment and spacing of data points.
Removed the offer status icons — previously horizontal — and provided a checklist-style of icons/labels in the second column under “Offer Status,” with each row containing either a step status icon and step name, or an approver.
Up to five approvers are listed individually by role and name(s), and if there are more than five, links appear in the subheading and list itself allowing the user to “Show More Approvers.” This list would dynamically expand in place, increasing the height of the card itself on the screen.
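The “up to five, then collapse” rule amounts to a small truncation helper; the names below are illustrative, not the product's actual code:

```python
# Hypothetical truncation rule for the approver list: show up to five
# approvers inline; beyond that, collapse behind "Show More Approvers".
MAX_VISIBLE = 5

def visible_approvers(approvers: list[str], expanded: bool) -> tuple[list[str], bool]:
    """Return (rows to render, whether a 'Show More Approvers' link is needed)."""
    if expanded or len(approvers) <= MAX_VISIBLE:
        return approvers, False
    return approvers[:MAX_VISIBLE], True
```

Expanding in place (rather than navigating away) keeps the user's position in the Roster List intact while the card simply grows taller.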
Explored and introduced a collection of four standardized status icons that more efficiently communicate the status of each step.
In the “Key Information” column on the right, kept the use of the warning placard icon, but changed its default fill color from red to marigold, with red now used only for “immediate action needed” data points.
Design Iterations
1st Iteration
Figure 5. Crew Offer Cards - Action Menu Open
Figure 8. A collection of cards in a Roster List — each illustrating a potential permutation based on its offer status, number of approvers, etc.
1st Round of User Testing
Because the redesign involved some fairly major changes, from the reorganization of information to the removal of the previously prominent row of circular icons, it was important to get feedback from actual users on productions in the real world.
Insights
While users liked the consolidation of the CTAs, the reduction of “seeing red” from the harsh alert icons, and the overall thought put into how information was organized, the “Offer Status” column didn’t feel intuitive to them.
Key feedback on that last point was that showing both name AND role for approvers wasn’t necessary, as production staff typically had everyone’s name and role memorized early on.
Additionally, some users missed the row of approver icons that had been removed, but after additional conversation to get more specific feedback, they admitted the icons were unintuitive and that at least having each approver STEP’s role name in text form was more helpful.
Three points I took from this — which I also confirmed through several conversations with team members and users:
Representing an approval chain visually was important.
Offer Status and the Approval Chain didn’t necessarily have to be intertwined into one long list of steps; in fact, the two progressions could potentially be represented separately.
Production staff typically had everyone’s name and role memorized early on, and they could quickly recall who a person was just from their initials.
2nd Iteration
Approval Chain Visual Exploration
Most approver chains represented visually follow the convention of circles with two-letter initials inside them, more or less borrowed from social media sites’ use of circular avatars showing a person’s initials in the absence of an actual photo. Instead of random icons whose meanings were ambiguous at best, initials would at least be more usable.
I did some visual explorations of this concept, with circles evolving into rounded rectangles as I pressure-tested different permutations for spacing, scannability, legibility, and even how to represent groups at an approval level (when more than one person at a given level is designated as an approver, but approval is only needed from one within that group). The progression is illustrated in Figure 9.
Figure 6. Crew Offer Cards - Approver Details Shown
The number of approvers in an approver chain on any given production varies. Typically there are 3 to 4, but some studios/production houses (e.g., Disney) have up to 15. It was important to design cards that could scale to support this requirement, while still allowing the user to see more than one card at a time on the screen.
Figure 7. Iconography and sizing recommendations to visual design for the Offer Status and Key Information Line Items
Figure 9. Progression of approval chain visual exploration, which can be summarized by the following points:
Circles are more difficult to parse than squares or rectangles, because each circle’s edge and its interplay with its neighbors is cognitively more costly than rectangular edges that fall into a predictable grid pattern.
Enumerating groups AND all their respective members quickly gets out of hand; the sheer number of rectangular icons becomes untenable.
Using a group icon would be acceptable, so long as there’s a way to see members of that group when needed.
Integrating Visual Approver Chain
Figure 10. At first, I dropped the visual approver chain into the Offer Status column.
Figure 11. Illustrating how 15 levels of approvers would look.
Figure 12. Separating the approver chain from the offer status allowed for more flexibility and efficient distribution of information on the card. Additionally, the checkmark icon was added to the approved state of each approval level’s button, since color alone should never be relied on to communicate state (success, fail, etc.).
Also added “Missing Uploads” functionality in the Offer Status column, and made both it and “GL codes” able to be dynamically shown and hidden, so that users don’t have to click into the Crew Drill Down screen for that information.