I was approached by a product owner to conduct user research on an early design for a new product — Licensing — used by childcare providers and administrative staff to issue, maintain, and manage the licenses required for a facility to legally operate. A core workflow in this system is creating and managing correction orders, which Licensing Specialists use to document health and safety violations uncovered during onsite inspections, ranging from sanitation issues to staffing concerns.

Correction orders are central to Specialists' work, yet the client's current system makes issuing and managing them frustrating, inconsistent, and error-prone. Because this workflow is both high-stakes and historically painful, the redesigned experience needed to better support Specialists' real needs while reducing the friction they face today.

Goals and success criteria

The team's primary goal was to determine whether the redesigned correction order workflow effectively supported the needs and mental models of Specialists. As I learned more about the limitations of the legacy system, I also became increasingly interested in whether a more intuitive experience could influence how Specialists approached creating and managing correction orders in their day-to-day work.

To determine whether the designs were meeting those goals, I evaluated:

  • Clarity of workflow: Specialists can easily see where to begin, how to move through the process, and how to complete core tasks.
  • Reduced friction: The redesigned workflow removes the pain points that previously made issuing correction orders feel burdensome or impractical.
  • Alignment with mental models: The structure and terminology match how Specialists think about violations, documentation, and follow-up, minimizing disconnects between process and technology.
  • Signals of potential behavior change: Specialists indicate that the updated experience could make correction order tasks feel more manageable and approachable.

Approach

After establishing the study's goals and success criteria, I created a research plan to align the team on scope, methods, and expectations. With that foundation in place, I moved into selecting the appropriate methods for evaluating the redesigned workflow.

Methods

To choose the right data collection methods, I considered the goals of the initiative, the stage of the designs, and the access I had to Licensing Specialists. Because the team needed to validate whether the proposed workflow supported real inspection practices, an evaluative approach made the most sense. I opted for moderated, synchronous sessions so I could observe Specialists' reactions in real time and probe when something didn't align with their expectations.

Given the population of 20 Specialists and the scope of the study, a small, focused sample was appropriate. I structured the sessions as a mixed-methods usability study that combined task-based evaluation with qualitative interview questions to capture both behavioral and attitudinal insights.

Participants and recruitment

I initially planned to recruit Specialists who had some early exposure to the new designs. A few unconventional UI patterns in the designs could have been difficult to navigate without context, so this felt like a reasonable starting point. However, once I learned that only three Specialists had seen the designs, it became clear that recruiting solely from that group would have produced an unrepresentative sample.

In response, I shifted to recruiting Specialists with no prior exposure to the redesign. This ensured the findings reflected the broader population and allowed me to observe how first-time users interpreted the workflow.

To ensure I still captured meaningful data, I adapted my facilitation strategy: I provided more explicit task instructions and guided participants past task failures when needed, prioritizing insight over strict adherence to usability-testing convention.

With that adjustment in place, I used a brief pre-screener to confirm that participants met the following criteria:

  • At least one year of experience as a Licensing Specialist, ensuring they had meaningful exposure to issuing and managing correction orders
  • No recent experience working at a licensed Provider and no experience as a Licensing Supervisor, avoiding cross-role bias
  • No prior exposure to the redesigned workflow, ensuring the sample reflected the broader Specialist population

These criteria ensured that participants represented the real-world users of the workflow and could provide grounded, experience-based feedback on the redesigned flow.

Data collection

I conducted five one-hour usability sessions over Microsoft Teams using a Figma prototype. Each session followed a consistent structure designed to build rapport, observe behaviors, and probe into participants' expectations and reactions in real time:

  1. Warm-up and context setting (15 minutes): I opened each session with brief introductions and role-oriented questions to help participants feel comfortable and to understand their responsibilities, attitudes toward correction orders, and how they currently manage them.
  2. Usability tasks (30 minutes): Participants completed five tasks that represented the core steps of the correction order workflow. I opted for a low-fidelity prototype to focus the study on information architecture, navigation, and flow; I asked participants to think aloud so I could explore moments of confusion as they arose; and I followed each task with a short set of scripted questions to capture perceptions of clarity, difficulty, and alignment with real-world processes.
  3. Closing reflections (15 minutes): I asked participants to reflect on the workflow as a whole — what worked, what didn't, and what they expected to see that wasn't present.

After all sessions were complete, I invited participants to fill out an optional, anonymous satisfaction survey, which I use to refine my moderation style and improve future participant experiences.

Analysis

Moving into analysis, my first step was to bring structure to the raw data I had amassed. After transcribing each session recording, I extracted comments, quotes, nonverbal reactions, and observations relevant to my research questions. With the data in text form, I selected the following complementary analysis methods to understand both what participants did and how they felt doing it:

  • Task performance analysis to evaluate how participants navigated each task, which tasks were easiest, and where they encountered friction
  • Sentiment analysis to identify trends in their emotional response to key features within the workflow
  • Affinity diagramming to synthesize patterns across behaviors, expectations, and feedback

Together, these methods revealed which parts of the workflow were intuitive, which consistently caused confusion or frustration, and where participants' expectations of system behavior weren't met.

Key findings

Before creating design recommendations, I synthesized the study's results into four key findings that highlight where the prototype supported Specialists' needs and where critical gaps remained.

1. Specialists experience significant friction and mistrust in their legacy system

Although evaluating the legacy system wasn't a formal goal of the study, participants referenced it frequently when interacting with the new designs. They described their current correction order workflow as slow, cumbersome, and unreliable, with failures in notifications and status tracking that forced them to maintain parallel spreadsheets and manually verify information. This lack of trust made the process feel daunting and often led to incomplete correction orders, or to skipping documentation of a violation altogether — reinforcing the hypothesis that a more usable, reliable system could meaningfully change their behavior.

2. Automation and clearer workflows in the new design reduce effort and support more consistent use

Participants responded positively to the automated features in the redesign, noting that the system handled much of the repetitive data entry that previously consumed their time. Even when certain workflows felt unfamiliar at first, they quickly navigated them more confidently in subsequent tasks, suggesting that the system's visual hierarchy and patterns supported learnability. These improvements indicated that a faster, more intuitive experience could encourage Specialists to issue correction orders more regularly.

3. The system's information architecture caused confusion and frequent navigation errors

Despite the improvements above, participants struggled with how some features were organized — particularly when two navigation bars appeared at once for different contexts. This dual-navigation pattern led to repeated missteps and task failures, signaling that the system's structure did not align with Specialists' mental models of how their work should flow.

4. Specialists expect integrated tools for Provider communication and support

Across sessions, Specialists expressed a clear expectation that the system should support ongoing interactions with Providers throughout a correction order's lifespan — a capability absent from the prototype. They looked for features that would streamline communication, reduce reliance on external communication channels, and provide visibility into what Providers see in the system. Many noted that the legacy system obscures critical information, making it difficult for Providers to understand what they need to do and when. These gaps underscored the need for the platform to function as a shared, transparent workspace between Specialists and Providers.

These findings clarified where the prototype aligned with Specialists' needs and where structural, workflow, and communication gaps remained. The following design recommendations focus on addressing the most critical barriers to task success and supporting end-to-end collaboration between Specialists and Providers.

Design decisions

Grounded in the insights from this study, I designed 15 recommendations aimed at reducing friction, strengthening Specialists' mental models, and improving the overall usability of the system. I ranked these using an Impact/Difficulty matrix, favoring changes that would meaningfully improve Specialists' ability to navigate the system, understand context, and complete core tasks. While some recommendations addressed efficiency or “nice-to-have” enhancements, the highest-impact items centered on information architecture issues that were actively undermining task success.
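To make the prioritization logic concrete, here is a minimal Python sketch of how an Impact/Difficulty ranking can be expressed. The recommendation names and scores below are hypothetical stand-ins, not the study's actual items: the point is only that high-impact work is surfaced first, with lower difficulty breaking ties.

```python
# Illustrative sketch of Impact/Difficulty prioritization.
# Items and scores are hypothetical examples, not the study's real backlog.
from dataclasses import dataclass


@dataclass
class Recommendation:
    title: str
    impact: int      # 1 (low) to 5 (high)
    difficulty: int  # 1 (easy) to 5 (hard)


def prioritize(recs):
    # Highest impact first; among equal-impact items, easier changes first.
    return sorted(recs, key=lambda r: (-r.impact, r.difficulty))


backlog = [
    Recommendation("Add confirmation feedback", impact=4, difficulty=2),
    Recommendation("Collapse Specialist nav in Provider context", impact=5, difficulty=3),
    Recommendation("Polish iconography", impact=2, difficulty=1),
]

for rec in prioritize(backlog):
    print(rec.title)
```

In practice the matrix was a collaborative whiteboard exercise rather than code, but the same ordering principle applied: structural fixes that unblocked task success outranked cosmetic improvements regardless of how easy the latter were.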

Deep dive: Fixing the double-navigation issue

When Specialists navigated into a Provider's dashboard, a second navigation bar appeared at the top of the screen. This top nav contained links relevant only to the current Provider, while the persistent left nav contained administrative tools.

A simple wireframe showing the old double-navigation pattern.
The Specialist navigation is on the left in purple. The Provider navigation is across the top in pink.

The sudden appearance of a second navigation bar — with both menus containing overlapping or similarly named items — created confusion and accounted for a disproportionate share of task failures. Specialists weren't always sure which menu controlled which part of the system, and the competing navigation structures made it harder to understand where they were in the workflow.

Recommendation

I recommended collapsing the Specialist navigation whenever a user enters the Provider's context. This approach:

  • Minimizes visual and attentional competition between the two navigation systems
  • Signals a clear shift in context (“I am now inside this Provider's space”)
  • Prevents duplicate or overlapping menu items from competing for attention
  • Preserves access to administrative tools without making them visually dominant

To support recognition when collapsed, I added icons to each Specialist navigation item. Over time, Specialists may build familiarity with these icons, making the menu easier to navigate even when minimized. Hovering over the collapsed navigation expands it as a labeled overlay on top of the Provider's context, signaling that clicking there will exit that context, so recognition never depends solely on iconography.
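The interaction described above can be summarized as a small set of state transitions. This is an illustrative sketch only (field and function names are my own, not the product's code), showing how entering a Provider's context collapses the Specialist navigation and how hovering expands it as an overlay:

```python
# Illustrative sketch of the collapse-on-context navigation behavior.
# Names are hypothetical; this is not the product's implementation.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class NavState:
    in_provider_context: bool = False
    specialist_nav_collapsed: bool = False
    label_overlay_open: bool = False


def open_provider_dashboard(state: NavState) -> NavState:
    # Entering a Provider's space collapses the Specialist nav,
    # signaling the context shift and ceding visual priority.
    return replace(state, in_provider_context=True,
                   specialist_nav_collapsed=True, label_overlay_open=False)


def hover_specialist_nav(state: NavState) -> NavState:
    # Hover expands labels as an overlay without leaving the Provider context.
    if state.specialist_nav_collapsed:
        return replace(state, label_overlay_open=True)
    return state
```

Modeling it this way makes the key design choice explicit: the overlay expansion is reversible and never navigates, so Specialists can check the labels without losing their place in the Provider's workspace.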

This solution reduces cognitive load while reinforcing the contextual cues Specialists rely on to understand where they are in the system — two issues that surfaced repeatedly in the study.

Expected impact

  • Fewer navigation errors
  • Reduced frustration, especially for first-time users
  • A clearer sense of “where I am” in the system
  • A stronger first impression for new users, bolstering adoption
  • Better alignment between system behavior and user expectations

While business constraints prevented further validation with participants, the logic behind this solution resonated strongly with stakeholders who work closely with Specialists.

Additional high-impact recommendations

I'll briefly highlight two additional recommendations that had meaningful impact:

Supporting communication between Specialists and Providers

Specialists expected certain communication features that were missing from the prototype. I recommended adding lightweight but meaningful features to streamline interactions with Providers while reducing reliance on external channels. These additions aim to help Specialists manage the full correction order process while maintaining a centralized, system-based record of communication.

Adding system feedback to build trust

Specialists often weren't sure whether the system had completed an action — for example, whether creating a correction order automatically notified the Provider. I recommended adding explicit feedback mechanisms (such as confirmation modals) to reassure Specialists that the system was behaving as expected. This matters because feedback fosters trust, and trust is essential for reducing off-system workarounds they're accustomed to with the legacy system.

Implementation

Once the recommendations were validated, I worked closely with the product designer through a series of collaborative design sessions in Figma. He brought deep knowledge of system constraints; I brought the research and rationale. Together, we shaped solutions that maximized impact without over-investing in polish — very much in line with my philosophy of prioritizing clarity, usability, and ease of use over aesthetics.

Impact

The most immediate impact of this work was the implementation of several high-value recommendations — most notably the improvements to navigation and system feedback — which bolstered the correction order workflow's usability before it ever reached development. But the deeper impact came from how the research reshaped the team's understanding of Specialists themselves. Many stakeholders were surprised by how strongly Specialists avoided working with correction orders due to the limitations of their legacy system, and this new clarity sparked broader conversations about communication, notifications, and the back-and-forth nature of the workflow. The study also led to the creation of a functional user archetype (a role-based alternative to a persona that emphasizes shared responsibilities over demographics), giving the team a grounded reference point for future design decisions and helping reduce reliance on assumptions or stereotypes.

On a personal level, this project strengthened my ability to navigate ambiguity with confidence. I entered unfamiliar territory — new users, new workflows, and a product still taking shape — and emerged with a deep, evidence-based understanding of a complex domain. The experience reinforced my belief that even in challenging environments, a disciplined UX process can cut through uncertainty and create clarity, alignment, and meaningful change.