Senior Product Designer, 2020-2023. As Lead Product Designer for Patient Experience + Growth, oversaw the launch of the Patient Surveys Platform and led design for onboarding and the core user experience. Later led product & service design for our B2B clinical research platform.

My role

  • Oversaw the launch of PicnicHealth’s first enterprise web app
  • UX Research, Product Strategy, Prototyping, UX/UI Design, Data Visualization
  • Led a series of Design Sprint workshops to drive alignment among key functional leaders


Impact

  • Increased data quality through improved efficiency + precision of QC processes
  • Decreased “Time to Insight” for customers and internal stakeholders
  • Launched PicnicHealth’s first enterprise application, enabling a host of new enterprise products and services


As a data vendor, one of the most important success metrics for PicnicHealth was “Time To Insight,” or how quickly a user can go from asking a question of our data to getting an answer.

This capability — to answer questions quickly and effectively — was absolutely critical, not only for the researchers working with our datasets, but for every single team at PicnicHealth involved in producing them.

For a long time, questions like these had been channeled through our in-house analytics team, and our talented data scientists had managed to keep up. But as our customer base grew, it became increasingly clear that this was a bottleneck. Critical components of our operation, such as quality control and customer success, were lagging dangerously and jeopardizing our growth.

To address these bottlenecks, my team was tapped to develop a suite of self-service analytics tools that would empower enterprise customers and internal users alike to answer their own questions, and work with our data more meaningfully.

In the end, our solution managed not only to streamline many aspects of our data production pipeline, but to improve the usability of our data for customers and drive a significant reduction in “Time to Insight.”


To focus our efforts, we began by conducting a task analysis with the data science team, aiming to identify the biggest areas of opportunity for a self-service data analytics tool. In particular, we focused on identifying the types of research requests that were the most time consuming, most common, and most feasible to productize or automate.

Through this exercise, we developed an Impact-Effort matrix and determined that a significant portion of research tasks (about 75%) could be readily productized in a self-service analytics tool.

While not all tasks were feasible to streamline, by enabling enterprise users and our own team to answer their own questions, we estimated we could reduce the Data Science team’s workload by about 50%, and drastically reduce the turnaround time for most queries.

From there, we translated each of these tasks into a tech spec, outlining the features and functionality that would be necessary to support each use case.

In the end, we identified two main capabilities that would enable us to address the most possible use cases upon launch, in a manner that would be accessible to technical and non-technical users alike:

In Reports, we envisioned a dashboard-type experience that would enable users to quickly get the “gist” of any cohort at a glance, by surfacing key variables such as demographics, diagnoses, medication use, or survey completion.

In Querying, we wanted to support users’ ability to target groups of patients based on selection criteria such as demographics, onboarding date, and other health metrics.

Together, these capabilities would enable both enterprise users and our own teams to target and characterize any group of patients. Moreover, this architecture would enable us to scale and version the product by adding new reports and filtering capabilities as new use cases emerged.

Lo-Fi Prototyping

With our core concept sketched out, our next step was to validate that it could work in practice — once it was in the hands of expert users, would these capabilities be as impactful as our initial feedback suggested?

To answer these questions with any sort of rigor, we knew that even the most convincing Figma prototype wouldn’t suffice. We needed a prototype that would enable users to work with real data, answer real questions, and do real work.

To test how this concept would perform “in the wild,” we developed a set of functional, lo-fi prototypes that could be used by our team in the course of day-to-day work.

Cohort Builder V1

Enables basic querying functionality, allowing users to target a specific patient population by progressively adding filter criteria.

  • Outputs a patient count ONLY
  • Real-time results, directly from the DWH
  • No support for AND/OR logic, nested queries, etc.
  • Implemented in RStudio
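Though the prototype itself lived in RStudio, the underlying querying model is simple enough to sketch in a few lines of Python. This is purely an illustration; the records, field names, and function below are hypothetical, not the prototype's actual code:

```python
# Illustrative sketch of the V1 Cohort Builder model: filter criteria are
# applied progressively (implicit AND), and only a patient COUNT is
# returned -- never row-level data. All names and records are hypothetical.

PATIENTS = [
    {"age": 64, "sex": "F", "diagnosis": "MS"},
    {"age": 58, "sex": "M", "diagnosis": "MS"},
    {"age": 41, "sex": "F", "diagnosis": "Lupus"},
]

def count_matching(patients, filters):
    """Apply each filter criterion in turn and return a count only."""
    result = patients
    for predicate in filters:
        result = [p for p in result if predicate(p)]
    return len(result)

count = count_matching(PATIENTS, [
    lambda p: p["diagnosis"] == "MS",  # criterion 1: condition
    lambda p: p["age"] >= 50,          # criterion 2: demographics
])
print(count)  # 2
```

Restricting the output to a count kept the prototype safe to use against real patient data while still answering the most common question: “how many patients meet these criteria?”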

Relevancy Dashboard

Displays a set of metrics describing a cohort, including general summary stats and condition-specific variables.

  • ONLY works with pre-defined sets of patients (i.e. an existing cohort)
  • No querying capabilities
  • Implemented in spreadsheets

While the fidelity of these prototypes was far below where we planned to end up, they enabled us to evaluate the tangible impact of self-service analytics tooling in the hands of real users.

User Testing

Despite their functional limitations, we soon rolled these prototypes out to a select group of “power users” on teams around the company. Through testing, trainings, and regular rounds of feedback, we were quickly able to identify opportunities to improve utility and usability in our next iteration.

Here are a few of our biggest takeaways:

1. Confidence is the biggest hurdle to adoption

The biggest challenge we faced in rolling out a self-service analytics tool was getting users to feel confident in their ability to use it. When the results of your work might be submitted to the FDA, or used to decide whether a multi-million-dollar deal is viable, it was no surprise that trust and confidence were make-or-break for users.

Simple things like knowing you’re looking at the right data, understanding what’s possible with the tool before you begin, and trusting both yourself and the tool to deliver a correct answer would be critical to a successful rollout.

2. Balancing accessibility vs. expert utility

We’d set out to design a tool that would be powerful enough to support the complex use cases of expert users, but accessible enough to be of value to less technical users. Through user testing, we began to recognize that there was tension between these two goals.

The very capabilities we needed to support many of the use cases we’d identified in discovery were inherently intimidating to less technical users. To avoid recreating the very bottleneck we set out to address, we’d need to be strategic in how we implemented these capabilities.

3. Synergy between reports + querying

While these capabilities had to be implemented separately in the prototyping phase, the dogfooding process reinforced that Reports and Querying together were much more powerful than the sum of their parts.

Separately, characterizing a set group of patients and calculating the number of patients who met certain criteria were each somewhat useful. But to enable true self-service analytics for our own team and enterprise users alike, we would need to consider how these capabilities could be fully integrated in the final product.

My role

  • Led a project team of 2 designers, 3 engineers
  • User Research, Prototyping, UI Design
  • Oversaw zero-to-one launch of new core product offering.


Impact

  • Increased MAUs and survey completion rate
  • Used by XX% of PicnicHealth customers within X months
  • Increased demand, driving $XXM in new ARR (a XX% increase in ARR)



Having validated the utility of self-service tooling in practice, we next turned to productizing our solution. With the feedback from user testing in hand, we began by exploring how these core capabilities — reports and querying — could be integrated into an end-to-end self-service experience.

As we planned for a phased rollout — first opening access to internal users, before launching to enterprise customers — we recognized an opportunity to expedite our build by leveraging the architecture of our existing internal tooling: a study management platform called FLEX.

From the earliest phases, our design process was accelerated by the opportunity to leverage existing layouts, patterns, and components, as reflected in the sketches above.


From the outset, our foremost goal was to enable enterprise and internal users alike to answer their own questions. To accomplish that, the most critical component of our solution was Reports: a dashboard-style experience that enables users to characterize any group of patients at a glance.

Through our discovery process, we’d already identified a set of key metrics and visualizations that would be immediately valuable to internal and enterprise users alike. For launch, we prioritized the two reports deemed highest impact:

The Relevancy Report (above) presents key cohort summary statistics, such as patient demographics, medication use, lab results, and other key health metrics.

The Surveys report (below) enables users to track PRO completion and performance over time. Since the launch of the Patient Surveys Platform, this dashboard had become a critical utility for enterprise users.

Recognizing that these were just two of many potential use cases, our intent was to create a framework that could be easily adapted to a variety of applications as new use cases emerged. By displaying a configurable set of key variables and visualizations, the reports framework was designed with extensibility in mind.
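The configuration idea behind this framework can be sketched as follows. This is a minimal illustration in Python; every module name, report name, and field here is an assumption for the sake of example, not the production implementation:

```python
from statistics import mean

# Hypothetical sketch of the Reports extensibility model: each metric is a
# reusable module, and a report type is just a named configuration of
# modules -- so new reports can ship without bespoke dashboard code.

METRIC_MODULES = {
    "patient_count": lambda cohort: len(cohort),
    "mean_age": lambda cohort: round(mean(p["age"] for p in cohort), 1),
    "pct_on_medication": lambda cohort: round(
        100 * sum(p["on_medication"] for p in cohort) / len(cohort)
    ),
}

REPORT_CONFIGS = {
    "relevancy": ["patient_count", "mean_age", "pct_on_medication"],
    "surveys": ["patient_count"],  # survey-completion modules would slot in here
}

def build_report(report_type, cohort):
    """Assemble a report by running each configured module on the cohort."""
    return {name: METRIC_MODULES[name](cohort)
            for name in REPORT_CONFIGS[report_type]}

cohort = [
    {"age": 64, "on_medication": True},
    {"age": 57, "on_medication": False},
]
print(build_report("relevancy", cohort))
# {'patient_count': 2, 'mean_age': 60.5, 'pct_on_medication': 50}
```

Because a report is just data (a list of module names), adding a new report type or a new metric is a configuration change rather than a redesign, which is the essence of the extensibility goal described above.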


Designed to be used in tandem with reports, Querying enables users to selectively target a group of patients to characterize, by applying a set of selection criteria.

By opening the “Filter” drawer in the dashboard interface, users can construct a query by selecting from a variety of filter cards, and see the results of their query reflected in the report.

As with reports, our intent was to build the querying experience with extensibility in mind, such that new modules could be added as use cases emerged. At launch, the querying experience supported 11 filter criteria, designed to address the most common tasks identified in discovery.

In the prototype below, we explored a more conventional query builder, which enabled more complex query logic and was preferred by some of our more expert users, but felt inaccessible to most of the users we interviewed.

Although we explored this and a variety of other approaches to the filtering experience, we found that a card-based UX was the most accessible for less technical users, while offering much of the same utility as a conventional query builder.

Recognizing that confidence was going to be a sizable barrier to adoption for many of our target users, we opted to launch with the card-based UX, while leaving the door open for more expert features in subsequent releases.
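The trade-off between filter cards and a full query builder can be illustrated with a small sketch: each card is a structured criterion, and cards compose into a single count query with implicit AND logic, no nesting required. The field names and SQL shape below are hypothetical, not our production schema:

```python
# Hypothetical sketch of the card-based query model: filter cards are
# structured (field, op, value) triples that combine with implicit AND --
# trading a conventional builder's full boolean logic for accessibility.

OPS = {"=": "= '{v}'", ">=": ">= {v}", "<=": "<= {v}"}

def cards_to_sql(cards, table="patients"):
    """Translate a list of filter cards into a single COUNT query."""
    clauses = [f"{c['field']} {OPS[c['op']].format(v=c['value'])}" for c in cards]
    where = " AND ".join(clauses) or "TRUE"  # no cards = whole cohort
    return f"SELECT COUNT(*) FROM {table} WHERE {where}"

sql = cards_to_sql([
    {"field": "diagnosis", "op": "=", "value": "MS"},
    {"field": "age", "op": ">=", "value": 50},
])
print(sql)
# SELECT COUNT(*) FROM patients WHERE diagnosis = 'MS' AND age >= 50
```

Because every card maps to one WHERE clause, users can never construct an invalid query, which directly addresses the confidence barrier; the cost is that OR logic and nested groups are off the table until a more expert mode ships.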


As SCOPE was going to be PicnicHealth’s first true enterprise-facing web app, there were a few other facets of the platform that needed to be freshly developed to ensure a truly self-service experience.

Beginning with the onboarding experience, we needed to ensure that customers had a safe and secure way to access their data, whether they had been a customer for 5 years, or had joined the platform only recently.

Cohort Navigation

For customers managing multiple studies with PicnicHealth, we needed to offer users a clean and intuitive way to navigate between projects.

Though we’d explored a range of approaches for file navigation — from tables to cards to more conventional file explorers — given that most of our users were managing fewer than 8 projects, we opted for a simple card design that would enable users to glimpse project metadata upfront, including date last updated, number of patients, and collaborators on the project.

File Sharing

Though SCOPE was our first foray into full self-service tooling, many of our enterprise users were accustomed to regular project updates, reports, and handoffs from our team, and we recognized the value of consolidating all of these resources into a single location.

While somewhat auxiliary to SCOPE’s core utilities, the file sharing capability served as an effective onramp: a point of entry for customers who were used to engaging with our data primarily through resources prepared by the analytics team.


Lastly, the renewed focus on our enterprise user experience revealed a troubling gap: for many of our customers, our existing documentation was inaccessible, difficult to use, and severely outdated.

We saw this rollout as an opportunity to enhance and relaunch our documentation, complete with new modules covering the utilities now available through SCOPE.

Launch + Impact

By Q3 2022, we had successfully deployed SCOPE to teams across PicnicHealth and launched to a handful of enterprise customers. Both internally and among customers, the impact was immediate and pronounced:

Team Productivity

Deploying self-service analytics has increased productivity for teams across PicnicHealth.

  • Reduced avg. turnaround time for research tasks by 60%
  • Decreased analytics backlog by 80% within 4 weeks

Customer Success

SCOPE has enabled enterprise users to work with research data more meaningfully: 

  • 15% increase in Customer Satisfaction Score (CSAT) for participating customers.
  • Reduced inbound Customer Support tickets by 40%

Streamlined Data Production

Increased data quality and reduced bottlenecks across our data production pipeline.

  • Caught 70% more errors through augmented QC
  • Reduced avg time to QC by 30%

Today, SCOPE is used not only by teams across PicnicHealth to facilitate critical aspects of our operation (e.g. data production, customer success, sales & marketing) but by some of the world’s leading Life Sciences companies to facilitate groundbreaking clinical research.


Though we discovered that the perceived utility of surveys was a big motivator, we were advised that the liabilities of communicating survey results back to patients, without the opportunity for a provider to interpret them, were too great to pursue this opportunity at the time. Instead, it became the basis for PicnicCare, which would be pursued later; it wasn’t feasible within the original timeline.

Given the above, we decided to focus on:

  • Reducing friction / perceived burden of task
  • Compensation experience
  • Altruism / sense of contribution (highlighting community insights, key contributions, studies contributed to, research updates, etc.)