Journey Explorer

STRATEGY,  UX/UI DESIGN,  RESEARCH,  PROTOTYPING
 
JE-Poster-New.png

Overview

The Journey Explorer is a qualitative data dashboard that aggregates and quantifies customer feedback from multiple databases and transforms it into easily segmented data that any team can use to analyze their products.

The initial concept, leveraging our custom taxonomy and customer journey map, was the winning project at the 2019 Autodesk DHX Hackathon, and I went on to lead the project as both Designer and a second Product Owner, working closely with our cross-disciplinary team.

01    About This Project

Why is qualitative data sometimes perceived as harder to analyze than quantitative data? It often comes in the form of anecdotal comments or cherry-picked verbatims without a large sample set behind them, so it can feel impossible to gain ‘real’ insights or generalize from these customer comments.

The concept of Journey Explorer rests on the idea that we can leverage models, frameworks, and platforms already being developed at Autodesk to aggregate and segment unstructured data into an organized format. The new Journey Explorer is a qualitative data hub unifying seven sources of customer feedback and is accessible to all Autodesk employees in a user-friendly UI.

After implementation, access to internal qualitative data was democratized, increasing data access for over 10,000 employees. By removing the need to buy extra licenses for services like Salesforce, Qualtrics, and PowerBI, costs were reduced and employees were encouraged to use qualitative data in addition to analytics. The self-service functionality made it easy for individual employees to use, and gave our data analysts time to work on more complex projects.

 

I cannot demo the actual tool due to customer privacy concerns. Everything you will see is comps and dummy data, not the real data hub. Mathematical accuracy is not guaranteed.

 

Problem Statement
Qualitative data is some of the most insightful and useful data we gather, but is rarely used due to the volume and randomness of comments we receive across multiple data sources. How can we compile and organize this data so it is usable and searchable?


Project Details
Researched, designed, and led the project to create a qualitative data hub incorporating seven different data sources. Utilized Autodesk’s customer journey map, custom taxonomy, and sentiment API to create an intelligently organized, searchable dashboard comprising all customer comments.


Project Title: Digitizing the Customer Journey
Time Frame: February 2019 - May 2020
Skills Used: Research, interviews, strategy, wireframes, interaction design, testing, workshop moderation, project planning, JIRA stories

 

My Role

 

Project Lead
As the person who conceptualized and advocated for the project, I had the opportunity to drive its direction and strategy, coordinating work across 12 team members.

UX Research and Testing
Conducted research and testing for two different iterations of Journey Explorer. I led workshops and interviews with over 50 Autodesk employees across different verticals.

 

Lead Designer
Designed a scalable data hub with methods for organizing and sorting qualitative data in meaningful ways, as both individual sources and in aggregate form, within a framework that is extendable to any number of data sources.

Development UAT
Oversaw continuity between the design and development phases, using Zeplin, JIRA, and a weekly standup to make sure the core vision was translated into the final product. Conducted final UAT, filed bugs, and had final approval over all development work.

 

02    Background

 

1. Customer Journey Map

I worked on the creation of DPE’s customer journey maps. In addition to mapping customer paths by persona, we also identified customer pain points along the journey through a manual data classification process.

2. Digitizing the Journey

To streamline the process, our new goal was to create an automated dashboard to sort case data into the top pain points and issues. We created a taxonomic classification to identify and categorize support case data.

3. Winning the Hackathon

I had a theory that the taxonomy we were using could be extended to other data sources. I used a hackathon to run other data sets through the taxonomy, demonstrating the concept. After winning, the project was expanded to include more sources.

 
JE-JourneyMap-Shadow.png

Autodesk DPE Customer Journey Map 2017

Where We Started

Phase I: Digitizing the Journey Map
When I made my proposal, a dashboard that used Autodesk’s taxonomy to tag and organize data was already being developed. This version of the digitized journey map (dubbed Journey Explorer) only included volumes for support case data.

JEPh1.png
 
JE Screens

Where We Ended

Phase II: Journey Explorer 2.0
While the first iteration only included support case data, Phase II combines three source types split into seven source domains. Data can be broken out by Phase or Subphase, or viewed across time.

Users can filter on any data fields shared by all sources, such as date, source, taxonomy tagging, sentiment tagging, and product.

All imported data is both filterable and searchable, giving users several different options for narrowing results. Filtered data can be downloaded to Excel for users to further manipulate or chart the data.

 

03    Exploratory Research

 

Discovery & Workshops

I conducted individual interviews and workshops with over 50 employees as part of the discovery phase. I wanted to create a data tool that was useful cross-company while still adhering to DPE’s business goals. Research activities focused on user needs, pain points, and feature ideation. The work also required cataloging data sources and database ownership, and correlating attributes across sources.

 
 
Brainstorm Digitizing Journey Map.png
Digitizing Journey Map2.png
 

Problems to solve

Issue #1
Limited Data Access
Qualitative data was scattered across platforms with different levels of access. These platforms included Salesforce, Qualtrics, PowerBI, and Lithium among others.
Issue #2
Knowing the Source Exists
Through my interviews, I realized that access was not the only issue: with no centralized organization, you first had to know the data existed at all.
Issue #3
Quantifying the Qualitative
Quotes are seen as anecdotal, not illustrating the breadth of an issue. We need to see the number of customers talking about a specific topic in order to gauge importance and relevance.

Issue #4
No Standardization Across Sources
Each of our potential data sources contains unique fields and field names. We need to be able to aggregate data while still displaying any unique attributes.

Issue #5
Identifying Customer Issues
To improve customer satisfaction, we need to identify their problems. Good and bad comments are all mixed together, so we need a way to sort positive and negative content.
Issue #6
Displaying Useful Data
It's not enough to just have all qualitative data in one location. That data needs to be meaningful. Data must be easy to absorb, as well as organized, searchable, and sortable.

04    Data Strategy

 

Create a Universal Data Set

Shared Data Fields
While our taxonomy can be applied to any qualitative data, we needed a way to present disparate sources in a cohesive manner. We used data fields shared across sources, and created additional common fields algorithmically through taxonomy and sentiment tagging.

Unique Data Fields
I didn’t want to hide a data field containing important information simply because it wasn’t universal across sources. Therefore, we retained all relevant information.
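As a rough illustration of the approach (not the production pipeline), a unified record might keep the shared fields at the top level and tuck anything source-specific into an extra bucket. All field names below are hypothetical.

```python
# Minimal sketch: normalize records from different sources into one shared
# schema while keeping source-specific fields. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class UnifiedComment:
    # Fields shared by every source (plus algorithmically added tags)
    source: str
    date: str
    product: str
    text: str
    taxonomy_tag: str
    sentiment: str
    # Anything unique to one source is preserved rather than dropped
    extra: dict[str, Any] = field(default_factory=dict)

def normalize(raw: dict[str, Any], source: str) -> UnifiedComment:
    shared_keys = {"date", "product", "text", "taxonomy_tag", "sentiment"}
    return UnifiedComment(
        source=source,
        date=raw.get("date", ""),
        product=raw.get("product", ""),
        text=raw.get("text", ""),
        taxonomy_tag=raw.get("taxonomy_tag", "unclassified"),
        sentiment=raw.get("sentiment", "unclassified"),
        extra={k: v for k, v in raw.items() if k not in shared_keys},
    )
```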

 

Sentiment Tagging

One problem with adding more data sources is that, unlike support cases, where something is by definition wrong, sources like surveys and forums contain both positive and negative comments. This is less than ideal when you want to find pain points.

Our solution was to introduce a sentiment tagging API. To test its accuracy, we ran survey comments already scored with our Customer Engagement Score (CES) through the sentiment API. Our data analysis showed a fairly consistent correlation between the API and CES.
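A minimal sketch of that kind of check, with made-up numbers (the real analysis used our survey data and the API’s actual output):

```python
# Compare sentiment-API scores against Customer Engagement Scores (CES)
# for the same comments using a simple Pearson correlation. Dummy data only.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

api_scores = [0.9, 0.2, -0.7, 0.4, -0.3]   # hypothetical sentiment API output, -1..1
ces_scores = [9, 4, 2, 7, 3]               # hypothetical CES ratings for the same comments

print(f"correlation: {pearson(api_scores, ces_scores):.2f}")
```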

05    Prototypes

 

Intern Prototype

Our summer intern created the first iteration of Journey Explorer as her summer project. She followed her own research and design process, with consistent oversight and feedback from me. I created user stories and a feature list for her to follow, and we conducted multiple rounds of critique to arrive at the first iteration of Journey Explorer 2.0.

Internprototype.png
 

Second Iteration

Axure was chosen as the prototyping tool for validation and testing because we needed to mock up complex user interactions, where a single selection affected multiple elements on the dashboard. While not great for visual design, Axure gives extra control over variables for conditional interactions.

Interactive Prototype: https://zsruou.axshare.com

06    Final Concept

 

Three Views

To accommodate the different ways the data can be viewed and interpreted, users can look at aggregated comments by volume, sentiment, or across time. Each data view is organized into a separate tab. The trends tab allows the most customization with 9 different data plots.

The other sections of the dashboard— Filters, Comments, and Taxonomy— remain constant across all tabs. Switching tabs allows the user to see their data displayed by phase or trend without having any filters reset.

 
 
ezgif-6-658909ff4529.gif
 
 
 
 
 

Universal Filters

By using taxonomy and sentiment tagging, we were able to create a universal data set where all filters applied to all data sources. This allows users to alter any filter without creating errors or throwing “no results” pages. Users can filter results through the left-rail filters, taxonomy filters, or by clicking down in the chart.

The combination of different filtering mechanisms allows users to sort and filter data in the way that best works for their research group or product team.
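Because every record shares the same fields (see the unified-record sketch above), a single filter function can serve the left rail, the taxonomy panel, and chart click-downs without producing field-mismatch errors. The sketch below is illustrative only, with hypothetical filter keys.

```python
# Apply every active filter to the normalized records; an empty selection
# leaves that dimension unfiltered, so no combination throws "no results"
# due to a missing field.
def apply_filters(records, filters):
    def matches(rec):
        return all(
            getattr(rec, key) in allowed
            for key, allowed in filters.items()
            if allowed
        )
    return [r for r in records if matches(r)]

# Example: left-rail source + sentiment filters combined with a taxonomy pick
active = {
    "source": {"forums", "surveys"},
    "sentiment": {"negative"},
    "taxonomy_tag": {"licensing"},
}
# filtered = apply_filters(all_comments, active)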

 

Volume Tab

The volume tab is a stacked bar chart that displays data from forums, surveys, and support cases. Data is displayed by source and volume, organized by Autodesk’s Customer Journey phases or subphases.

Users can toggle between phase and subphase views, as well as click down in the chart to see the subphases under a specific phase, and click down again if they want to isolate a single subphase. They can navigate back up using the back button or breadcrumbs.

 
 
 
 
 
 
 

Sentiment Tab

This tab aggregates all selected data sources and shows the sentiment across the different phases and subphases of the Customer Journey. Functionality is identical to the Volume Tab.

Each bar adds up to 100%, showing users the percentage of comments that are positive, neutral, mixed, negative, or unclassified.

Below the chart, individual comments are color coded to show their sentiment. Users can filter down to a specific sentiment using the sentiment filter in the left-rail.
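A sketch of the per-bar math: each journey phase’s comments are split across the five sentiment buckets and normalized to 100%. The counts below are dummy values.

```python
from collections import Counter

SENTIMENTS = ["positive", "neutral", "mixed", "negative", "unclassified"]

def sentiment_breakdown(labels: list[str]) -> dict[str, float]:
    counts = Counter(labels)
    total = len(labels) or 1
    return {s: round(100 * counts.get(s, 0) / total, 1) for s in SENTIMENTS}

# e.g. 10 comments in one journey phase
phase_comments = ["negative"] * 4 + ["positive"] * 3 + ["neutral"] * 2 + ["mixed"]
print(sentiment_breakdown(phase_comments))
# {'positive': 30.0, 'neutral': 20.0, 'mixed': 10.0, 'negative': 40.0, 'unclassified': 0.0}
```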

 

Trends Tab

The trends tab displays the same volume and sentiment data over time instead of by journey phase. This tab is unique in that it allows users to plot individual data sources or products for comparative assessments over time.

Two drop downs at the top control the chart contents, and the filters on the right control what is displayed. For example, the second dropdown set to “by Product” combined with the product filter would control what products are plotted.

The filters selected for Date view and Date range control the timeline. Selecting multiple years or quarters shows each one along the x-axis. Selecting a single year shows the year month by month. Selecting a single quarter shows that quarter week by week. Users can click down in the chart to go from a multi-period to a single-period view.
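The timeline rule above can be summarized in a small sketch; the function and labels below are hypothetical, not the real dashboard code.

```python
# Map the selected date range to an x-axis granularity.
def timeline_granularity(years: set[int], quarters: set[str]) -> str:
    if len(quarters) == 1:
        return "weekly"      # a single quarter is plotted week by week
    if len(years) == 1 and not quarters:
        return "monthly"     # a single year is plotted month by month
    return "per-period"      # multiple years/quarters each get one x-axis point

print(timeline_granularity({2019}, set()))          # monthly
print(timeline_granularity({2019}, {"2019-Q2"}))    # weekly
print(timeline_granularity({2019, 2020}, set()))    # per-period
```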

 
 
 
 
 
 
 

Search & Comments

Search is a keyword search across up to four different data fields. Clicking the gear icon lets users customize search settings and choose an AND/OR boolean search.

Each search term appears in its own pill, indicating that multiple search terms are possible and making it easier to add and remove individual words.

When a search is executed, the search terms are highlighted in the comments section, which can be expanded to fill the full screen. Longer comments and metadata are hidden but viewable by clicking “show more” underneath a comment.
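A sketch of how the pill terms and the AND/OR setting combine; the field names are hypothetical and this is not the production search code.

```python
# Keyword search across up to four text fields, with AND/OR term combination.
SEARCHABLE_FIELDS = ["text", "subject", "product", "taxonomy_tag"]

def search(records, terms: list[str], mode: str = "AND"):
    combine = all if mode == "AND" else any
    def hit(rec, term):
        return any(term.lower() in str(getattr(rec, f, "")).lower()
                   for f in SEARCHABLE_FIELDS)
    return [r for r in records if combine(hit(r, t) for t in terms)]

# results = search(all_comments, ["license", "subscription"], mode="OR")
```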

 

Improving Algorithms

We recognize that no data algorithm is perfect, especially early on. We need users to help improve our sorting and tagging: if data is wrong or missing, they can flag specific data fields and suggest a correction in the comments section.
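A sketch of what such a correction might capture; the record shape and field names are hypothetical, and in the real tool this is submitted through the comments section UI.

```python
from dataclasses import dataclass

@dataclass
class FieldFlag:
    comment_id: str        # which comment the flag applies to
    field_name: str        # e.g. "taxonomy_tag" or "sentiment"
    current_value: str     # what the algorithm assigned
    suggested_value: str   # the user's proposed correction
    note: str = ""         # optional explanation from the user

flag = FieldFlag("c-1042", "sentiment", "positive", "negative",
                 note="Sarcastic comment, misclassified by the API.")
```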

 
 
ezgif-3-44ce208d25a3.gif
 

Main Screens

Interaction Patterns

 

07    Conclusion

 

Proposing, advocating for, and building a dashboard used by hundreds of Autodesk employees every month was a rewarding experience, but one with many challenges. The Journey Explorer is an ongoing project, and there are many features and tweaks I would still like to make. However, the need to work with an outside development firm, combined with budget constraints, limited our scope.

The next focus for our interdisciplinary team is a big push on adoption, demoing the data tool to teams across Autodesk.

As a UX designer and researcher, I truly enjoy working with data and being a proponent of qualitative data, which I think is severely under-utilized in the tech industry. Autodesk’s recent push to be a data-forward company gave me the opportunity to conceptualize and build a really unique tool that takes full advantage of the updates we are making to our data infrastructure.