The Summary
Users - give us an easy-to-use place where we can monitor our tactic performance, make adjustments, and understand the ROI of our Terminus dollars.
The Business - let's upgrade the oldest page in our app so it will continue to function as we scale.
Results - Led to an average 42% reduction in time on page (10.43 min -> 6.12 min) as users more easily discovered insights on the streamlined page.
The Problem
ROI is one of the best ways to demonstrate the value your product brings to a company, and one of the most common ways to do that is with an analytics page. The Terminus Tactic Details page is one of the oldest pages on our site, designed to tell that visual ROI story, and its time had come. Our users frequently complained about slow load times and poor defaults that forced lots of repetitive clicks. From an engineering perspective, the page was getting more expensive as we scaled because of its tight coupling to the backing database. The rest of the product was adding new metrics (conversions) that the old Rails backend couldn't know about, so the product team wanted to rebuild the page in a way that could be extended in the future. As I dove into the project, I worked to understand what users really wanted from this page and to bring that in line with the realities of the engineering and product costs.
The Research
Right away this project was off to a strange start. Both the UX Lead and Product Manager working on the project left the company, so when I picked it up it was already mid-flight technically. My first priority was to immerse myself in the research that had already been done so I could clearly articulate the customer's position. First I looked at our Pendo analytics for the page, physically marking up a printout with click counts to understand quantitatively what our users were doing. I also listened to all of the customer research calls and built an affinity diagram of the findings on the wall. As you can see, it turned into quite the art project, with the hidden benefit of being a fantastic visual aid for conversations that happened at my desk with engineering and product. Being able to walk over to the wall and point to a sticky note or a statistic overlaid on the product helped remind everyone who we were doing all this work for.
It’s important to understand the high stakes for this project. Looking at our page view analytics in Pendo, Tactic Details was far and away the most visited page on the site. It was more important than ever to truly understand how customers were using this page, and whether they had any hidden workarounds that needed to be accounted for as well.
The Lake
The biggest challenge for this page was a major technical pivot required by engineering. Previously, all of our reporting was backed by a single relational database (AWS RDS). That database segmented the data by marketer and by tactic, and that was what was sent to the user when they viewed the page. Any time an impression was served by our partners, we would write the change to the database, where it would be available the next time a user came to the page. It was an extremely powerful system, but it was quickly becoming untenable as we scaled: Terminus handles billions of cookies and millions of impressions, and the database was simply running out of space. Instead, engineering chose to move to an AWS data lake. When an impression was served, it would be written to the data lake, and engineering would break down a page like Tactic Details into a series of queries that could be run against it. The two primary benefits are no storage cap and much greater flexibility when combining disparate types of data. However, this had huge implications for the customer experience.
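The contrast between the two write paths can be sketched in a few lines. Everything here is illustrative (the real system used AWS services and per-marketer segmentation not shown); it only captures the shape of the change: in-place aggregate updates versus an append-only event log queried in batch.

```python
# Old model: each served impression updates an aggregate row in the
# relational database, so page loads are fast reads of precomputed totals.
rds_table = {}  # tactic_id -> running impression count

def record_impression_rds(tactic_id):
    rds_table[tactic_id] = rds_table.get(tactic_id, 0) + 1  # in-place update

# New model: each impression is appended to the data lake as an immutable
# event; totals are recomputed later by slow batch queries per page section.
lake = []  # append-only event log, no storage cap

def record_impression_lake(tactic_id, served_at):
    lake.append({"tactic_id": tactic_id, "served_at": served_at})

def impressions_for_tactic(tactic_id):
    # Stand-in for one of the queries engineering ran against the lake.
    return sum(1 for event in lake if event["tactic_id"] == tactic_id)
```

The trade is clear even at this scale: the lake never runs out of room and can join arbitrary event types, but every read is a scan rather than a lookup.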
By design, the Data Lake is not a fast data source. This means that queries can take anywhere from 20 seconds to 10 minutes. Of course a user is not going to wait around for this, so that meant that I had to design and test new paradigms so that these constraints would be thoughtfully abstracted away from customers.
As an example, consider the initial page load. Engineering's initial approach when testing the backend was to have the frontend request the query report on load. If the report was older than six hours, the backend would start generating a new one; meanwhile the frontend displayed the most recent available report. While this let users interact with the page while the long-running query executed, not a single customer understood that the page was showing old data when they navigated to it. This was made even worse by the fact that the data could be days or weeks old. To address this, I designed a notification bar to convey system status, communicating when the data was ready to be refreshed. If the data was too old, I chose instead to use stencils to communicate that the data was being calculated. I worked with engineering to put instrumentation in place to detect when these scenarios were encountered, so we would know how often they occurred and could start optimizing the system to generate reports just before customers needed them.
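The staleness rules above reduce to a small state function. This is only a sketch: the six-hour refresh threshold comes from the project, but the "too old" cutoff and every name here are illustrative assumptions, not the shipped implementation.

```python
from datetime import datetime, timedelta
from enum import Enum

REFRESH_AGE = timedelta(hours=6)  # older than this: kick off a new lake query
STENCIL_AGE = timedelta(days=2)   # assumed cutoff: too stale to show at all

class PageState(Enum):
    FRESH = "show report"
    REFRESHING = "show report + notification bar while new data is computed"
    CALCULATING = "show stencils until the query finishes"

def resolve_page_state(report_completed_at, now):
    """Decide what the frontend renders while a long-running query executes."""
    if report_completed_at is None:
        return PageState.CALCULATING   # no report has ever finished
    age = now - report_completed_at
    if age <= REFRESH_AGE:
        return PageState.FRESH         # recent enough to trust as-is
    if age <= STENCIL_AGE:
        return PageState.REFRESHING    # usable, but flag that it is stale
    return PageState.CALCULATING       # too old to show at all
```

The instrumentation mentioned above would then count how often the non-fresh states were returned, giving engineering the signal for when to pre-warm reports.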
The Design
The initial research led to a number of actionable insights. I created an enumerated list of required features and workflows the new page needed to support, which served as our guide as we moved from low-fidelity whiteboard mocks to high-fidelity Sketch and Axure mocks. It also helped us understand what could be removed. One example from the original page is a visual that tried to show how accounts moved through the matching process. We consistently heard that customers did not find value in it, and that its numbers didn't always match up with the reporting, so we cut it from the design.
We also found that customers primarily used this page for reporting and only occasionally needed to manage tactic settings. On the old page, the settings took up almost half the page at the top, and most of the time customers were simply scrolling past them to get to what they cared about. I iterated on a couple of different designs to solve this problem and chose a slideout drawer to hold the settings. The drawer could be opened on demand or left open, depending on what the customer wanted. I used Axure to prototype the interaction in high fidelity while keeping the content in low fidelity. You can test the options yourself here. After more testing with customers, the drawer emerged as the best solution to the constraints.
As usual with a project this size, we made hundreds of design decisions and countless iterations with internal and external stakeholders throughout. In fact, when we launched, we counted over 35 pieces of individual feedback that were addressed by the design improvements: the graph is highly interactive, the date range filters the whole page, the table surfaces the additional data points customers were looking for along with a graph per account, and more.
The Reflection
While the release made a big splash in the market and was positively received overall, it also had its fair share of negative feedback afterward. That feedback is often the most valuable, but it is important to be intentional about which feedback gets implemented. For example, I got internal feedback that the drawer was hard to find, but after digging into the report we found that only two customers out of our entire base had reported it. To confirm, we went into our analytics and saw that over 90% of our customers had actually clicked the button to open and close the drawer. That said, we also used this opportunity to explore how we message new features to customers in app, and made plans to better cover less-discoverable features in guides and training in the future.
We started with the goal of reducing complexity for our customers, and from a design perspective we definitely succeeded. However, much of the post-launch work has stemmed from confusion around how and when data will be updated due to the data lake. In future projects, I would strive to get a better sense of what these changes would look like with real data and test those scenarios against my designs more rigorously.