Expenses Back Office

  • Created a new interface to get high-volume finance workers out of the inbox
  • Worked with AI developers, product managers, and engineers to craft a feasible MVP
  • A successful release that is at capacity with early adopters and in the middle of a second round of design

This work happened during a time when several managers came and went in rapid succession, before and after COVID. During this project I was alternately a design lead, a project manager, and a team lead. Our highly collaborative team included a researcher, several product managers, and a rotating cast of junior designers.

Expense partners are tasked with saving companies money by reviewing hundreds of expense reports and preventing fraud, abuse, and duplicate expenses. They are high volume users who spend most of their day in the software.

Diagramming the current experience.

We were tasked with finding a feasible way to help expense partners better manage their work, one that would layer in ML-based insights to determine the risk that a report needed attention. We were asked to determine whether we could upgrade the existing Workday inbox (which they currently use) or whether we should start from scratch with a new interface. The inbox is a one-size-fits-all interface built with casual users in mind. We began by interviewing Workday’s internal expenses team and conducting a heuristic audit.

An image of an annotated screenshot describing usability issues.

One of several screens from our heuristic audit.

We also began meeting with the ML team to understand their process and what the model would be able to output. We held several in-person workshops right before COVID hit, sketching and diagramming our outputs. There was also a round of lightweight competitive research to look at other patterns.

A concept drawing of how ML could integrate into the interface.

A diagram explaining when we scan for anomalies.

Our research led us to hypothesize that the inbox was inadequate. It was not built to sort or structure the quantities of data a partner had to get through every day. And there were ancillary features, such as team work-queue management and documenting decisions for auditors, that could not be added to the inbox anytime soon.

With that said, our stakeholders still needed convincing. So we prototyped different options, including an inbox-based concept and a table/feed view, and tested them.

Our inbox-based concept.

Our table-based concept.

User testing with customers illuminated a few things. First, the inbox was a no-go: a feed based on a standard grid allowed for the sorting and dimensionality expense partners needed.

It also helped identify which ML patterns users preferred. The consensus was that simple confidence levels (high, medium, low, for example) were better than more complicated displays. The caveat was that users needed to configure their own thresholds as a preference, since high risk means different things to different people.
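The threshold pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the actual product logic: the function name, score range, and default cutoffs are all assumptions made for the example.

```python
def risk_label(score, high=0.8, medium=0.5):
    """Map a model risk score (assumed 0-1) to a simple label,
    using thresholds each expense partner configures themselves."""
    if score >= high:
        return "high"
    if score >= medium:
        return "medium"
    return "low"

# The same model score reads differently for partners with
# different threshold preferences.
cautious = risk_label(0.6, high=0.55, medium=0.3)  # "high"
relaxed = risk_label(0.6, high=0.9, medium=0.7)    # "low"
```

The point of the design choice is visible here: the model's output stays the same, while the user-configured thresholds decide what counts as "high risk" for that particular team.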

We ended up working through a series of MVPs, building our way up in fidelity and feasibility. Early prototypes included all the features our users had asked for.

An image of our table view with commenting built in.

As we worked through our process, we began talking to the teams that owned functionality we’d need to incorporate. We were repeatedly told we couldn’t get time on their roadmaps. Certain features we wanted to use needed security upgrades to be usable in financial settings, but those upgrades weren’t a priority.

An image of our table view with commenting functionality, which we couldn’t get on the roadmap.

A mockup of the maintain risk levels page.

An advanced prototype of the maintain risk levels page.

A mockup of an expense report.

The final design for the expense report.

So what happened? Well, after successfully proving in tests with customers that the interface would be usable, our team found itself working on a series of incremental updates. Development was years away from being able to build what we had designed. Morale was flagging.

I helped align the stakeholders on the idea that we should rotate off the project and return when it was farther along. Somewhere along the line the project had lost sight of its success criteria, so I resurrected our original design brief, and we discussed the fact that we had met its goals. One sticking point was that our team was small and UX resources were hard to secure; they didn’t want to lose us for fear they wouldn’t get us back.

Two years later we returned to the project. An MVP had been successfully deployed and, though hampered by legacy frameworks, had maxed out its early-adopter capacity. Many of our original hypotheses have proven correct.


  • Successful MVP designed and deployed
  • A laundry list of future features for roadmap improvements
  • Led design through a rotating cast of managers and the onset of a global pandemic