Alana Vaccaro

Duration: April 2022-February 2023
Role: Product Design & User Research
Primary User: Carriers
Timeline: Shipped January 2023

Instant Bidding

A new feature in the Convoy app that helps carriers save time planning their schedules and maximize their earning potential

Overview

At Convoy, I had the pleasure of working in the carrier marketplace, on projects that directly impacted carriers and how they interacted with Convoy on a day-to-day basis. I worked on many different aspects of the app, but primarily on how carriers found, bid, & booked loads.

In January of 2022, our UX Researcher, Janell Rothenberg, asked me to join a project called Next Gen auctions.

In previous research, Janell had identified critical limitations in Convoy's existing bidding and booking experience. Carriers' number one complaint about bidding was that Convoy didn't seem to value their time: they spent too long waiting for a response to their bids. For a carrier, waiting for a response means they can't plan effectively, and thus everything comes down to the last minute. With the newly proposed auction, carriers would receive an immediate response after placing a bid: either an instant acceptance, an instant rejection, or a counteroffer. The concept was simple, but I later learned that giving an instant response was no easy feat.

What are the carrier problems this project aims to solve?

1. Sentiment towards our bidding and booking process affects whether carriers want to work with Convoy, and negative or mixed sentiment towards this process is common among our largest (and least engaged) segments.

2. Most carriers who use Convoy have a lower preference for finding work with us than with our competitors. They are also far less likely to see our bidding and booking process as better than our competitors'.

Project Goal

Reduce the amount of time carriers have to wait by providing an immediate response after they place a bid.

My Partners: UX research, data science, product management, engineering, marketing, branding, & support from my fellow design team

Biggest Learning

How to use and weigh qualitative and quantitative data to inform design decisions.

When making design decisions, it's crucial to weigh the strengths and weaknesses of both qualitative and quantitative data against the specific design problem at hand. It's also important to design for scale: start with a basic experience grounded in initial user research and assumptions drawn from data, then use that base to develop a comprehensive research plan for collecting qualitative and quantitative feedback through experimentation.

Research

For the initial round of research, I partnered with our UX researcher. Due to time constraints, the research had to be done quickly. Looking back, I wonder whether, if we had been afforded the luxury of time, we could have avoided some of the initial problems in the first experiment.

Research with carriers was always the biggest hurdle in all of our projects. Carriers were notorious for being hard to recruit for studies: we would get low response rates, no-shows, last-minute rescheduling, or technical difficulties with participants who weren't as tech savvy or were joining from their trucks.

For this project, we conducted research with 11 participants to ensure we had a carrier representing each of our segments (activity in the Convoy app and carrier size/role). We ran moderated research sessions for multiple reasons. First, it was easier to get insights from carriers if we could speak with them rather than sending an unmoderated test. Second, there was a big disconnect between the organization and the carrier, and qualitative data allowed us to bring empathy to the research. We were able to hear about carriers' lives and their struggles, and we could share session recordings with other members of the project so that they could develop a deeper connection to the work and the meaning behind it.

Qualitative research is exploratory in nature, making it well-suited for the early stages of a project when the goal is to gain insights, identify problems, and explore new concepts. It helps researchers form a solid foundation of understanding before moving into more structured, hypothesis-driven approaches. Although most of our insights held true, in hindsight I would have liked more time to conduct initial research and concept testing with a larger group of carriers. More time for research could have helped us avoid some of the issues in the first release, like adding the timer and holding bids.

Goals

Finalize the designs for bidding with instant response by:

  • Testing our assumptions: Do users think slow response is a problem? Why or why not?
  • Testing our UX goals:
    • Do users think the new design makes it easier to use Convoy to plan their schedule than the current design?
    • Are the rules and expectations of the new design easier to understand than the current design?
  • Testing for preference: Which of the two designs do users prefer?
  • Identifying iteration opportunities: What, if anything, is confusing, frustrating, worrisome, or a source of questions in the new design?

The end goal was to evaluate whether this chain was plausible and likely from a UX perspective: faster (immediate) feedback on bidding → each carrier can evaluate more shipments → the likelihood of finding the next shipment goes up → carriers come to Convoy more often.

Method

We conducted interviews and moderated concept testing via Zoom with 11 users according to two types of sampling criteria: 1) carrier segments based on intents and 2) carrier sizes (driver/dispatcher, small carrier, large carrier).

First Explorations

I mocked up some simple concepts for our concept testing. The goal was to gather carriers' initial reactions to the idea of receiving an instant response. The initial concepts conveyed an instant acceptance, an instant rejection, and an instant counteroffer.

Existing Auction Experience

Initial Concepts For Testing

Interviews with 11 Carriers

We conducted 11 moderated, 45-60 minute Zoom sessions with carriers across our segments. First, we interviewed these carriers about booking loads in general and with Convoy. Second, we shared our design concept to evaluate first impressions, expectations, and questions/concerns.

What did we discover about how carriers book loads with Convoy and in general?

Insights:

  • All of the carriers were pleased by the concept of an “instant” decision from Convoy
  • Most carriers were delighted with the concept of “counteroffers” coming to the Convoy App
  • Most carriers expected a “counteroffer” loop not to end before they could counter the counter, and were not satisfied with only being able to accept or reject the offer
  • Most carriers wanted to know what would happen to their bid if they rejected the counteroffer(s) and preferred their bids still be considered even if they couldn’t be instantly accepted
  • Half of the carriers wouldn’t bid again on a load after (1) rejecting a counter or (2) having their bid rejected, and would be more likely to go elsewhere. Bidding again felt like starting over and undervalued the effort of bidding in the first place

With these insights, we partnered with science to look at the implications of adding multiple rounds of counteroffers, holding a bid, how long we should hold a bid/counteroffer for, and verbiage around book now.

Design Concepts

Carriers like working with Convoy because it takes out the middle man, the broker. While Convoy is a digital brokerage, the automated process makes carriers feel like they don’t have to hassle with a broker to find loads. The irony of this is that carriers also like the ability to speak with someone and they want to negotiate like they would with a broker. As a designer, it was my job to simulate a broker interaction but without the negative experience.

What We Did

Topic #1: Should we do more than one round of counteroffers?

We did not allow the carrier to counter our counteroffer; instead, we encouraged them to come back and bid again if they didn’t want to accept it. This was one of the critical decisions we had to make. Carriers wanted a back-and-forth negotiation, but from a science perspective this wasn’t feasible. Science believed we should make a single counteroffer, because repeated counteroffers with the same $ value would do more harm than good.

If we can’t meet carriers in the middle with the exact $ value, the exchange won’t feel like a negotiation anyway. By not simulating a back-and-forth “negotiation”, we reduce the risk of leaving carriers frustrated.
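To make the single-round decision concrete, here is a minimal sketch of the three possible instant responses. The names and thresholds (acceptBelow, rejectAbove, counterRate) are hypothetical illustrations, not Convoy's actual pricing logic.

```typescript
// Hypothetical sketch of a single-round instant bid response; thresholds are
// illustrative, not Convoy's actual pricing logic.
type BidResponse =
  | { kind: "accepted"; rate: number }
  | { kind: "rejected" }
  | { kind: "counteroffer"; rate: number };

interface PricingSignal {
  acceptBelow: number; // bids at or below this rate are instantly accepted
  rejectAbove: number; // bids above this rate are instantly rejected
  counterRate: number; // the single counteroffer we're willing to extend
}

function respondToBid(bidRate: number, pricing: PricingSignal): BidResponse {
  if (bidRate <= pricing.acceptBelow) {
    return { kind: "accepted", rate: bidRate };
  }
  if (bidRate > pricing.rejectAbove) {
    return { kind: "rejected" };
  }
  // One counteroffer only: if the carrier declines, they bid again rather than counter back.
  return { kind: "counteroffer", rate: pricing.counterRate };
}
```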

Design - Counteroffer

Topic #2: Should we allow carriers to think through and respond to our counteroffer after some time vs. immediately?

One of the big questions was whether we should give carriers the ability to think through the counteroffer before accepting, rather than requiring an immediate decision, and if so, what that would look like.

The pros:

  • Allows them to ‘work out logistics’ before they can commit
  • Allows other parallel tasks on the app
  • Creates urgency and gives them a feeling of full control

The cons:

  • If we display a timer, does that make the carrier think we are locking the bid in for them?
  • If we give a window but don’t display a timer, how do we communicate that?
  • It gives carriers a small chance to price compare

We allowed a 15-minute acceptance window for logistics. We did not display a timer; instead, we adjusted the book-now rate to the counteroffer price and showed “expired” under the counteroffer info item.
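As a rough illustration of that decision, here is a minimal sketch of a counteroffer window with no visible timer, assuming the client simply checks freshness when it renders. Only the 15-minute constant and the book-now/expired behavior come from the description above; the names are hypothetical.

```typescript
// Hypothetical sketch of the counteroffer acceptance window; names are illustrative.
const ACCEPTANCE_WINDOW_MS = 15 * 60 * 1000; // the 15-minute window described above

interface Counteroffer {
  rate: number;     // counteroffer price
  issuedAt: number; // epoch milliseconds when the counteroffer was issued
}

// No countdown is shown; the UI just treats the offer as active or expired when it renders.
function isCounterofferActive(offer: Counteroffer, now: number = Date.now()): boolean {
  return now - offer.issuedAt < ACCEPTANCE_WINDOW_MS;
}

// While the offer is active, the "book now" rate reflects the counteroffer price;
// once it expires, the load falls back to its regular book-now rate and the
// counteroffer info item is labeled as expired.
function bookNowRate(
  offer: Counteroffer,
  regularBookNowRate: number,
  now: number = Date.now()
): number {
  return isCounterofferActive(offer, now) ? offer.rate : regularBookNowRate;
}
```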

Design - Window

Topic #3: Should we ‘soften’ our rejection?

Pros of not ‘Softening’:

  • Encourages carriers to bid closer to their true cost by creating urgency
  • Quicker resolution, encouraging participation in other loads
  • Eliminates the risk of double ‘acceptance’ and hence difficult decisions for the future

Cons of not ‘Softening’:

  • Final rejection can imply a failed negotiation and wasted time
  • Creates a perception that they can never win this load (when in fact they can bid again and win it)
  • Breaks trust if we do reach out at a later point in time

We chose to not soften our rejection.

Proposed Flow

Broker Board Updates

Now that we had the proposed concepts, it was time to bring parity to the broker board. The broker board is an internal tool our brokers use to match carriers with loads. In other auction types, brokers manually go in and match loads with carriers based on a variety of factors: carrier performance, rate, and time to pickup. With our instant auction (IA), the goal was to automate this process; however, if a load was not matched through IA, brokers could go in and manually accept loads. To begin, I needed to understand the internal tool a little better.

First off, there were no resources dedicated to this tool: no designer, no dedicated engineer, no Figma files, no docs. To make the changes, I had to create a file from scratch.

The second part was learning about the Convoy brokerage operation. First, I met with some of the brokers and learned about their day-to-day operations and their frustrations with the tool. I also completed the broker board training used to onboard brokers.

My goal on the broker board was to show the following:

  • Indicate which loads were IA
  • Display bid history
  • Carrier bid (true price bid)
  • Last event
  • Timestamp of the last carrier interaction
  • Number of bids the carrier has placed on the load
  • Last, identify some small UI changes that I felt were necessary (the broker board was a visual mess)

Broker Board Concepts

Rollout: Phase 1

After months and months of back and forth, we were finally ready for the initial rollout. To track performance, we closely watched carrier feedback via customer support calls, in-app carrier feedback, brokerage calls, and the metrics of matched bids. Overall, we received positive feedback on getting a quick response. However, we also received a lot of negative feedback on our rates, which caused some major concerns. Additionally, loads weren’t getting matched, and we wanted to discover why.

Follow-up Research

I conducted calls with 10 carriers to learn about their experience with IA. These were meant to be open-ended, qualitative calls to learn what carriers liked and disliked about IA. Our findings would determine whether we should roll back the experiment.

Here’s what we discovered:

  • Rates were too low and carriers perceived IA rates to be lower than other auction types
  • Our counteroffers didn’t feel like a negotiation because the counteroffer rate wasn’t significantly different from the book-now rate, or would be just $20 less than their bid

Insights

  • In aggregate, carriers prefer Instant Auctions over Timed and Legacy auctions because the lower cost of waiting helps them plan better. Instant Auctions send a definitive message that allows carriers to be decisive.
  • Carriers understand that they need to bid again if they’re interested in the load. Most do not bid again because they have covered their truck elsewhere, and most were discouraged from bidding again because they deemed the counteroffer in the first bidding session too low.
  • The counteroffer workflow is not creating urgency to accept the offer. While the counteroffer value might be part of it, the urgency is missing even for otherwise acceptable counteroffers.
  • Counteroffer amounts are either too close to the accept-now prices or below market prices, according to carriers.
  • According to carriers, with our initial conservative pricing we are encouraging them to wait until “the last minute”/“very late,” when we finally price these loads appropriately.
  • Some carriers were frustrated that we weren’t communicating that we were holding their bids and wanted the ability to cancel them. Carriers who received a rejection or didn’t accept a counteroffer were still receiving a bid confirmation.

One of the challenges with introducing a new bidding mechanism during a soft market is that carriers are already frustrated with the brokerage process. Rates weren’t reflecting inflation and carriers were frustrated with the bidding process because they felt like they were getting cheated. In an automated bidding flow, we had to be sensitive to the current freight environment but also consider the business. My goal was to translate these findings into a new flow that would hopefully leave the carrier less frustrated.

For the initial concepts and research, the goal wasn’t necessarily matching carriers with loads but reducing the amount of time a carrier spent waiting to hear a response. If a carrier placed a bid and decided not to take the load, they could move on to another one. And while matching loads wasn’t the main goal, faster responses would in turn allow us to match loads sooner rather than closer to pickup time.

When we learned that we weren’t matching loads in the first rollout, we wanted to adjust the experience to encourage carriers to submit another bid if they didn’t accept our counteroffer or if their bid was rejected.

Rollout: Phase 2

While science worked on the matching mechanisms, my goal was to simplify the flow and reduce complexity. The lesson I learned from the initial rollout was that we tried to solve too many issues at once instead of developing a foundation that could be built upon.

Biggest Learning

The beauty of experimentation is that we can learn and use the data to inform design decisions. It is better to start with a simple experience that meets the basic requirements rather than a complex design based on initial assumptions.

Explorations

To create a sense of urgency and eliminate confusion about how long the carrier had to accept a counteroffer, we removed the 15-minute acceptance window. I partnered with our product marketer on copy that would set expectations about what would happen if they didn’t accept.

These insights allowed us to remove the counteroffer info item on the load details screen and the counteroffer section in My Bids.

Proposed Flow

We also partnered with brokerage to create a new mechanism for accepting “stale” bids. Instead of automatically accepting a stale bid, we would ask the carrier if they were still interested in the load.
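A minimal sketch of what that stale-bid check could look like, assuming it reduces to a freshness test on the bid. The names are hypothetical, and the staleness cutoff is left as a parameter because the actual threshold isn't stated here.

```typescript
// Hypothetical sketch: instead of auto-accepting a stale bid, prompt the carrier to reconfirm.
interface Bid {
  loadId: string;
  rate: number;
  placedAt: number; // epoch milliseconds when the bid was placed
}

type BrokerAction =
  | { kind: "accept"; bid: Bid }
  | { kind: "askIfStillInterested"; bid: Bid };

// staleAfterMs is a parameter because the real cutoff isn't given in the case study.
function handleBrokerAcceptance(
  bid: Bid,
  staleAfterMs: number,
  now: number = Date.now()
): BrokerAction {
  const isStale = now - bid.placedAt >= staleAfterMs;
  return isStale ? { kind: "askIfStillInterested", bid } : { kind: "accept", bid };
}
```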

Quantitative Follow-up Survey

I partnered with science to draft a quantitative phone survey that would determine whether IA should roll out to 100%. Our goal was to conduct 100 phone calls with our sample group and determine whether the new experience was successful: do carriers prefer instant auctions over timed auctions? To conduct the phone survey, we partnered with our customer service team.

Survey goals:

Enable a judgment-based decision on the full rollout of Instant Auction

Evaluate the constructs:

  • Satisfaction of IA experience
  • Satisfaction of non-IA experience
  • Preference for IA/non-IA (including open-ended qualifier)
  • Importance of quick bid response (including open-ended qualifier)

Metrics

The headline results for the IA experiment were:

  • Carriers loved it (86% of surveyed carriers indicated preference for IA)
  • Significant 70% increase in intents/hour
  • Significant 11 percentage point increase in zero-touch automation rate
  • No significant change to buying power (CI -720 to 250 bps)
  • Significant degradation to metrics related to service quality (e.g. severe falloffs), but without impact to service quality metrics that matter to shippers

A Win Win

Overall, the survey results were very positive and enabled us to roll out IA to 100%. The survey also provided us with another big win: by developing it and the process around it, we helped build a standardized approach at Convoy for collecting quantitative data. In the past, we had struggled as an organization to find a reliable way to collect data. The new process helped evolve how research is done by providing a consistent and replicable method of data collection. By utilizing internal customer support teams, the survey can be administered to a large number of participants quickly and efficiently, allowing Convoy to collect a larger sample size and achieve more statistically significant results.

Instant Auctions Launch 🎉


Booking Loads Made Faster with Instant Bid Responses | Convoy

Metrics

For matched loads, Convoy expected over a 5% program margin for IA vs. other auction types. 66% of IA matches came through counteroffers.

“This is the best of the best. In the past, I couldn’t always accept Convoy loads that I had won, because I’d booked another load while waiting for a response. Now, I know right away if I’m going to haul it, so I don’t need to search for another load. This significantly reduces the time it takes me to plan and book my schedule!” - Ghareeb Nawaz, Stryker Trans

“I’m able to make smarter bids because I’m getting instant feedback. I can submit as many bids as I want and sometimes Convoy provides a counteroffer. This is a huge benefit to carriers like me.” - Josh Rickards, Rickards Transportation Services LLC

Retro

One of the biggest challenges in this project was balancing qualitative and quantitative data and research. Convoy was a data-driven organization, so decisions were primarily informed by quantitative data, and my role as the designer was to advocate for the user and bring empathy to the design decisions. Quantitative data can provide more generalizable insights into user behavior and preferences, but it can also be limited by the specific metrics being tracked, and it may not explain the why behind user behavior.

My role in this project was to propose a solution that weighed the potential risks while considering both carrier and business objectives. In cases where the ideal user experience wasn’t feasible from a business perspective, like giving carriers the ability to counter the counteroffer, we collaborated as a team to develop solutions.

This meant collaborating: 1) with science to set up safeguards to make the first counteroffer more desirable, 2) with our product marketer to write copy that would set expectations and encourage the carrier to place another bid if they didn’t win the load, 3) with our quantitative researcher to look at how we displayed prices, 4) with brokerage to develop a new process to accept “stale” bids that wouldn’t leave carriers frustrated

This project was very rewarding and challenging. Some of the challenges didn’t lie in the project itself but in the two reorgs the company underwent during it. The reorgs impacted teams in various ways, including people shifting into new parts of the org and valuable team players getting laid off. Reorgs are strange and challenging to navigate. I was extremely lucky to have collaborated with so many great people on this project, and despite all the challenges we faced, we were successful in launching a highly impactful feature.