Vacasa.com Conversion Rate Optimization: Strategy and Methodology

My role: Product Manager leading the Vacasa.com product and engineering team (7 engineers, 1 designer, 2 analysts).

Background: Vacasa.com is the largest direct-booking site of any vacation rental property management company in the world, where guests can make reservations across a catalogue of over 40,000 vacation rentals in the US, Canada, and Mexico. Over 40 million users visited Vacasa.com in 2023, generating over $380 million in rental revenue. Vacasa.com is a strategic asset and differentiator for Vacasa’s business model. While other major property managers depend on Online Travel Agencies (OTAs) and vacation rental marketplaces such as Airbnb, VRBO, and Booking.com to connect customers with their homes, direct bookings on Vacasa.com accounted for over one-third of Vacasa’s revenue, providing a key hedge against the whims of OTAs and boosting profits for our homeowners.

The vacation rental industry has seen explosive growth over the past decade and has matured into a crowded and competitive vertical. To compete with household names such as Airbnb, TripAdvisor, and Expedia, the Vacasa.com team would need to find continual conversion rate wins via new and improved features and user experiences to grow direct bookings on the site. As the Product Manager for Vacasa.com, I was responsible for defining the strategy and methodology behind our approach to conversion rate optimization, as well as ownership and execution of the roadmap.

Knowing Your Users - Site Analytics & Journey Mapping

In order to give directions, you need to know the lay of the land. That means understanding how your users find you, what they’re looking for, how they navigate through the conversion funnel, and where and why they either make the decision to convert or to leave. Data comes first: prior to any optimization efforts, my first step was partnering with the analytics team to map out our user journeys, flows, and conversion funnels across the site.

An example of a conversion funnel diagram for users landing on the homepage.

Mapping our users’ journeys started with:

1) Where did they come from and where did they land?

How your visitors found you and where they enter the site is one of your best clues into understanding who they are and what they’re looking for. As someone with years of experience in SEO strategy, I know that the conversion funnel starts well before a user lands on a page in your domain. Careful analysis of traffic mix by landing page can reveal all kinds of insight into your users, including their familiarity with the brand, whether they’re high-intent or a casual shopper, and which product offerings might be a best fit. A user who entered your homepage URL directly into their browser can be entirely different from a user who clicked on a paid ad from a category-type search or a user who found a specific product page through a long-tail Google search.

Mapping and sizing entry points is also a key input into prioritization order. In my experience, many optimization programs focus on high traffic + high conversion funnels, when it’s often high traffic + low conversion journeys that can yield the most value through optimization.

2) How users progress through the funnel

Once a user lands on the site, where do they go? What do they or don’t they interact with? What path do they take down the funnel, and where do they drop off? For each step of the funnel, you should know the share of users who completed the previous step, and the share of users who proceeded to the next step. In most cases online shopping follows a sequence of actions (search, browse, compare, buy) in order to complete a conversion. We call these discrete actions ‘microfunnels’. Mapping out these patterns of behavior allows you to identify locations of friction within your funnel that are ripe for optimization.

Note that this often involves cataloging and enhancing your site analytics. Partnership with analysts and martech to ensure you’re tracking everything you need to is essential to success.
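The step-share bookkeeping described above can be sketched in a few lines. The step names and counts below are illustrative placeholders, not actual Vacasa figures:

```python
# Minimal sketch of a microfunnel drop-off report.
# For each step, report the share of users who completed the previous step
# (step rate) and the share of all entrants who reached it (cumulative).

def funnel_report(steps):
    """Return (step, step_rate, cumulative_rate) for each step after the first."""
    rows = []
    for (_, prev_n), (step, n) in zip(steps, steps[1:]):
        rows.append((step, n / prev_n, n / steps[0][1]))
    return rows

# Hypothetical counts; in practice these come from site analytics.
funnel = [
    ("landed on homepage", 100_000),
    ("ran a search", 42_000),
    ("viewed a unit page", 18_000),
    ("started checkout", 2_400),
    ("completed booking", 1_100),
]

for step, step_rate, cumulative in funnel_report(funnel):
    print(f"{step:<22} step rate {step_rate:6.1%}   cumulative {cumulative:6.1%}")
```

Laying steps out this way makes the friction points obvious: the step with the lowest step rate relative to its traffic is usually the first candidate for optimization.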

3) What specific behaviors are associated with conversion rate - both positive and negative?

Once you’ve mapped out all of the discrete actions users take through your site, the next step is to correlate these behaviors with whether or not they ultimately convert.

We saw, for example, that the earlier a guest entered dates in their search (rather than initially searching a location without dates and entering dates later), the better they converted. There were multiple opportunities to enter or edit dates in our conversion funnel, from the initial search on the landing page to the search results experience to the individual unit page. While all of these were positively associated with conversion rate (indeed, they were required to complete a booking), users who entered dates on their initial search were substantially more likely to convert than those who did so further down the funnel. This led us to the hypothesis that the earlier we could provide a user relevant results (in this case, homes available for their dates), the more likely we would be to convert them.
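A minimal sketch of this kind of behavior-to-conversion comparison, using made-up session records (in practice these would be one row per session pulled from the analytics warehouse, with flags per event):

```python
# Compare conversion rate between two behavioral cohorts:
# sessions that entered dates on the first search vs. those that did not.
# All session data below is fabricated for illustration.

sessions = [
    # (entered_dates_on_first_search, converted)
    (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def conversion_rate(rows):
    return sum(converted for _, converted in rows) / len(rows)

early = [s for s in sessions if s[0]]
late = [s for s in sessions if not s[0]]
print(f"entered dates early: {conversion_rate(early):.1%}")  # 50.0% on this toy data
print(f"entered dates later: {conversion_rate(late):.1%}")   # 25.0% on this toy data
```

The same cohort split can be repeated for any tracked behavior (map interaction, image views, filter usage) to build the full behavior-to-conversion correlation table.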

4) Segmenting User Behavior

Finally, it’s also important to be able to segment your funnels by user attributes including device, traffic source, and new vs. returning.

In the previous example (search dates), we saw that user behavior differed significantly between mobile and desktop, with desktop users much more likely to enter dates on an initial search. This allowed us to hone our hypothesis: by improving the initial date entry on mobile devices, we would encourage more users to enter dates earlier, increasing their likelihood to convert.
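Segmenting a tracked behavior by an attribute like device is a simple group-by. The session log below is a fabricated example:

```python
# Segment a behavior (entering dates on the first search) by device type.
# Session records here are illustrative, not real traffic data.
from collections import defaultdict

sessions = [
    # (device, entered_dates_on_first_search)
    ("desktop", True), ("desktop", True), ("desktop", False), ("desktop", True),
    ("mobile", False), ("mobile", False), ("mobile", True), ("mobile", False),
]

by_device = defaultdict(list)
for device, entered_early in sessions:
    by_device[device].append(entered_early)

for device, flags in sorted(by_device.items()):
    print(f"{device:<8} entered dates on first search: {sum(flags) / len(flags):.0%}")
```

The same pattern extends to traffic source and new vs. returning users; each segment then gets its own funnel report and cohort comparison.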

From Observations to High-level Hypotheses

Now that we’d thoroughly mapped out our users’ behavior through the conversion funnel, we began to see patterns that informed high-level hypotheses about where there might be friction in our funnel or how we might encourage users to convert. These were less specific feature recommendations than directional goals for changing user behavior.

Some examples:

  • Observation: We saw that the earlier a user entered dates in their search, the more likely they were to convert — and that there was a large disparity between desktop and mobile users

    • Hypothesis: We should test variations of the date entry / calendar feature in order to encourage more users to enter dates on their initial searches, particularly on mobile, thus increasing conversions.

  • Observation: Users who interact with the map feature or view images on our unit (product) pages are more likely to convert than those who do not

    • Hypothesis: By making the map and image features more accessible, we will encourage more users to interact with them and convert.

  • Observation: the first page of our two-page Checkout flow sees a high drop-off rate

    • Hypothesis: given that users who begin checkout tend to be high-intent, friction on the first page is likely causing avoidable abandonment.

Building a Testing Backlog: Competitor comparison, analogous inspiration, and user research

Now that we had our high-level observations and hypotheses, we needed to develop a plan for how we would test them. In other words, we knew what we wanted our users to do, but how exactly could we achieve that change in behavior? What specific new features or UX treatments would drive the desired behavior?

For example, we hypothesized that encouraging more mobile users to enter dates on their initial search would drive more conversions, but what specific change or changes would we need to make to do so? We needed to not only brainstorm a list of ideas, but also have a way to vet and verify that these were the tests we wanted to run before spending costly engineering time to build them.

We pursued the following process to build a vetted backlog of test ideas:

  1. Learn from the best — Competitive Comparison

    Many of our competitors were much bigger and had been at the optimization game longer, so to a degree we had a ready-made roadmap of new treatments to test. Working with Vacasa.com’s UX Designer, we thoroughly catalogued the experiences on competitor sites, with an eye for 1) what they were doing differently, 2) how different their experience was from our current experience, and 3) whether there were common patterns shared across different sites that had evolved to best fit the experience. It was also important to know whether the competitor was a test-centric business and what they were testing. Some organizations have a testing philosophy baked deep into their bones, where everything is always being tested. Others clearly don’t prioritize optimization to the same degree. One easy way to tell whether a competitor tests, and what they’re testing, is to use a cookie manager to continually clear your cookies across multiple visits to their site (when the experience changes between visits, you know the competitor is running a split test). Note that this is not the same as copying or stealing from competitors: very often, what works for a competitor won’t work for you, so it’s important to be aware of the nuanced differences between your business models and user bases. Airbnb was the leader in our industry, but their business model and user base were so different from ours that, many times, what was the right approach for them wasn’t right for us.

  2. Analogous Inspiration

    Inspiration should not be limited to your own vertical — if so, you’re always going to be playing catch-up. It’s important to look at analogous shopping experiences with similar journeys and flows. Travel was rife with these, but often a better source of outside-of-the-box inspiration was real estate sites like Zillow, Trulia, RedFin, and Apartments.com. The funnel on real estate sites shared a lot of elements with our own: search, browse, location landing pages, unit comparison, etc. Once again, it was important to identify sites that pursued a rigorous experimentation program.

  3. User Research

    The best way to generate or vet ideas for tests is to ask your users themselves. In order to hear directly from our visitors, the UX Lead and I would partner for a series of moderated and unmoderated user interviews (largely conducted via UserTesting.com). Hearing a user narrate their own decision-making process through the shopping funnel is invaluable qualitative data. While interviews can be valuable early in the brainstorming process, we found that they were most productive as a last step to vet our backlog of ideas. Having mock-ups of various treatments ready for interviewees to interact with led to more substantive insight. Also, user interviews can be time consuming and costly, so we only wanted to invest that effort into our biggest problems.

Using the three sets of observation-hypothesis from earlier, here were the specific test ideas:

  • Observation: We saw that the earlier a user entered dates in their search, the more likely they were to convert — and that there was a large disparity between desktop and mobile users

    • Hypothesis: We should test variations of the date entry / calendar feature in order to encourage more users to enter dates on their initial searches, particularly on mobile, thus increasing conversions.

      • Test: our initial calendar input was not built from a mobile-first perspective, and does not follow responsive design best practices or patterns seen on competitor or analogous sites. We should test a full-screen, scrollable calendar implementation on mobile devices.

  • Observation: Users who interact with the map feature or view images on our unit (product) pages are more likely to convert than those who do not

    • Hypothesis: By making the map and image features more accessible, we will encourage more users to interact with them and convert.

      • Test: 1) We should make it easier for users to find these conversion-associated features using sub-navigation bar on this page. 2) We should improve the interactivity of our map, to make it easier for users to explore, by implementing a click-to-open map feature. 3) Our image browsing experience does not follow the patterns established on competitor sites. Instead of an image slider, we should test a full-screen, scrollable image gallery experience that allows users to more easily navigate through all unit photos.

  • Observation: the first page of our two-page Checkout flow sees a high drop-off rate

    • Hypothesis: given that users who begin checkout tend to be high-intent, friction on the first page is likely causing avoidable abandonment.

      • Test: Our first page of checkout involves several discrete interactions/steps for the user to complete. By breaking these apart into multiple pages, we will reduce friction through the flow.

Putting it All Together: Prioritizing a Testing Roadmap

At this point we have the following:

  • A series of data-backed observations about how our users behave on the site and what outcomes those behaviors tend to lead to

  • Hypotheses for how changes to those behaviors will result in the desired outcomes

  • A list of specific ideas to test these hypotheses

If you’re just getting started, this can be a huge list. So, how do we prioritize what to go after, so that we’re having the biggest impact as soon as possible? Our prioritization rubric took the following inputs:

  1. Effort - How much work would it be to design and build this test? T-shirt size of engineering and design

  2. Estimated impact - How big of an impact might it have on your KPIs? This can be difficult to size, because you don’t actually know how much of an impact it will have until you test (if you did, there wouldn’t be a reason to test it). Using the behaviors we gleaned from our funnel analysis, we’d apply a range of potential impact to that specific behavior and use the data to size the outcome. For each hypothesis, what would a 2% change in that specific behavior net out to at the conversion level? What would a 10% change? We knew that, for example, a 2% improvement to our unit page’s start checkout rate would result in many more conversions than a 2% increase in our checkout completion rate.

  3. Confidence - how sure are we that this is worth testing? High, Medium, or Low. This is the most subjective input, and it can be difficult to accurately assess without relying entirely on personal opinion. Some important considerations were: how does the qualitative data from user research support this test? Is this a pattern being used by test-centric competitors or analogous sites? Does it follow UX best practices? Did previous tests in this area yield results?
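The impact-sizing arithmetic above can be sketched as a simple scenario calculation. The segment share here is an assumed illustrative figure, not Vacasa's actual traffic mix:

```python
# Back-of-envelope impact sizing: a relative lift in one segment's conversion
# rate scales down to an overall CVR lift by that segment's share of conversions.
# The 0.6 share below is an assumption for illustration only.

mobile_share_of_conversions = 0.6

for behavior_lift in (0.02, 0.10):  # the 2% and 10% change scenarios
    overall_lift = behavior_lift * mobile_share_of_conversions
    print(f"{behavior_lift:.0%} mobile lift -> {overall_lift:+.1%} overall CVR")
```

Running the same scenarios against each funnel step shows why an early-funnel behavior change (e.g. start-checkout rate) usually outweighs an equal-percentage change late in the funnel: it applies to a much larger base of sessions.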

To take the example of the calendar input observation, we would see:

  • Observation: We saw that the earlier a user entered dates in their search, the more likely they were to convert — and that there was a large disparity between desktop and mobile users

    • Hypothesis: We should test variations of the date entry / calendar feature in order to encourage more users to enter dates on their initial searches, particularly on mobile, thus increasing conversions.

      • Test: our initial calendar input was not built from a mobile-first perspective, and does not follow responsive design best practices or patterns seen on competitor or analogous sites. We should test a full-screen, scrollable calendar implementation on mobile devices.

        • Effort: M (engineering), L (design)

        • Estimated Impact: A 2-10% increase in mobile search conversion would result in a +1.5%-6% increase in overall CVR and $XX in annual revenue

        • Confidence: High. Our current approach is significantly different from competitors and does not follow mobile best practices.

After applying this rubric to our backlog of testing ideas, we used a simple formula to prioritize between them.
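One possible shape for such a formula (a sketch of the general approach, not the exact formula we used) is an ICE-style score: estimated impact weighted by confidence, divided by total effort. The point values and weights below are illustrative assumptions:

```python
# Hypothetical prioritization score combining the three rubric inputs.
# T-shirt sizes and confidence weights are assumed values for illustration.

EFFORT_POINTS = {"S": 1, "M": 2, "L": 3, "XL": 5}
CONFIDENCE_WEIGHT = {"Low": 0.5, "Medium": 0.75, "High": 1.0}

def priority_score(impact_midpoint, confidence, eng_effort, design_effort):
    """impact_midpoint: midpoint of the estimated overall CVR lift range."""
    effort = EFFORT_POINTS[eng_effort] + EFFORT_POINTS[design_effort]
    return impact_midpoint * CONFIDENCE_WEIGHT[confidence] / effort

# Example: the mobile calendar test (M engineering, L design, High confidence,
# estimated +1.5% to +6% overall CVR lift -> midpoint 3.75%)
score = priority_score(0.0375, "High", "M", "L")
print(f"priority score: {score:.4f}")
```

Sorting the backlog by this score surfaces the cheap, high-confidence, high-impact tests first; ties are then a judgment call.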

Instilling an Experiment-centric Culture

Successfully instituting an optimization program requires building a test-centric product culture. That means that all collaborators (analytics, UX, engineering) and stakeholders (in our case, Marketing, Sales, Revenue Management, and Customer Experience) are aligned on the following:

  1. Everything must be tested. No non-essential updates go out to the site that aren’t behind a test.

  2. Always be testing. Testing should be a continual, iterative process and you should always have tests running so that you’re always learning and improving.

  3. Follow the prioritization rubric. Everyone always has ideas for testing, but in order to vet and prioritize them they all must be supported by the prioritization rubric, meaning that they are 1) based on observations from the data, 2) have a hypothesis on how behavior will change, 3) have concrete ideas on the approach based on research, and 4) are sized with design and engineering and estimated based on impact and confidence.

Test iteration and Re-prioritization

A successful optimization program is all about iteration. Discoveries build on discoveries and wins build on wins. Observing how a test actually impacts behavior is one of your biggest clues for what to test next, and the size of impact future testing might yield. Tests that lose aren’t wasted time, but are valuable insight into your users and your testing process that can inform, correct, and refine your assumptions. That’s why it’s more about the process than any individual test.
