Start at the End Summary and Review

by Matt Wallaert

Has Start at the End by Matt Wallaert been sitting on your reading list? Pick up the key ideas in the book with this quick summary.

When a new product or service is successful, it doesn’t just make a lot of money; it changes the world. That might sound dramatic, but consider the iPhone when it debuted. It didn’t just move a lot of units; it fundamentally changed the way we interact with our mobile devices, integrating them into our everyday lives in many different ways. In other words, it transformed our behavior. 

That’s why behavioral modification should be at the forefront of our minds when we approach the task of designing new products and services. Instead of just thinking about how we can sell stuff, we should take a step back and ask ourselves, what’s the behavior we’re trying to promote? Or to put it a little more poetically, what kind of reality are we trying to create? 

From there, we can work backward in designing new products and services. That’s what Start at the End is all about. Drawing on behavioral science, this book summary offers a systematic approach to product and service design.

In this summary of Start at the End by Matt Wallaert, you’ll learn 

  • how to sum up your vision of a new reality into a single, powerful statement; 
  • how to create a map of the influences on your potential customers’ behavior; and 
  • how to use that map to modify those influences and thereby transform the world. 

Start at the End Key Idea #1: To begin the Intervention Design Process, you need to identify and validate a potential insight. 

The process of assessing how we can modify our potential consumers’ behavior is called the Intervention Design Process, or IDP. It begins with a special type of observation called a potential insight. When you have one of these insights, you’re perceiving a gap between the world as it is (the real world) and the world as you want it to be (the ideal world). 

To make this somewhat abstract definition more concrete, let’s dig into an example. Back in 2012, when the author was helping Microsoft develop the search engine Bing, he and his team had a potential insight: it seemed like children weren’t using search engines at school anywhere near as much as one might expect. 

At first glance, children, schools and search engines should have been a perfect combination. After all, children are brimming with questions, schools are supposed to foster their curiosity and search engines can help them answer nearly any question they might have. In an ideal world (at least from Bing’s perspective), they would be conducting numerous online searches per day (preferably on Bing, of course). But the team suspected a gap between the real world and the ideal world; something was driving a wedge between the two. 

Now, the keyword here is “suspected.” At this point, no one at Microsoft had any empirical evidence to support the notion in question; it was just a hunch. That’s why it was a potential insight; it hadn’t yet been confirmed as an actual insight. 

For all anyone at Microsoft knew at the time, their hunch could have been wrong, and the gap they were perceiving could have been a figment of their imagination. In that case, it would have been a waste of time, money and energy for the company to throw resources at solving a problem that didn’t exist. That’s true of any potential insight, so it’s crucial to test your insight. 

To do that, you need to seek out quantitative or qualitative validation of your insight. For the author and his team, that meant collecting data about children’s internet usage at school – an example of quantitative validation. It also meant going to schools and watching kids’ computer usage in person – an example of qualitative validation. 

It turned out their hunch was correct: on average, each student was conducting less than one search per day. The gap between the ideal world and the real world was proven to exist; their insight was validated.
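To make the idea of quantitative validation concrete, here’s a minimal Python sketch of the kind of calculation involved. The log records, student IDs and numbers below are invented for illustration; the real validation used Microsoft’s own usage data and in-person classroom observation.

```python
# Hypothetical search-log records, one (student_id, date) pair per search.
# All values here are invented for illustration.
search_log = [
    ("s1", "2012-03-05"), ("s1", "2012-03-05"), ("s2", "2012-03-05"),
    ("s3", "2012-03-06"), ("s1", "2012-03-06"),
]
students = {"s1", "s2", "s3", "s4", "s5"}    # every student being observed
school_days = {"2012-03-05", "2012-03-06"}   # days covered by the log

searches_per_student_per_day = len(search_log) / (len(students) * len(school_days))
print(f"Average searches per student per school day: {searches_per_student_per_day:.2f}")

# A figure well below 1.0 would quantitatively validate the potential insight:
# kids barely use search engines at school.
```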

If the same can be said of your insight, you can now proceed to the next step of the IDP. 

Start at the End Key Idea #2: The next step of the Intervention Design Process is drafting a behavioral statement. 

So far, you’ve identified the gap between the world as it is and the world as you’d like it to be. Your next step is to write up a formal description of the ideal world you want to create. 

This is called a behavioral statement, and it can be broken into five components. The first is the behavior you’re trying to promote. Usually, it boils down to purchasing and using your product or service. For example, when Uber launched its ride-sharing service in 2009, the behavior the company wanted to promote was, well, taking an Uber. 

The second component is the population whose behavior you want to change. Sometimes, that population can be very broad. Uber’s target population was everyone. Usually, however, your target population will be a narrower demographic, such as a particular age group. 

The third component is the motivation behind the behavior you’re trying to encourage. For Uber, that was simply the desire to get from point A to point B. In the company’s ideal world, any time anyone wanted to do this, they’d take an Uber. 

But when thinking about your ideal world, you have to be realistic. Even within your target population, your product or service usually won’t be usable or desirable for people in every situation. There are certain limitations, or preconditions, on when and how people can use it. These are the fourth component of your behavioral statement.

Here, you’re noting the preconditions that need to be met for people to engage in the behavior you want to promote. For example, to take an Uber at the time of the company’s launch, people needed to have a smartphone equipped with a mobile internet connection and an electronic form of payment. They also needed to live in San Francisco, because the company limited its initial rollout to the high-tech city in which it was based. 

In the fifth and final component of the behavioral statement, you define the data by which you will measure whether or not the behavior you want to promote is taking place. For Uber, the metric was pretty simple: the number of rides people took with their service. 

Now all that’s left to do is to take those five components and fuse them into a single sentence. For Uber, that would be as follows: “When people want to get from Point A to Point B, and they have a smartphone with connectivity and an electronic form of payment and live in San Francisco, they will take an Uber (as measured by rides).” 
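If it helps to see the structure spelled out, here’s a minimal Python sketch of how the five components slot into that template. The class and field names are my own shorthand rather than terminology from the book.

```python
from dataclasses import dataclass

@dataclass
class BehavioralStatement:
    """The five components of a behavioral statement."""
    population: str        # whose behavior you want to change
    motivation: str        # why they would engage in the behavior
    preconditions: list    # limits on when and how they can engage in it
    behavior: str          # the behavior you want to promote
    data: str              # how you'll measure whether it's happening

    def as_sentence(self) -> str:
        # Fuse the five components into the single-sentence format.
        return (f"When {self.population} {self.motivation}, "
                f"and they {' and '.join(self.preconditions)}, "
                f"they will {self.behavior} (as measured by {self.data}).")

# The Uber example from this section:
uber = BehavioralStatement(
    population="people",
    motivation="want to get from point A to point B",
    preconditions=["have a smartphone with connectivity",
                   "have an electronic form of payment",
                   "live in San Francisco"],
    behavior="take an Uber",
    data="rides",
)
print(uber.as_sentence())
```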

Start at the End Key Idea #3: Now it’s time to map out the pressures influencing your target population’s behavior. 

Thanks to your behavioral statement, you now have a precise description of the ideal world you’re trying to create. The next step is to start figuring out how to make it a reality! 

Here, we begin with two basic premises. First, there must be certain factors holding people back from engaging in the behavior we want to promote; these are inhibiting pressures. Second, there must be certain factors encouraging people to engage in that behavior; these are promoting pressures.

Your task now is to identify these pressures – a process called pressure mapping. Let’s illustrate how to do this with a rather mouth-watering example: M&Ms. 

Imagine you’re working for Mars, the company that makes M&Ms, and your goal is to encourage the behavior of eating the chocolate candy. What are the promoting pressures for this behavior? Well, to start with the obvious, M&Ms are tasty. They’re also visually attractive, coming in an array of vivid colors. 

These colors provide an example of irrational promoting pressures. They don’t actually affect the taste or nutritional content of M&Ms; the colors are completely superficial. A completely logical, M&M-chomping android couldn’t care less about them. Nonetheless, such irrational pressures can be very powerful. Just imagine what would happen to M&M sales if you changed their colors from cheerful shades like red and green to shades that reminded people of vomit and urine! 

Now let’s turn to the inhibiting pressures. Here’s a big one: availability. The harder the M&Ms are to get, the less likely people are to eat them. Imagine you had a bowl of M&Ms right next to you, right now. You’d probably be pretty tempted to grab a handful of them. Now imagine they’re 20 feet away from you in a cabinet, then 20 meters away in a vending machine and then 20 blocks away in a convenience store. Notice how the temptation gets weaker and weaker? 

Now, there’s one obvious pressure we haven’t talked about so far: calorie and sugar content. That’s another inhibiting pressure, right? Well, as you’ll learn in the next key idea, the matter is a bit more complicated than that, and it illustrates why pressure mapping is a little trickier than it might seem at first glance. 

Start at the End Key Idea #4: The pressures that promote and inhibit behavior are fluid and complicated. 

If you’re conscious of your health, the high calorie and sugar content of a pack of M&Ms is probably going to be a pretty significant inhibiting pressure, dissuading you from eating them. But if you’re feeling hungry or experiencing a blood sugar crash, it could also be a promoting pressure, tempting you to gobble them down. 

Here we encounter an example of a counter-rational pressure – a pressure that can push in either direction, contrary to what a rational observer might expect. These counter-rational pressures are context-dependent. For example, M&Ms have playful, lighthearted branding that features anthropomorphic candy characters. In the context of a kid’s party, that branding is a promoting pressure – but at a romantic dinner, it would probably feel pretty out of place; the branding would then become an inhibiting pressure. 

The more you analyze them, the more you realize that all pressures are context-dependent. For example, take something seemingly as cut-and-dried as the cost of M&Ms. At a couple bucks per pack, they might seem pretty cheap to you – another promoting pressure. But imagine you’re a six-year-old boy who needs to save up his allowance for a week to buy one. Or imagine you’re one of the majority of people on Earth who lives on less than $2.50 per day. Now they’re expensive – an inhibiting pressure. 

Well, sometimes. Expensiveness can also be a counter-rational promoting pressure. For example, the high cost of jewelry is a part of its appeal; the expense conveys quality, status and a sense of luxury.

Point being: promoting and inhibiting pressures are complicated. They shift with context, and they can often be either counter-rational or outright irrational. 

That’s why we shouldn’t rely on our intuitions or preconceptions when we’re trying to identify and map out those pressures. The reality of them may be different than our perceptions or assumptions – and there may be pressures we haven’t even considered. To do pressure mapping well, you need to base it on empirical research and validate it with evidence – collecting data, conducting interviews and so forth. 

But here’s the good news: if you did a solid job with the insight-validation phase of the IDP, you’ll already have plenty of evidence on which to draw when you reach the pressure-mapping stage. 

Start at the End Key Idea #5: A pressure map gives you a full picture of why your target population is behaving the way it’s behaving. 

Let’s say you’ve finished mapping out your target behavior’s inhibiting and promoting pressures. On a whiteboard, you may have drawn a picture of the behavior in the middle, surrounded by downward arrows for each of the inhibiting pressures and upward arrows for each of the promoting pressures. 

You can now see at a glance the balance of forces surrounding the behavior. The more the promoting pressures outweigh the inhibiting pressures, the more likely people will be to engage in the behavior. Conversely, the more strongly the inhibiting pressures push down against the promoting pressures, the less likely people will be to engage in the behavior. 
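As a rough illustration of what such a map boils down to as data, here’s a small Python sketch using the M&M example from the previous key idea. The pressures come from the text, but the numeric weights are invented purely for illustration; in practice, they would come from your research.

```python
# Pressure map for one target behavior. Promoting pressures push the behavior
# up, inhibiting pressures push it down; the weights below are made up.
pressure_map = {
    "eat M&Ms": {
        "promoting": {"they taste good": 5, "vivid, attractive colors": 2},
        "inhibiting": {"hard to get hold of right now": 4,
                       "health concerns (calories, sugar)": 2},
    },
}

for behavior, pressures in pressure_map.items():
    promoting = sum(pressures["promoting"].values())
    inhibiting = sum(pressures["inhibiting"].values())
    print(f"{behavior}: promoting {promoting} vs. inhibiting {inhibiting} "
          f"(net balance {promoting - inhibiting:+d})")
```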

For example, when the author was working with the Clover Health insurance company, he and his team had an insight: people in the black community were getting flu shots at a much lower rate than the general population. After validating this insight, coming up with a behavioral statement and doing some research, they mapped out a number of promoting pressures and inhibiting pressures. 

One of the main promoting pressures was the idea that a flu shot is good for your health. This turned out to be fairly weak among black people in the United States; many of them responded to it by asking, “Why do I need a shot? I’m already healthy.” 

Meanwhile, on the other side of the equation, there were several strong inhibiting pressures. For example, many black people were bothered by the fact that the formula for the flu shot was changed every year. 

Health care providers do this for a good reason; it helps to keep the flu shot effective. But for some black people, it might seem like there’s some sort of medical experimentation going on. That taps into some very real and very painful history, such as the Tuskegee Syphilis Study. Conducted by US government researchers from 1932 to 1972, this study involved denying antibiotic treatments to black research subjects, killing many of them in the process. 

Understandably, then, there was a whole lot of distrust coming from multiple directions at once. The strength of these inhibiting pressures was overpowering the weaker promoting pressures, resulting in a low rate of black people getting flu shots. 

By understanding the pressures behind this phenomenon, the author and his team were now in a position to start thinking about how to change it, which brings us to the next step of the IDP. 

Start at the End Key Idea #6: Think about how to change the pressures that are influencing your target population’s behavior. 

At this point in the IDP, you’ve mapped out the inhibiting and promoting pressures that surround your target behavior. As a result, you now know the balance of power between the two sides. 

To promote your target behavior, you need to alter that balance of power to make it more favorable to the reality you want to create. You can’t directly force people to engage in the behavior in which you want them to engage – but you can indirectly nudge them in one direction or another by manipulating the pressures that surround them. 

There are two basic ways of doing this: decreasing the inhibiting pressures or increasing the promoting pressures. Either way, you’ll need to take some sort of action to accomplish these objectives. By taking that action, you’re making an intervention into the existing reality. So your task now becomes a matter of coming up with ideas for possible interventions. 

To illustrate these ideas, let’s return to the example of the author’s work with Clover Health. Remember, many members of the black community were unconvinced of the health benefits of flu shots (a weak promoting pressure), as well as being concerned about their resemblance to medical experimentation (some strong inhibiting pressures). 

The author and his team came up with 20 possible interventions. That’s a pretty typical number at this stage of the IDP. It’s also way too many ideas to test, so now the task was to narrow them down to a more feasible number, like five. 

One way to do this is to combine as many interventions together as possible. For example, the author and his team reasoned that the black community’s skepticism and concerns about the flu shot all involved a feeling of distrust in one way or another. They knew that for many members of the community, their most trusted sources of guidance were the leaders of their churches. So the team had an idea for a possible intervention: reach out to those leaders and convince them to talk to their congregations about the benefits of the flu shot and to reassure them, thus intervening in multiple pressures at once. 

Those are the sort of win-win, combinatory solutions you’re looking for at this stage of the IDP. Once you’ve identified a handful of them, you’ll be ready for the next step. 

Start at the End Key Idea #7: Next, you need to conduct an ethical check on the behavior you’re trying to promote. 

Now that you’ve come up with some possible interventions to promote your target behavior, you’re probably going to be feeling pretty eager to go out into the field and put your ideas into action. 

But hold on – before you move forward, you need to hit the pause button and conduct an ethical check. After all, what you’re setting out to do is to influence people’s behavior. Now, that can be a good thing; just think of anti-smoking campaigns. But it can also be a very bad thing, like the ads that glamorized smoking in the first place.

To conduct an ethical check, here’s the first question to ask: What is the behavior you’re trying to promote, and does it match the goals and motivations of the population to whom you’re trying to promote it?

If it doesn’t match their goals and motivations, then it would be unethical to move forward with your interventions. For example, if you were a tobacco company using advertising to convince people to smoke, you’d be trying to manipulate them into doing something that’s contrary to their paramount goal in life: staying alive. 

Now, you could try to get around this problem by making the following argument: “Sure, no one wants to be unhealthy or dead – but lots of people want to look cool, and they think that smoking helps them accomplish that objective. So in this respect, the behavior does match their goals and motivations. We’re just giving people what they want.” 

This brings us to our second ethical question: Do the benefits of the behavior outweigh the costs? Even if we grant that “looking cool” is a benefit, surely it’s outweighed by the cost of lung cancer and cardiovascular disease that comes with smoking, so promoting this behavior remains unethical. 

If you wanted to, though, you could wiggle out of this ethical corner. Just write up a whole list of supposed benefits that come with smoking, and then cherry-pick or maybe even fund some studies that downplay the costs. 

To counteract this possibility, we can add a third ethical question to the mix: Are you being transparent about your motivations and research methods? If you need to be sneaky to make the benefits look like they outweigh the costs, it’s a sign that you’re being unethical. 

Start at the End Key Idea #8: You also need to conduct an ethical check on the interventions you’re contemplating. 

Let’s say you’ve done an honest ethical check and you’ve confirmed that your target behavior matches your target population’s goals and motivations, that the benefits of the behavior outweigh the costs and that you’ve been transparent in making this determination. All’s well and good; now you can move forward with your interventions, right?

Well, not so fast. It’s not just your target behavior that you need to check; it’s also the interventions themselves. Remember the old maxim: the ends do not justify the means. Even if your target behavior is totally laudable, that doesn’t necessarily mean your interventions are ethical. For example, you could try to convince people to get a flu shot by sending them scare letters telling them they will die if they don’t get one, but that would be a pretty unethical intervention. 

To conduct an ethical check of your possible interventions, you just need to ask the same questions about them as you did with your target behavior. For example, would an untruthful scare letter be in sync with people’s goals and motivations, and would the benefits outweigh the costs? No – people don’t want to be lied to or scared unnecessarily, and the emotional cost of the anxiety that might result from the letter could arguably outweigh the benefit of convincing more people to get flu shots. 

Of course, if you wanted to, you could manipulate your appraisal of the scare letter to make it look like the benefits outweighed the costs. That’s why you also need to be transparent about your methods and motivations when conducting an ethical check of your possible interventions. The same rule of thumb applies here as the one you had with your target behavior: if you need to be devious to make the benefits look like they outweigh the costs, then it’s a sign you’re being unethical. 

So there you have it; that’s the ethical check. Assuming both your target behavior and your possible interventions have passed it, you’re now ready for the last step of the IDP, where you’re finally going to start testing out your ideas. 

Start at the End Key Idea #9: Now it’s time to conduct some pilot studies of your interventions. 

When you reach the final step of the IDP, you enter it with a handful of possible interventions that you think might foster the behavior you want to promote. Now it’s time for the rubber to hit the road and see if they work. 

But you don’t want to spin your wheels too fast here. In theory, you could take one of your possible interventions and roll it out right away on a nationwide or even international scale. But what if it doesn’t work? That’s a lot of wasted time, energy and money. Plus, what about the other four possible interventions you might have come up with? You probably don’t have the resources to try out all of them on a large scale. But if you don’t test them all, you’ll never know which one is the most effective. 

To get around this problem, you need to start out much smaller, with a series of pilot studies, or pilots for short. With each pilot, you’re testing out your intervention with a very small sample of your target population – and you’re doing that in an operationally dirty manner. That means you aren’t trying to implement the intervention in the most efficient way possible – a way that would scale up well. Instead, you’re just trying to test the intervention with as little disruption to your organization as possible. 

For example, in a fully operational, scaled-up version of your intervention, it might be much more efficient to use an automated computer system to take care of a task. But that could be a pain in the neck to develop, and it might be a lot of work for nothing if the intervention proves to be a nonstarter. So it would be better to do the job manually at this stage. Remember: you can always develop the fancy computer system later on if you discover the intervention is a winner and you want to scale it up. 

The whole point of the pilots is to see whether the interventions work, so the question now becomes: How do you show that? Well, if you’ve caught the drift of this summary so far, you’ll already know the answer: look at the data. But there’s an important caveat to add to this answer, which we’ll look at in the next and final key idea. 

Start at the End Key Idea #10: Finally, you need to conduct more formal tests of your interventions; then you can decide which ones to implement. 

Assuming you’ve collected data for one of your pilot studies, you and your team now have some sense of whether or not the intervention you’re looking at is a promising one. Your sample size will have been pretty small, so you can’t be very certain of your results at this stage. But with at least some degree of certainty, you can now make a statement like, “this intervention seems to increase people’s engagement in our target behavior by 20 percent.” 

In scientific parlance, the confidence you can place in a statement like this is expressed by your data’s p-value. Counterintuitively, the lower the value, the higher the confidence. Roughly speaking, a p-value of 0.05 means there’s only a 5-percent chance you’d see a result like yours if the intervention actually made no difference. 

In academia, scientists normally look for a p-value of less than 0.05 before they trust their data. But you’re designing products and services, not doing hallowed research in the ivory tower, so you can afford a greater degree of uncertainty. A p-value of less than 0.20 should do. Remember, that still means there’s no more than a 20-percent chance you’d see a result like this by pure luck, and those aren’t bad odds to bet on in the real world. 
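For a sense of what checking a pilot against that 0.20 threshold might look like, here’s a minimal Python sketch using a simple permutation test. The group sizes and success counts are invented for illustration, and a real analysis might well use a different statistical test.

```python
import random

# Hypothetical pilot data: 1 = person performed the target behavior, 0 = didn't.
pilot_group   = [1] * 18 + [0] * 42   # 60 people who got the intervention
control_group = [1] * 10 + [0] * 50   # 60 people who didn't

observed_lift = (sum(pilot_group) / len(pilot_group)
                 - sum(control_group) / len(control_group))

# Permutation test: if the intervention made no difference, reshuffling people
# between the two groups should produce a lift this large fairly often.
combined = pilot_group + control_group
n_pilot = len(pilot_group)
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(combined)
    lift = (sum(combined[:n_pilot]) / n_pilot
            - sum(combined[n_pilot:]) / (len(combined) - n_pilot))
    if lift >= observed_lift:
        extreme += 1

p_value = extreme / trials
print(f"Observed lift: {observed_lift:.0%}, p-value: {p_value:.3f}")
print("Promising - worth a formal test" if p_value < 0.20 else "Could easily be a fluke")
```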

Once you’ve finished piloting all your possible interventions and figured out which ones show promising results and clear the 0.20 p-value hurdle, it’s time to conduct a more formal test for each of them. Here, you’re going to try out your intervention with a larger sample size to make sure the pilot data weren’t just a fluke. 

When you conduct the test, you’re also going to try implementing it in a way that’s much more operationally clean than your pilot. For example, remember that automated computer system you eschewed at the pilot stage? Well, now would be a good time to start developing it – because you want to test not just the effects of the intervention, but also the overall feasibility of it. Even if the results are fantastic, they might be too costly or cumbersome to achieve. You want to figure that out now before you decide you want to scale up and fully implement your intervention. 

That decision is the final step of the IDP. To make it, you need to engage in some good ol’ fashioned cost-benefit analysis. Looking at the data for the possible interventions, which ones have positive results – and out of those interventions, which are most worth pursuing, once you factor in the costs? Those are the ones you want to scale and implement. 
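As a toy illustration of that final comparison, here’s a short Python sketch. One intervention name echoes the Clover Health example from this summary, but the lift figures, costs and dollar values are all invented; in practice they would come from your test data and budget.

```python
# Rank tested interventions by net benefit. All numbers below are made up.
VALUE_PER_POINT = 10_000   # assumed value of one percentage point of extra uptake

interventions = [
    {"name": "church-leader outreach",  "lift_points": 12, "cost": 40_000},
    {"name": "mailed reminder letters", "lift_points": 3,  "cost": 15_000},
    {"name": "in-clinic posters",       "lift_points": 1,  "cost": 5_000},
]

for item in interventions:
    item["net_benefit"] = item["lift_points"] * VALUE_PER_POINT - item["cost"]

# Scale and implement the ones whose measured benefits outweigh their costs.
for item in sorted(interventions, key=lambda i: i["net_benefit"], reverse=True):
    verdict = "scale up" if item["net_benefit"] > 0 else "drop"
    print(f"{item['name']}: net benefit ${item['net_benefit']:,} -> {verdict}")
```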

If all goes well, the rest will be history. 

Final summary

The key message in this book summary:

The Intervention Design Process is a step-by-step approach to the design of products and services that begins with the behavior you want to promote and then works backward to make that behavior a reality. First, achieve an insight into the gap between the way you want people to behave (your ideal world) and the way they’re actually behaving (the real world). Then write up a behavioral statement formally describing that ideal world. Next, map out the promoting and inhibiting pressures that are preventing that world from coming into being. Then you can start thinking about possible interventions to modify those pressures. Once those interventions have passed an ethical check, you can start trialing them. By the end of this process, you will find one or more interventions that can be scaled up and implemented.