We discussed how to use customer interviews, which is a relatively cheap way of getting information about your idea. Today we're going to talk about surveys, which are another way of finding out whether your product, service, or venture idea is any good before you actually spend a lot of money launching it. Surveys are interesting. They should always be done after interviews, because the interviews teach you what questions to ask. They can be very powerful. I do a lot of survey work. I've surveyed hundreds of thousands of people, and I still get things wrong all the time. So doing this right is hard; doing it badly is very easy. You can think about surveys as serving two purposes, one better than the other. One use, as you can see from the still from the famous "four out of five dentists agree" commercial about chewing a particular brand of gum, is to convince people things are a good idea: four out of five people we surveyed said this is a great product, 90% of people said they would use it. This is something you would use in your pitch to venture capitalists or to customers, and that's marketing. I'm much more interested in how you use surveys to test your own assumptions, analyze your own business, and come up with more successful results. I'm going to give you a few tips for thinking about surveys. It's, again, a complicated topic, but this will hopefully be a useful introduction for entrepreneurs. The first thing you need to do is find a sample. That formula is how you calculate sample size; I think it's the only math I've shown in my lectures so far, and it's not something you need to worry about hugely. What you need to know is that the more people you survey, the better your estimate of the true value of a particular number is.
You've seen this in presidential polls before: the more people you survey, the shorter the confidence interval, and the more certain you are of a number. If you survey 100 random people, you have a confidence interval of plus or minus 10%. That means if 50% of people say they like your product, the true number could be anywhere between 40% and 60%. If you survey 267 people, that interval becomes plus or minus 6%, so the possible range is 44% to 56% if 50% is the number you get in your survey. If you survey 384 people, it becomes plus or minus 5%, and so on. The lesson here, more than anything else, is to survey at least 100 people, or you're not really getting numbers that are anywhere close to useful. Where do you get the people you're surveying? There are really three sets of options. The first is what we call a convenience sample. This is the most common kind of surveying and often the least useful. In a convenience sample, you survey the people already around you: friends through Facebook, through school, through Twitter polls, trying to get answers back. The problem, of course, is that your group of friends is rarely representative of your customers. The best counterexample I know is a successful startup that came out of my class called CommonBond, which has raised over $100 million. What they do is refinance student loans, especially student loans from top business schools. They surveyed their friends at Wharton, and that was actually a useful convenience sample, because those friends were their end customers. But in most cases your customers are not the same as your friends, so convenience samples need to be used with caution. A second option is to purchase a sample. This means buying answers, basically, from a more random, more representative group of people. There are two pretty good ways to do this. One of them is Amazon's Mechanical Turk.
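The sample sizes and intervals above come from the standard margin-of-error formula for a proportion at 95% confidence. Here's a minimal sketch in Python; the 1.96 z-value and the worst-case p = 0.5 are the usual textbook assumptions, not something specific to this lecture:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(moe, p=0.5, z=1.96):
    """Respondents needed to hit a target margin of error at 95% confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

for n in (100, 267, 384):
    # Roughly the plus-or-minus 10%, 6%, and 5% cited in the lecture.
    print(n, round(margin_of_error(n) * 100, 1))
```

Note how quickly the returns diminish: going from 100 to 384 respondents only tightens the interval from about ±10% to ±5%.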
Mechanical Turk is a service with tens of thousands of people working for it; they work from home and, for small amounts of money, do short tasks online on Amazon. Mechanical Turk is used for a lot of different things. If you ever see something that seems almost impossible for a computer to do, it probably is, because it's not being done by a computer; it's being done by Mechanical Turk. If you take a picture of a plate of food and get a calorie count back, in most cases that picture is being sent to Mechanical Turk. Somebody, somewhere in the United States or around the world, is figuring out the calorie count and sending it back to you. For complex tasks that need to be done by humans, you can use Mechanical Turk very easily. On Mechanical Turk, you can expect to pay $0.25 to $0.75 per response for very, very short surveys. For more complicated surveys, you'll spend more. Think of it as minimum wage at a premium, but you can pay a bunch of people to take surveys very easily and get a much more representative sample. Google also offers a survey tool that's relatively inexpensive, and you get some free credits for it with your Gmail account, so it can be very useful. They generally let you ask one question, but you can pick a very narrow group of people to answer it. Google knows a lot about you, so if you want a particular income level or a particular region, Google can ask people just from that group. Again, you get a great sample that way. If you're willing to spend a considerable amount of money and you need answers from a very narrow group, like CTOs of Indian technology companies or buyers interested in cyber-security, there are a number of organizations that will let you hire professional panels, and you can just search for how to get a professional panel.
You'll pay each person involved $50 or $100, plus some fee, to get the panel of people you want to survey. Those are all approaches to getting purchased samples. You could also use ads to get samples. You could put ads out on LinkedIn or Facebook, or use Google AdWords. All are relatively inexpensive ways to advertise a survey. You could advertise it with a prize and get people to come that way. So those are your ways of getting samples. Convenience samples are easy and free but not necessarily representative. Purchased samples are much more representative, but they might be too broad if you're interested in a narrow area. And targeted ads can be very good, but it can be hard to get enough interest, and you have to spend money on the ad side. Now, some brief tips on how to do surveys. The first is about question types. Demographic questions, questions about gender, ethnicity, or educational background, you might actually want to put early in the survey, especially if they're not particularly sensitive questions. Those demographic questions often set people at ease, and they let you compare, later on, the profile of the people who took your survey to the wider profile of people in the United States, so they can be a very useful set of tools. You generally don't want to ask yes-or-no questions; you want questions with multiple kinds of answers. Use yes/no questions only to qualify people. Say you're interested in contacting only people who have bought sweaters (we'll use a subscription sweater service as the example throughout the questions you'll see here). We might ask people up front: have you ever bought a sweater before? If they say no, we drop them from the survey. If they say yes, they take the full survey. Only use yes/no questions to sort people into groups, and be very careful with open-ended questions.
Open-ended questions are questions that give you an essay box to answer. It might say something like, what do you like about sweaters? and then give you a box. There are a few reasons why open-ended questions are risky. The first is that when I write surveys, I tend to think of a survey as having a cost, but that cost isn't monetary; the cost is the attention of the people taking the survey. Every question you ask costs a little bit of their attention, and nothing costs more than open-ended questions. People hate writing essays, so if you give them a box to write a sentence or two, they're going to drop out of your survey at a much higher rate. Additionally, most of the data you get from open-ended questions is pretty low quality. You'll look through an open-ended question you thought was very clever, and it'll read something like "sesame cat, I like cheese," and you're left thinking, I don't know what this means. Why did someone write this in? It's very hard to interpret these answers, so open-ended questions should be used very carefully. At the same time, you can use open-ended questions for sensitive areas your fixed choices may not cover. For example, if you're asking about gender, you'll probably want to include male, female, and an "other" box where people can give you their own gender identity. Even though those boxes aren't used very much, they can be very powerful tools for letting people feel they have choices and aren't being forced into particular boxes. I want to go through a couple of kinds of questions and show you good and bad examples of each. Take a look at this question. Again, we're asking about sweaters because we're interested in this example of starting a subscription sweater business. Not my best idea, but probably not my worst either. The question is: given the state of the economy, where do you buy your sweaters? Answer a, Amazon; b, Mass merchandisers; c, Clothing stores; d, Other online sites.
I'll give you a second to think about what's wrong with this question. There are a bunch of issues here. One is the leading opening, "given the state of the economy," which primes people to think about the economy, and that's not really relevant to our question. We're assuming they buy sweaters; we're not giving them the option of not buying sweaters. We're not giving them the option of choices that don't fit into these buckets. People may not know what a mass merchandiser is. And we're lumping different kinds of stores into answers with no examples. How could I make this question better? How about this one: where have you bought the most sweaters from in the past 12 months? Now we have bounded the question. We're asking the same thing, but over a particular time span. Our answers are now Amazon and Other online sites, right next to each other as comparisons; Physical mass merchandisers such as Costco, Walmart, etc.; Physical clothing stores such as Gap, Lands' End, etc. We're giving people the option of saying they haven't bought sweaters in the last 12 months, and we're letting them choose other options that might work for them. This is a better question because we've now specified exactly what we're asking. It's an improved version, maybe not perfect, but better. Here's another kind of question I see fairly often, a rating scale: "It is hard to find the right sweater. Rate how much you comparison shop before buying a sweater: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10." What's wrong with this? Again, we're being leading. We're saying it's hard to find the right sweater, which makes people feel bad if they don't comparison shop, so they'll shade their answers. And it's hard to know what the question is asking. What does it mean to rate how much you comparison shop?
A one-to-ten scale is a very large range, and that can be a problem because people engage in endpoint avoidance: they don't like to answer one or ten. They'll end up reserving one for a sweater experience that involves them being stabbed and ten for a sweater experience that involves literal angels singing their praises, so they'll tend not to answer one or ten. Also, it's not clear what one and ten mean here. What does it mean to be a one? What does it mean to be a ten? What's better, what's best? A better way to ask this would be with what's called a Likert scale: how often do you comparison shop before buying a sweater? One, never; two, rarely; three, sometimes; four, most of the time; five, all the time. That kind of labeled agree/disagree-style scale with that range is a Likert scale, and it's a much better way of getting the answer. You'll notice we also have a middle point here. It's very important for a scale to have a middle point; three, in this case "sometimes," is a neutral point, and you're trying to measure differences from that neutral point, which can be a really powerful way of asking these questions. Those are two better ways of asking common questions. The other thing I get asked about a lot is pricing in surveys, so take a look at this example: how much would you pay for a great new sweater delivered to you every month, $5, $10, $50, or $200? You'll notice there are a few problems. First, I haven't shown you what the sweater is; there's no real-world example, so it's not quite clear what sweater you're getting for this. But then think about the pricing. You may think part of the problem is that the prices are too spread out, and that's right, the pricing is kind of odd here. But the bigger issue is that you can't just ask people about pricing and expect an honest answer. Again, no one wants to seem like a sucker in these deals.
So no one is going to pick $200, even if they'd spend $200 on a sweater. As soon as you've shown them pricing, it becomes a negotiation. They're trying to figure out what price they should pay; they want to answer correctly; they're not showing you what they'd really spend. It's been shown that this kind of question is not useful for figuring out pricing at all. In fact, most easy methods for figuring out pricing don't really work. I'll give you one technique you can use without a lot of math, called monadic pricing. With monadic pricing, we avoid setting expectations for price, which is one of the big problems with the question we just saw: once you tell people it could be $5, it's very hard for them to be willing to spend $200. With monadic pricing, we ask a slightly improved question: how willing would you be to subscribe to a service for $20 a month that sends you a sweater every month like the one below? We can show pictures, so the question's clear. But what we're also doing is varying that $20 a month: everybody who takes the survey is randomly shown a different price. It might be $20 a month, $50 a month, or $100 a month. People aren't anchored, because each person is only shown one price, in this case that $20 price. This is a way you can start understanding how demand changes when half the people who take your survey see $20 and half see $50. How much interest do you lose as you move from $20 to $50? That's monadic pricing, and it works reasonably well. One approach you can't use is a pricing ladder; that does not work very well. In a pricing ladder, you ask people whether they'd spend $200 for the sweater. If they say yes, you ask whether they'd pay $500. If they say no to that, you ask whether they'd pay $250. Again, you have the anchoring problem, and it does not work well.
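The mechanics of a monadic test are simple enough to sketch in a few lines. This is just an illustrative sketch, not a standard tool: the price points, helper names, and example responses below are all made up for the example.

```python
import random
from collections import defaultdict

# Hypothetical price points for the sweater subscription; each respondent
# sees exactly one, so no one is anchored by the other prices.
PRICE_POINTS = [20, 50, 100]

def assign_price(respondent_id):
    """Randomly assign one price condition to a respondent."""
    return random.choice(PRICE_POINTS)

def demand_curve(responses):
    """responses: list of (price_shown, would_subscribe: bool) pairs.
    Returns the share of respondents at each price who said yes."""
    counts = defaultdict(lambda: [0, 0])  # price -> [yes_count, total]
    for price, yes in responses:
        counts[price][1] += 1
        if yes:
            counts[price][0] += 1
    return {price: yes / total for price, (yes, total) in sorted(counts.items())}

# Simulated responses: interest falls as the price shown rises.
fake = [(20, True), (20, False), (20, True), (50, True), (50, False), (100, False)]
print(demand_curve(fake))  # 20 -> 2/3 yes, 50 -> 1/2 yes, 100 -> 0 yes
```

Comparing the yes-rates across the randomized price groups is exactly the "how much interest do you lose moving from $20 to $50" question from the lecture.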
The Van Westendorp approach, which is one of my favorite names ever, asks four pricing questions: How much would this have to cost for you to think it's a bargain? At what price would this start to seem like too good a deal, as if something were wrong with it? At what price would this seem expensive? And at what price would you never buy it? That approach has problems that make it very hard to implement. It's also hard to just ask an open-ended question, where you'd say, how much would you pay? Again, that ends up being problematic. There is a better approach, and if you take the Wharton class on marketing analytics, you'll learn about conjoint analysis. It's fairly complicated; there's no easy way to do it. But conjoint is the best way to get pricing information; it's just more complicated than most people are willing to engage in. Even people in my class who know conjoint tend to use it only in rare circumstances. I'd ask you to consider conjoint if pricing is a big deal and feature sets are a big deal and you want to survey about them; that's the technique you'd want to use, but it's not something we'll go into hugely here. How do you know you're asking good questions, aside from using the techniques we just talked about? The most important thing to do is pretest. You need to give people the survey, and ideally you're going to sit down with a couple of people who will take the survey, ask them to take it while you're there, and ask them to narrate as they go. This will give you a sense, as they fill out questions, of whether they think a question is good or bad. Do they understand the question? How long is it taking? After that, you'll send the survey to a small sample of people and get responses back. What you look for in the results to tell you whether the survey is good or bad is interesting. The first thing you're looking for is variance. By variance I mean you want some people answering ten to a question and some people answering one.
The reason is that if everyone's telling you they like your product, you're not getting any insight into why some people might like it and some wouldn't; you're probably asking a question that's too leading. You want questions with multiple kinds of answers. You want to understand whether people are comfortable answering these sets of questions. Are they answering things in the right kind of way? Are questions making them nervous? Are they frustrated because they want to answer with a different option than what's available? And you want to look for issues of annoyance and bias. Like I told you before, one thing I learned about was asking gender questions. I did some fairly large surveys, and in the first ones, I asked whether people were male or female. I got some frustrated replies back saying, I don't identify as either of these genders; I don't want to take your survey. And that's fine; it's well within those people's rights. By adding a box that said "other" and letting people fill it in, my response rate actually went up. Even though very few people actually filled out that box, most people felt more comfortable that I was asking the right sort of questions without bias. You also want to figure out timing: how long is it taking people to do the survey, so you can give people an accurate estimate. When you're done, you need to think about your response rate. When you analyze the results of your survey, if less than 20% of the people you contacted responded, you need to think about biases in who responded. Over 20% is the general rule of thumb; that's not right for every field, but in entrepreneurship, above that I wouldn't worry too much. You can use those census-style questions you asked about gender, geographic area, income, and education to compare the sample that took your survey to the general population and figure out whether or not your sample is representative.
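That sample-versus-population comparison can be sketched as a chi-square goodness-of-fit check using only the standard library. The demographic counts and population shares below are invented for the example; for real work you'd likely reach for something like scipy.stats.chisquare and real census figures.

```python
def chi_square_stat(observed, expected_shares):
    """Goodness-of-fit statistic comparing your sample's demographic counts
    against population shares (e.g. from census data)."""
    total = sum(observed.values())
    stat = 0.0
    for group, count in observed.items():
        expected = expected_shares[group] * total
        stat += (count - expected) ** 2 / expected
    return stat

# Hypothetical numbers: 200 respondents vs. an assumed 50/50 population split.
sample = {"female": 130, "male": 70}
census = {"female": 0.5, "male": 0.5}
stat = chi_square_stat(sample, census)
# With 1 degree of freedom, anything above about 3.84 suggests the sample
# is not representative at the 5% level.
print(stat)  # 18.0, so this sample skews heavily female
```

A statistic this large doesn't invalidate your results, but it tells you to be careful generalizing them to the broader population.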
And you can just use census data, which is freely available, to see whether or not your sample is representative. The last thing I'd urge you to do, and again, there are other classes we teach at Wharton on this that you can find online, is to go beyond just reporting numbers, just reporting that 50% of the people said this and 20% of the people said that. Actually think about running a regression or ratio analysis to figure out what drives the various kinds of answers you're getting. Surveys are a really powerful tool, but doing one badly doesn't give you much information; it just annoys a lot of the people you're sending it to. Spend some time thinking about surveying. Surveys are a very powerful tool in the entrepreneur's toolkit. They're not always intuitive. Doing them badly takes very little time; doing them well takes a little bit more, but you can get really great results that are very powerful predictors of future customer behavior.