The Scientific Method in Learning Initiatives, Part II

In the first part of this series, I made a very bold statement: to the extent that you aren’t using the scientific method as part of your learning work, you’re missing out on the power of learning to have a profound impact on business results. Using the scientific method in your efforts to design and deploy effective learning programs is the surest way to get superior results, but very few learning professionals see it that way.

In the example from the previous article, you saw how important it was to ask the right questions in order to get at the root cause of the business problem: a lack of product knowledge among underperforming salespeople. Then you had to figure out how to measure that knowledge in order to arrive at a very specific hypothesis: raising the product knowledge of underperforming salespeople by 30% will boost company sales by 15%. Now you’re ready to carry out the remaining steps of the scientific method:

Test the hypothesis through an experiment. This is the part where most business people fail to follow through. The bigger your company, the more important this step becomes, and the easier it is to execute. You’re going to develop a learning program to boost product knowledge among your salespeople, but how are you going to know whether it works, and even more importantly, how are you going to know that it will deliver the desired business results in terms of boosting sales? Right now it’s just a hypothesis, a guess, although hopefully a very educated guess thanks to all the background research you conducted. The idea here is to develop the learning program you think will achieve the desired results and then test it by conducting a true experiment. You’re not going to roll the program out to the entire sales team just yet, because you don’t know for sure that it’s going to work, and there’s nothing worse than rolling out a program company-wide and having it be a big flop. For this experiment to yield useful results, you’ve got to have a control group to set up a comparison: one group of underperforming salespeople will receive the product-knowledge learning intervention, and the remaining underperforming salespeople will not. This is the only way to definitively determine whether your learning program will deliver the expected results.
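To make that setup concrete, here is a minimal sketch in Python of one common way to form the two groups: random assignment. The salesperson IDs, the 50/50 split, and the function name are all illustrative assumptions, not anything prescribed above; the point is simply that chance, rather than managerial judgment, decides who receives the intervention.

```python
import random

def assign_groups(underperformers, seed=42):
    """Randomly split underperforming salespeople into a treatment group
    (receives the product-knowledge program) and a control group (does not),
    so any later difference between the groups can be attributed to the program."""
    random.seed(seed)                       # fixed seed keeps the assignment reproducible
    pool = list(underperformers)
    random.shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (treatment, control)

# Hypothetical IDs for 40 underperforming salespeople
underperformers = [f"sp_{i:02d}" for i in range(1, 41)]
treatment, control = assign_groups(underperformers)
print(f"{len(treatment)} in treatment, {len(control)} in control")
```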

Analyze the results and draw a conclusion. Using whatever tool you developed to accurately measure salespeople’s knowledge of the product, you’ll gather a new round of baseline data on everyone: those who will and those who will not receive the learning program. Then you’ll deliver the learning program to the targeted group of underperformers, who represent just a portion of the overall pool of underperforming salespeople. Immediately following the learning intervention, you’ll take another round of measurements. Theoretically, you should see little or no change in the scores of those who did not receive the intervention, and hopefully you’ll see a substantial increase in the product knowledge scores of those who participated in the new program. For the sake of illustration, let’s say the people who received the additional training saw an average boost in their product knowledge scores of 35%. That’s good news, because you were aiming for at least a 30% increase, so you’ve exceeded that target.
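As a rough illustration of that analysis, the sketch below computes the average percentage change in product-knowledge scores for each group, paired by salesperson. The scores, IDs, and 0–100 scale are made-up values chosen only so the arithmetic lands near the roughly 35% lift described above.

```python
def average_pct_change(before, after):
    """Average percentage change in scores, paired by salesperson ID."""
    changes = [(after[sp] - before[sp]) / before[sp] * 100 for sp in before]
    return sum(changes) / len(changes)

# Hypothetical pre/post product-knowledge scores on a 0-100 scale
treatment_before = {"sp_01": 60, "sp_02": 55, "sp_03": 48}
treatment_after  = {"sp_01": 81, "sp_02": 74, "sp_03": 65}
control_before   = {"sp_04": 58, "sp_05": 61, "sp_06": 52}
control_after    = {"sp_04": 59, "sp_05": 60, "sp_06": 52}

print(f"Treatment group lift: {average_pct_change(treatment_before, treatment_after):.1f}%")
print(f"Control group lift:   {average_pct_change(control_before, control_after):.1f}%")
```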

But the story doesn’t stop there, because you still need to see how this translates into actual product sales, which is what you really care about. The tricky part is not knowing how long it will take for the increased product knowledge to show up in product sales. You decide to track product sales by the week, and at the end of four weeks post-training, you take a look at the numbers. You see that product sales among the control group that didn’t receive the training have stayed flat, which is what you expected. For the group that received the training, product sales over those four weeks have increased by 25%. Good news! You conservatively estimated a 15% boost in sales, which is what the higher-ups wanted to see, and you’ve outstripped that expectation by 10 percentage points.
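The same kind of comparison works for the sales numbers. Here is a hypothetical sketch that compares each group’s average weekly sales over the four-week window against a pre-training baseline week; the baseline figure and weekly totals are invented purely so the result mirrors the 25% lift and flat control group described above.

```python
def sales_lift_vs_baseline(baseline_week, weekly_sales):
    """Percentage change of average weekly sales over the tracking window,
    relative to a pre-training baseline week."""
    avg_weekly = sum(weekly_sales) / len(weekly_sales)
    return (avg_weekly - baseline_week) / baseline_week * 100

# Hypothetical weekly unit sales for the four weeks after the training
treatment_weeks = [104, 116, 128, 152]   # pre-training baseline week: 100 units
control_weeks   = [99, 101, 100, 100]    # pre-training baseline week: 100 units

print(f"Treatment sales lift: {sales_lift_vs_baseline(100, treatment_weeks):.1f}%")
print(f"Control sales lift:   {sales_lift_vs_baseline(100, control_weeks):.1f}%")
```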

Communicate the results. Now you can go to your higher-ups and report on your experiment. Based on what you’ve seen, you can confidently say that rolling out this new learning program to all the underperforming salespeople will result in a 25% boost in product sales. Needless to say, everyone is pleased with these results.

What you’ve seen in this two-part series is how to leverage the scientific method to improve the business results of your learning efforts. I believe it’s absolutely essential for learning professionals to up their scientific game by engaging in this kind of controlled experimentation. I’m confident you and your company will appreciate the results.