PROCESS FOR BUILDING A COHORT REPORT
In many ways, the process for pulling cohort data is very similar to the way data is pulled for FashioningChange's Daily Reports. The difference for cohort reports is that for every value we're interested in, we look at it for only a subset of users. The cohort reporting tools we've established start from a particular week, pull out the cohort of users who signed up during that week, and then step through each week after that point, calculating the metrics for that group of users only. The tools then go back to the beginning, move forward one week, pull out the new cohort, and repeat.
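Here's a minimal sketch of that loop in Python. Everything in it is an assumption for illustration, not our actual code: users are represented as dicts with a `signup_date`, and `metric_for` stands in for whatever metric function your own data layer provides.

```python
from datetime import timedelta

def weekly_cohort_report(users, start, num_weeks, metric_for):
    """Build {cohort_week_start: [metric value for each following week]}.

    Assumes each user is a dict with a 'signup_date' (a datetime.date),
    and metric_for(cohort, week_start, week_end) computes one metric
    for that cohort within that week.
    """
    report = {}
    for i in range(num_weeks):
        cohort_start = start + timedelta(weeks=i)
        cohort_end = cohort_start + timedelta(weeks=1)
        # Pull out the cohort: everyone who signed up during this week.
        cohort = [u for u in users
                  if cohort_start <= u["signup_date"] < cohort_end]
        # Step through each week from signup onward, computing the
        # metric for this group of users only.
        report[cohort_start] = [
            metric_for(cohort,
                       start + timedelta(weeks=w),
                       start + timedelta(weeks=w + 1))
            for w in range(i, num_weeks)
        ]
    return report
```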
For example, the cohort report for customer retention is fairly simple. Our systems record each day that a user visits, so all the cohort report needs to do is check, for each cohort and each week, which users within the cohort visited during that week. The process is pretty straightforward once you've done it a few times. And, as with Daily Reports, you don't need to be technical to establish your own cohort reports, because tools like KissMetrics automate this process for you.
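Continuing the sketch above with the same assumed user shape, a retention metric only has to check each user's recorded visit days. The `visit_dates` field is again an assumption, standing in for however your system stores per-day visits.

```python
def weekly_retention(cohort, week_start, week_end):
    # Assumes each user dict carries 'visit_dates': the days on which
    # that user visited, matching the per-day visit records described above.
    active = sum(
        1 for u in cohort
        if any(week_start <= d < week_end for d in u["visit_dates"])
    )
    return active / len(cohort) if cohort else 0.0

# Plugging it into the loop from the previous sketch:
# report = weekly_cohort_report(users, date(2012, 1, 2), 12, weekly_retention)
```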
INCLUDING A/B TESTS IN COHORT REPORTS
An important element of any A/B test is determining the metric you're interested in. This metric can be as simple and direct as a click-through rate on a particular link or button. What a cohort report allows you to do is investigate the long-term impact of that metric on a variable like retention rate after six weeks. The details of implementing an A/B test vary with the experiment being run. Some tests are very complicated and require deep changes to the structure of the application, while others are as simple as testing a different set of wording or a different color and can be done without any technical changes using a tool like Optimizely.

What makes setting up an A/B test within a cohort report unique is that, in order to test the long-term impact within a cohort, the group any particular user is in needs to be recorded. We set ourselves up to do this by including the experiment group within a hidden set of 'preferences' which we store in association with each user. This allows us to generate cohorts based not only on time of signup, but also on experiment group.
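As a rough sketch of that bookkeeping (the experiment name, field names, and dict-based user records are assumptions, not our actual schema), assignment writes the group into the hidden preferences, and cohort generation can then slice on it:

```python
import random

EXPERIMENT = "signup_button_copy"  # hypothetical experiment name
VARIANTS = ["control", "variant"]

def assign_experiment_group(user):
    # Record the group in a hidden 'preferences' dict stored with the
    # user, so the assignment survives for long-term cohort analysis.
    prefs = user.setdefault("preferences", {})
    if EXPERIMENT not in prefs:
        prefs[EXPERIMENT] = random.choice(VARIANTS)
    return prefs[EXPERIMENT]

def experiment_slice(cohort, variant):
    # Narrow a signup-week cohort to a single experiment group, so the
    # same weekly metrics can be compared across groups.
    return [u for u in cohort
            if u.get("preferences", {}).get(EXPERIMENT) == variant]
```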
STATISTICAL SIGNIFICANCE
I have a bad habit of saying "data doesn't lie." I say it's a bad habit because if an experiment doesn't have statistically significant results, the data can send even the smartest of people in the wrong direction. We will typically run an experiment until we either see significance, or we have several thousand data points with no observed difference, at which point we conclude the experiment has no impact. Whatever metric you're measuring, you need to compare your experiment groups using a statistical test. We typically use a chi-squared test and look for a p-value below 0.05, which means that if there were really no difference between the groups, we would see a gap this large less than 5% of the time. The more data you have, the smaller the difference can be and still be determined to be real.
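For the chi-squared comparison itself, SciPy's `chi2_contingency` does the work; here's a small example where the retention counts are invented purely for illustration:

```python
from scipy.stats import chi2_contingency

# Retained vs. churned counts after six weeks for each group; the
# numbers below are made up for illustration only.
observed = [
    [120, 880],  # control: retained, churned
    [155, 845],  # variant: retained, churned
]

chi2, p_value, dof, expected = chi2_contingency(observed)
if p_value < 0.05:
    print(f"p = {p_value:.4f}: the difference is unlikely to be chance alone")
else:
    print(f"p = {p_value:.4f}: keep collecting data, or call it no effect")
```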
What tools do you like to use to build cohort reports? Share them below.
**Special thanks to FashioningChange Co-Founder and CTO, Kevin Ball, for sharing the insights that went into this blog post.**