[Data] = amazing, but merge[Data],[Context, Reasoning] = Impactful
Updated: Jun 11
For all those who did not get my pseudo-code blog title, my sincere apologies; I meant to say the following: data is amazing, but combining it with context and reasoning can make it impactful. To some, this might seem obvious after they have faced and navigated the shortcomings of either approach in isolation and benefited from combining them. To others, it might seem like a complex combo of contrarian approaches. When I was a new data science professional, I myself did not feel fully convinced about combining the former with the latter, but I am totally on board now, and I am here to explain why you should be too.
“The thing I have noticed is that when the anecdotes and the data disagree, the anecdotes are usually right. There is something wrong with the way that you are measuring it” - Jeff Bezos, Chairman & Founder - Amazon
Data exposes patterns but can't implicitly tell stories
Data is a great way to validate or invalidate assumptions. It drives discoveries and helps frame new hypotheses for investigation. Fact-based approaches reduce human bias in inferences, but they often fail to tell you the most important thing: the "Why".
No matter how hard we try to narrate patterns, data alone often fails to communicate an influential, impactful story that changes everything. The problem is never the data itself; it is always a source of truth, but sometimes an incomplete one. It can tell you the "What" and the "How" extremely well, but it may not be able to tell you the "Why".
Does that mean we should stop emphasizing data? Not at all. With recent advancements in AI, data collection and aggregation systems, it's easier than ever to double down on data. But don't forget that combining it with reasoning & anecdotes can often supply the puzzle's missing pieces.
At the very least, reasoning/anecdotes and data complement, reinforce and validate each other. If there are contradictions between the two, let's investigate; if they support each other, we can make decisions with a lot more confidence.
Enough said, time for some demonstration.
A product experiment that demonstrated contradictions between data and reasoning
Hypothesis: Design changes ('Post' variation vs. 'Pre' variation) can drive significantly more repeat views on a specific page that was not getting much user attention.
Data: In a Pre-vs-Post retrospective analysis, the 'Post' variation of an app page drew 400% more repeat views than the 'Pre' variation, even though the two designs were not substantially different.
Initial inference: The product lead affirms, "The 'Post' variation is far more successful than the 'Pre' variation."
Context + Reasoning: This happened only because the devs made two changes, not one, for this experiment. Besides giving the page a new design (the intended change), they had also changed the app's config so that the new default post-login page was the 'Post' variation. So the 400% lift in views was mostly due to this re-wiring and can't be attributed to a better design.
Corrective Action & New Inference: They then ran an apples-to-apples A/B test, serving both variations in parallel with the same config, and found that the 'Post' variation drew only 20% more repeat views than the 'Pre' variation. We still have a winner, but the results now align with ground reality and with what could reasonably be expected from app user behavior.
Insights: The team had 3 takeaways from this experiment, which were both joyful and impactful:
Experiment design needs to be carefully thought through & executed, with each person collaborating and sharing notes to raise team-wide awareness of the experiment.
Intuition is often right; if you see something unusual, at least an investigation of the data is worthwhile. If you were right, you made a measurement; if you were wrong, you made an important discovery.
To track the efficacy of a change in an experiment, the comparison needs to be apples to apples: the competing variations should differ by only the one change under test, else we will conflate the results and draw faulty conclusions.
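To make the apples-to-apples takeaway concrete, here is a minimal sketch of how a parallel A/B comparison could be evaluated with a two-proportion z-test. The function name and the traffic numbers are hypothetical illustrations, not figures from the experiment above:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an apples-to-apples A/B comparison.

    conv_a/conv_b: users with a repeat view in each arm; n_a/n_b: arm sizes.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference between arms)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split: 5,000 users per arm, 10% vs 12% repeat-view rate
z, p = two_proportion_z_test(conv_a=500, n_a=5000, conv_b=600, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Because both arms run in parallel under the same config, any significant difference can be attributed to the one design change under test rather than to re-wiring effects.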
Do economic incentives accelerate revenue growth for consumer companies?
Hypothesis: XYZ company believed that users are more likely to complete signups & specific product goals when offered incentives than when not.
Data: Data tracked between the date the incentive was offered and the following 2 weeks showed that incentives led to a significant increase in signup rate.
Initial inference: The marketing team affirms, "Incentives have helped accelerate signups, which sets us up for faster growth."
Context + Reasoning: The team did primary & secondary research on their industry to see how these one-time incentives change a consumer relationship over the long term. The realizations were surprising!
On probing the data, the team realized that it was mostly the people who were already planning to sign up who simply signed up faster. The other, less keen users either showed no change in signup behavior or signed up only to collect the incentive and then churned out soon after. So beyond shortening the average signup time from 2 weeks to 1 week among the keen users, incentives had no positive impact on the signup rate. The incremental signups were mostly low-quality signups who would churn out soon. On top of that, the team now had to pay for the incentives, which only eroded their profit margins.
Corrective Action & New Inference: The team dropped incentives from future campaigns, or switched to emotional rather than economic incentives to drive the right behavior at no cost for the marginal gains. Incentives did not increase sustainable signups!
Insights: The team had 2 takeaways from this experiment:
A holistic understanding of short-term & long-term effects has to be part of evaluating the success of any experiment. This takes time and goes well beyond a basic experiment analysis.
Besides just looking at the data to make inferences, it is increasingly important to dig deep for a qualitative & behavioral grasp of how changes/incentives affect user behavior, so we can decide whether the corrections we are driving are sustainable and impactful.
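The gap the team found between the raw signup lift and the sustainable one can be sketched numerically: adjust each cohort's signups by its retention rate before comparing. All cohort numbers and the helper name below are hypothetical illustrations, not the team's actual figures:

```python
def sustainable_signup_lift(baseline_signups, incentive_signups,
                            baseline_retention, incentive_retention,
                            cost_per_incentive=0.0):
    """Compare raw signup lift with a retention-adjusted ('sustainable') lift.

    Retention rates are the share of each cohort still active after the
    observation window (e.g. 8 weeks); churned signups don't count.
    """
    raw_lift = incentive_signups / baseline_signups - 1
    retained_base = baseline_signups * baseline_retention
    retained_inc = incentive_signups * incentive_retention
    sustainable_lift = retained_inc / retained_base - 1
    incentive_cost = incentive_signups * cost_per_incentive
    return raw_lift, sustainable_lift, incentive_cost

# Hypothetical cohorts: +40% raw signups, but the incentive cohort churns faster
raw, sustainable, cost = sustainable_signup_lift(
    baseline_signups=1000, incentive_signups=1400,
    baseline_retention=0.60, incentive_retention=0.43,
    cost_per_incentive=5.0)
print(f"raw lift: {raw:.0%}, sustainable lift: {sustainable:.1%}, cost: ${cost:,.0f}")
```

Under these assumed numbers, a headline 40% lift collapses to roughly zero once low-quality, soon-to-churn signups are excluded, while the incentive bill still has to be paid, which is exactly the pattern the team uncovered.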
Countless more examples exist that further strengthen the point. Getting context & qualitative insights has always been important to making a great decision. Headstrt wants to be the platform that empowers users to share & discover insights and turn them into symbiotic conversations.
Thanks so much for reading, and please help us spread the word.