Analysis has a fundamental problem, and that problem is us. Humans aren’t naturally good at it: cognitive biases are built into our thinking, and no one can avoid them. We excel at snap judgments, and those are usually right, but we struggle to evaluate evidence rationally and fairly, especially in qualitative research and analysis, where the thinking processes behind snap judgments don’t always apply. Government intelligence agencies have historically led the way in developing techniques to improve analysis and reduce bias, and those techniques address human thinking writ large, not any single application.

Intelligence analysis, be it geopolitical, military, or otherwise, and market research could essentially be considered the same craft applied to different subject matters. Both have the same goal: to inform a client (internal or external) and reduce uncertainty, enabling them to make better decisions. Both rely on imperfect information, often in fields that are not easily quantifiable, and both depend on analysts making judgment calls to interpret the available data and reach accurate conclusions despite gaps in the information.

Market research methods for gathering data are refined and discussed continually, yet the interpretation of that data is rarely mentioned. The intelligence community has decades of experience facing congressional hearings when it gets things wrong; market researchers often have only an angry client who won’t explain why they didn’t award the next contract. Both fields are influenced by cognitive biases that can, and do, shape the analysis and the conclusions drawn. These biases are usually neither intentional nor malicious; they are the work of cognitive shortcuts the human brain takes that are right most of the time. For more background on cognitive bias, David McRaney’s blog You Are Not So Smart is an excellent introduction, and Richards Heuer’s book Psychology of Intelligence Analysis (free online!) is an incredible, in-depth look at these issues.

One of the simplest techniques for combating cognitive bias is Analysis of Competing Hypotheses (ACH). Developed by Richards Heuer, it consists of a matrix with the possible hypotheses (or scenarios) across the top and each individual piece of evidence or information down the side. This can be done with 10 pieces of information or hundreds. The most important point is that each piece of evidence must be evaluated individually against each hypothesis and marked as “consistent,” “neutral,” or “inconsistent.” Working “across” rather than “down” the matrix pushes analysts to think critically about each piece of evidence rather than about each hypothesis. The method can generate a numeric score by tallying “inconsistent” ratings, but the analyst still needs to make a judgment call about the importance of each piece of information. Although it can be applied to both quantitative and qualitative research, the method itself is NOT a quantitative method. Ultimately, each hypothesis is evaluated by how much “inconsistent” evidence sits in its column, and that is weighed against the analyst’s prior judgments. ACH is best used to identify hypotheses that are problematic, not to diagnose which hypothesis is most likely.
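To make the mechanics concrete, here is a minimal sketch in Python of how an ACH matrix and its inconsistency tally might be represented. The hypotheses, evidence items, and ratings below are invented purely for illustration, not taken from any real analysis; in practice you could just as easily keep this in Excel, as we do later in this post.

```python
# Minimal ACH matrix sketch: hypotheses across the top, evidence down the
# side, each cell rated "C" (consistent), "N" (neutral), or "I" (inconsistent).
hypotheses = ["Full-scale production", "Pilot plant first", "License the process"]

# Each row: (evidence description, [rating against each hypothesis]).
# All items and ratings here are hypothetical, for illustration only.
evidence = [
    ("Company controls the raw materials",  ["C", "C", "N"]),
    ("Capital costs exceed current budget", ["I", "N", "C"]),
    ("In-house engineering team is small",  ["I", "N", "C"]),
    ("Regional demand forecast is strong",  ["C", "C", "C"]),
]

# Tally "inconsistent" ratings per hypothesis. The hypothesis with the most
# inconsistencies is the weakest candidate; a low tally does NOT prove a winner.
for col, hypothesis in enumerate(hypotheses):
    inconsistent = sum(1 for _, ratings in evidence if ratings[col] == "I")
    print(f"{hypothesis}: {inconsistent} inconsistent rating(s)")
```

The tally is the easy part; the value of ACH lives in the debate over each individual cell, which no script can do for you.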

[Figure: ACH Table]

ACH has three main benefits:
1. It creates an analytical paper trail. When challenged by a customer or supervisor, an analyst has a visual tool to help explain his reasoning, and if he turns out to be wrong, he can show why he reached the conclusion he did.
2. It facilitates discussion. ACH works best in a team environment, where each piece of evidence rated against a hypothesis is debated by multiple people.
3. It forces an analyst to confront his own biases. When a favored hypothesis has more inconsistencies than the others, the analyst must choose: double down on the analysis and give a thorough explanation of why he believes that hypothesis is right given the available evidence, or reconsider the judgments for bias and look at other possibilities.
Below is a simple example of an ACH matrix (built in Excel, since for individual users like me an expensive ACH software package isn’t worth the cost), loosely based on work we did for a chemical company interested in producing a new chemical from the by-products of another process at the same facility. Obviously, if we were going to share this with a client or use it internally, it would be much more thorough and detailed, but it serves to demonstrate the basic concept.

Our original hypothesis was that full-scale production would be the best course of action, since the company controlled the raw materials and the inputs would come at minimal cost. What an analysis like this shows is that too much weight was given to those factors while a multitude of other evidence went unconsidered, suggesting we were clinging too tightly to our initial impressions (a pattern known as anchoring bias).
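If you want the tally itself to reflect those judgments about importance, one possible extension of the earlier sketch is to attach an analyst-assigned weight to each piece of evidence, so a single favored factor can’t quietly dominate the score. The weights and items below are again invented for illustration, and assigning them is itself a judgment call subject to the same biases.

```python
# Extension of the earlier sketch: each evidence row carries an
# analyst-assigned weight. All weights and items are hypothetical.
hypotheses = ["Full-scale production", "Pilot plant first", "License the process"]

# Each row: (evidence description, weight, [rating against each hypothesis]).
weighted_evidence = [
    ("Company controls the raw materials",  2.0, ["C", "C", "N"]),
    ("Capital costs exceed current budget", 1.0, ["I", "N", "C"]),
    ("In-house engineering team is small",  1.0, ["I", "N", "C"]),
]

# Sum the weights of "inconsistent" cells per hypothesis instead of counting
# them, so heavily weighted evidence counts for more in the tally.
for col, hypothesis in enumerate(hypotheses):
    score = sum(w for _, w, ratings in weighted_evidence if ratings[col] == "I")
    print(f"{hypothesis}: weighted inconsistency score {score}")
```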

[Figure: Anchoring Bias]

And if you disagree with how we evaluated some piece of evidence in the matrix against a hypothesis, good. ACH worked just as it should: now you (and my boss) can question my basic reasoning abilities more precisely, and we can argue about the right things.

In the next post in this series, we’ll be discussing Linchpin Analysis, and how to tell if you’re asking the right questions. Please let us know if you have any questions or want to see anything covered in future blog posts.
