The Hidden Fabric of Knowledge

— Part 1 of Modeling in Problem Solving —

Jonathan Kahan
15 min read · Aug 7, 2022

Introduction

I have spent most of my working career consulting in one capacity or another, having worked on approximately forty projects. I started my career in marketing, so my first projects were in the marketing strategy and branding space; I then worked at a traditional management consulting firm on strategy, innovation and operations projects; and finally at a global design firm, where I mostly focus on product strategy, innovation and go-to-market.

In all of these years doing consulting projects, I’ve identified several interesting patterns:

  • Good projects don’t start with good data, but with good frameworks or theories. The data follows. Some consulting disciplines acknowledge this (management consulting), others less so (design thinking), falling prey to empiricism (more on this later).
  • Good projects are those in which the main framework we end up using is NOT the one we started with, but an elaboration, an iteration. In the very best projects, the evidence is observed through different, opposing frameworks, before drawing conclusions.
  • So what is a good framework? A good framework is one that is specific enough to enable me to say something interesting about a given situation, but not so specific as to “overfit” and lose any wider applicability. A great, classic example is the endlessly popular BCG matrix. Additionally, I’ve found that taking a framework from one discipline and applying it in a different context often yields interesting, unexpected results. For example, while a value stream map is typically considered an operations framework, its use in service design can both add important layers of information that are missing in traditional blueprints and systematize the way information is visualized.
  • Consulting projects are powered by rational thought. Thought seems like something incredibly abstract and shapeless, but I have found that most of the thinking we do in problem solving is of one of three kinds: top-down/breakdown/deduction; bottom-up/clustering/induction; and lateral/metaphorical/parallel thinking. This will strike some as a tautology, and I don’t want to claim it is fully exhaustive. But as a matter of practice, when I’m stuck on a concept and don’t know how to proceed, the spatial metaphor helps me move on: which way should my thought move? Should it move up, down, or to the side?
  • Across subjects and sectors, there are patterns in reality that, if recognized, can be used as heuristics hinting at how to solve the problem: aggregations of many independent, random events will tend to give you a normal distribution; if the events are not independent but build on top of each other or have feedback loops, you’ll get a long tail and the Pareto principle applies (a quick simulation of this contrast follows this list). Any decision taken in the context of limited resources can be modeled as effort vs. reward, whether it’s us prioritizing an action or a user evaluating whether to download an app. If, with limited resources, you are looking at multiple actors with different agendas, you’ll likely run into patterns like the tragedy of the commons, prisoner’s dilemmas, etc.
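
To make that distribution heuristic concrete, here is a minimal simulation sketch in Python (the parameters are invented for illustration): an additive process, whose totals tend toward a normal distribution, next to a multiplicative process, whose totals come out long-tailed and Pareto-like.

```python
import numpy as np

rng = np.random.default_rng(42)

# Additive process: many independent shocks summed together.
# By the central limit theorem the totals look roughly normal.
additive = rng.uniform(0, 1, size=(10_000, 50)).sum(axis=1)

# Multiplicative process: shocks compound on top of each other.
# The totals are log-normal, i.e. long-tailed, and the Pareto principle
# shows up: a small share of cases accounts for most of the total.
multiplicative = np.exp(rng.normal(0, 0.3, size=(10_000, 50)).sum(axis=1))

for name, sample in [("additive", additive), ("multiplicative", multiplicative)]:
    top_share = np.sort(sample)[-2_000:].sum() / sample.sum()  # share held by top 20%
    print(f"{name:>14}: mean={sample.mean():7.1f}  median={np.median(sample):7.1f}  "
          f"top 20% hold {top_share:.0%} of the total")
```

In the additive case the mean and median sit close together and the top 20% hold barely more than 20% of the total; in the multiplicative case the familiar skew appears.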

In order to systematize these insights, I started working on a set of principles that capture what I have learned about the relationship between data, frameworks and thought. This very practical goal of systematizing the way we do consulting projects led me to explore existing theories on how knowledge is created, from Popper’s epistemology to Gadamer’s hermeneutics. Along the way, I collected hundreds of useful frameworks with varying degrees of applicability, and thought about ways to classify them.

What follows is the current state of the project. There are many open questions and loose threads, and I appreciate any feedback.

Design Thinking and the empiricist trap

Priscilla is a Professional Problem Solver. The client, an international bank, tasks Priscilla and her team with creating the ideal account-opening journey for their private customers, one that maximizes customer satisfaction and minimizes costs for the bank.

Theoretically, Priscilla has a number of tools and frameworks at her disposal to start tackling the problem: from value stream maps to theory of constraints, from Cynefin to systems engineering.

Realistically, the recipient of this brief will most likely be a design thinking and service design practitioner. As such, Priscilla and her team will immediately jump to interviewing customers. This is because the first phase of the design thinking framework, as traditionally visualized, is “empathize”.

A classic formulation of the Design Thinking methodology

The assumption is that one comes to a problem-solving project with an empty, clear mind and starts by empathizing with the research subjects. This is problematic for three reasons:

  • The empiricist approach is false: from a purely descriptive perspective, we never actually simply start with data: any project is approached with pre-existing biases and mental models. Whatever knowledge I have before the project starts will shape the way I think about it, starting from data collection, feature engineering, all the way to ideation and recommendations.
  • The empiricist approach is biased: in a more normative sense, if empathizing/data collection is the first thing we do, we will inevitably fall prey to availability bias and just work with whatever dataset we have available. The dataset will likely display sampling bias and will have been sliced in a way that doesn’t necessarily serve our project goal. For example, Priscilla may assume that level of income is an important variable, and look to survey people in different income buckets. A non-empiricist approach would have revealed that other variables should be prioritized instead.
  • The empiricist approach is incremental: to paraphrase Henry Ford’s famous saying, if we start by listening to clients, all we’ll hear is that they want faster horses. If Priscilla starts by interviewing customers, she is likely to hear that people want faster service, fewer bugs, a wider variety of options. In short, no one will say anything truly new. This approach can be useful and should be used in many contexts, but it can quickly turn into a liability if what we are after is true innovation.

This is not to say that data and observation are not important, nor that we should force-fit our pre-existing models of the world onto empirical data (the golden hammer fallacy). Rather:

as a matter of observation, whatever intellectual or physical instrument we use to observe reality is the result of our previous learning; and while in many cases the “lenses” we use to look at reality are invisible to us, being able to see them and switch them at will is the key to de-biasing our observation and to solving problems in truly novel ways¹.

I am picking on Design Thinking here, but the empiricist flaw plagues other worlds in a similar way:

  • In the social sciences it shows up as “p-hacking”, mining the data for whatever pattern happens to reach significance, and it is considered a cardinal sin.
  • In business, the “data-driven organization” orthodoxy can easily translate into a paradigm where all kinds of data are collected and analyzed without a clear strategic framework determining what kind of data we care about, which data is a proxy for which variable, etc.
  • Much of machine learning as a discipline takes datasets as a given and extracts insight in a bottom-up way. This can sometimes be disconnected from essential higher-level, contextual knowledge. In 2008 Chris Anderson wrote an article titled “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”. While some of the wording may itself be a bit dated, the principle stands today more than ever, especially for disciplines like deep learning: modeling is not done in a top-down, theory-driven way, but in a bottom-up, often black-box way.
  • More generally, it shapes the way we think about solving our day-to-day problems. Too often we think the answer to our issues will come if only we collect more information about the problem, when in fact this is often just a way to put off the actual work on a solution².

Data is great, but it’s never first. First are always mental models.

A more effective problem solving method has to overcome empiricism and take a good look at our thought process before we encounter the project.

So how can this be done?

Starting with the a-priori

To embrace a non-empiricist approach, we need to change the way we think about problem solving.

The argument that innovation should start “inside out”, by theorizing rather than by collecting data has been made many times before, most importantly in a scientific context by Karl Popper, who argued that knowledge is created by proposing conjectures based on existing theories of how the world works, and “falsifying” them when we have better explanations. In the words of David Deutsch:

“We never know any data before interpreting it through theories. All observations are, as Popper put it, theory-laden, and hence fallible, as all our theories are.” — David Deutsch, The Beginning of Infinity

In the design and business spaces, this approach has been championed among others by Roberto Verganti, whose work discusses at length how significant innovation comes from shifts in meaning rather than outside-in observation.

In order to root a problem-solving approach in theory rather than observation, we need to change the way we look at the problem in two key ways:

The whole before the part: if we jump straight to looking at the trees, we can easily miss the forest. The logical place to start when solving a problem is to ask ourselves “which similar problems have I solved before?” or “what category of problems does this problem belong to?”. For example, if my brief is to help Acme Inc. increase its profit by 20% over the next 4 years, the first thing a good consultant will instinctively do is recognize this as a specific instance of a profitability problem, even before knowing anything about Acme Inc. specifically.

The a-priori before the empirical³: this is the core of what Popper means by “theory-laden”, and should be taken as both a descriptive and a normative claim: there is no observation of the empirical world that is not charged with our biases and filtered through our mental models; and conversely, being conscious of the mental models and biases we are coming to a problem with ensures more rigorous problem solving.

Introducing the Problem Solving Map: from the part to the whole, from the empirical to the a-priori

Problem solving as recursive reinterpretation

Above is a depiction of the Problem Solving Map (PSM), a tool we introduce here to help us navigate the world of problem solving. The plane of the Problem Solving Map ranges from a-priori to a-posteriori on the x axis, and from parts to wholes on the y axis. These delimit what we can call the project space, where problems, solutions and problem-solving activities come together. It is an idealized system which contains all the entities, relationships and agents relating to the problem, its potential solution, and the problem-solving team itself.

The project space is an abstract entity that has real impact on the real world, which it intersects on the right side. The area of the real world intersected by the project space contains all the real-world elements that constitute the problem and have the potential of being part of the solution. The left side of the PSM, on the other hand, contains all the a-priori knowledge and tools that are not a part of the problem itself, but are brought in by the problem-solver.

On the PSM, we can identify four sub-spaces, plus the problem statement that frames them:

  • Framing the chart is the problem statement, or in other words the intention with which we approach the problem. It defines the placement of the chart itself, determining which macro-phenomenon we consider our “whole” in need of explanation, prediction or solution.
  • The space of a-prioris, or the space of frameworks, which contains everything I know before the start of the project, from the most generic ideas about the world (like the fact that a large number of uncorrelated occurrences will tend to be normally distributed), to very specific domain knowledge (e.g. the revenues of the top three players in the chrome-plated steel bars market), as well as any cognitive biases I may be carrying with me.
  • The space of representation, where my framework, operationalized by variables, encounters real-world data. This is the space of modeling (a lot more on this later).
  • The space of observation, which contains my data, an intentional abstraction of a real-world observation, as well as my variables, a further abstraction or operationalization of my data.
  • And finally, the actual world, where we actually observe and build stuff.

We have already seen that problem solving starts with frameworks. The question now is, how do a-priori frameworks come to interact with real world data to create project-relevant knowledge?

Here the philosophical school of hermeneutics offers an interesting direction.

The hermeneutic circle

According to hermeneutics, meaning is extracted from a text by the subjective process of integrating the part (the specific passage) into the whole (what we know about the text as a whole, the author and their time, etc.), and conversely of recontextualizing the whole in light of the part.

This dialectic applies way beyond texts, arguably to all forms of knowledge, including the knowledge-acquisition part of problem solving. In a non-empiricist fashion, we start with our pre-existing ideas about the world, our biases, our consulting toolkits and frameworks, our domain knowledge. Through these lenses, we look at empirical data acquired from our research and we start building knowledge about the project itself.

The hermeneutic loop represented on the Problem Solving Map

To better understand how the hermeneutic circle works in problem solving, let’s return to our friend Priscilla the Professional Problem Solver. She may start a product strategy project for a global healthcare client with the idea that a secret future app needs to cater to a certain audience, and she may want to use user personas as a framework to model the problem.

Priscilla’s (hopefully conscious) decision to use personas immediately dictates some key variables in her project: demographics will play a part (age group, gender, city, etc.), as will some behavioral elements (shopping habits, number of apps downloaded, etc.). She will then proceed to contextualize her framework with real-world data: she will collect data based on the variables dictated by her framework and observe it through that lens (e.g. demographic vs. behavioral data). She will then zoom out again and integrate her data into her framework and the project as a whole, asking: does the result make sense? Does it help me create good explanations and predictions about people’s behavior, the kind our secret future app needs as its solid foundation?

It turns out that it doesn’t. Priscilla’s personas end up mainly just describing themselves and (let’s assume) really don’t tell us much about how we should be building our app. What happens then? Priscilla may do one of two things:

  • Either she intervenes at the level of variables: she decides that, while personas remain a useful framework, she has to scrap the demographic variables and shift from using the number of apps downloaded as a proxy for digital literacy to formulating a questionnaire and asking the users about their habits directly (a rough sketch of this kind of variable-level check follows after this list).
  • Or she can be more radical and intervene at the level of the framework: Personas may not be the way to go after all; she may want to start with mindsets instead, or reframe the problem completely, from user-centered to market-centered, and start with a market sizing and segmentation. If so, she will then restart the circle and contextualize this new framework, let’s say mindsets, with new data, for example the number and type of users of a competing app.
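
Below is a rough sketch, in Python with entirely synthetic data, of the kind of variable-level check described in the first option above: build “personas” by clustering on one set of variables, then ask how much of the behavior we actually care about those clusters explain. The column names, the target behavior and the clustering choice are illustrative assumptions, not Priscilla’s actual method.

```python
# A rough sketch with synthetic data: column names, the target behavior and the
# clustering choice are placeholder assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 500
users = pd.DataFrame({
    "age": rng.integers(18, 75, n),
    "income": rng.lognormal(10, 0.5, n),
    "apps_downloaded": rng.poisson(12, n),   # proxy for digital literacy
    "weekly_sessions": rng.poisson(20, n),   # the behavior the future app cares about
})

def variance_explained(feature_cols, k=4):
    """Cluster users ('personas') on the chosen variables, then measure how much
    of the variance in the target behavior the clusters account for (R^2-style)."""
    X = (users[feature_cols] - users[feature_cols].mean()) / users[feature_cols].std()
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    y = users["weekly_sessions"]
    y_hat = y.groupby(labels).transform("mean")  # predict each user by their cluster's mean
    return 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Personas built on demographics vs. personas built on a behavioral proxy:
# if neither explains the target behavior, revisit the variables, or the framework.
print("demographic personas:", round(variance_explained(["age", "income"]), 3))
print("behavioral personas :", round(variance_explained(["apps_downloaded"]), 3))
```

With random data like this, both scores come out near zero: exactly the kind of signal that would push Priscilla to change her variables, or to question the personas framework altogether.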

Multiple concatenated loops

Let’s take this model one step further. What if even replacing personas with a new framework such as mindsets didn’t yield useful results? Should Priscilla conclude that the problem statement itself is poorly formulated and rephrase the problem as a whole?

As every consultant knows, this is indeed often the case. More often than not, the problem we receive as a brief is not really the thing the client has to solve in order to achieve the goals they have in mind. Once we look at some key variables of the problem through the lenses of a couple of frameworks and realize that something’s not right, we often suggest reformulating the problem statement.

So it becomes clear that our loop does not bind together only data and frameworks: we can actually identify several loops that tie different wholes to different parts.

Wholes affect parts in a top-down motion:

  • How I formulate the problem affects everything else.
  • The framework I choose affects how I pick the variables and engineer them, how I sample the data, and how I slice and dice it.
  • The way I frame the variables affects how I end up tagging or coding my dataset.

Parts affect wholes in a bottom-up motion:

  • A mismatch between data and labels will make me question my variables
  • The realization that my model requires additional variables, or that some variables have to be dropped (e.g. because they are strongly correlated with each other) will make me question my model (a small sketch of this kind of check follows after this list)
  • If my model underperforms in terms of predictive or explanatory power, I will question my framework (more on this later)
  • And as we said, if a few frameworks don’t withstand contact with real-world data, I might be out to solve the wrong problem.
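
As a hypothetical illustration of one of these bottom-up checks, the snippet below (synthetic data, invented column names) flags pairs of variables so strongly correlated that one of them is probably redundant, the kind of finding that sends you back up the loop to question the model:

```python
# Hypothetical bottom-up check: strongly correlated variables suggest the model should change.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"apps_downloaded": rng.poisson(12, n)})
# Make one variable nearly redundant with another, plus an unrelated one.
df["weekly_sessions"] = df["apps_downloaded"] * 2 + rng.normal(0, 2, n)
df["age"] = rng.integers(18, 75, n)

corr = df.corr()
redundant = [(a, b) for a in corr.columns for b in corr.columns
             if a < b and abs(corr.loc[a, b]) > 0.8]
print("highly correlated pairs worth questioning:", redundant)
```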

The PSM below summarizes some of these interlocking loops:

Interlocking reinterpretation loops in problem solving

Let’s then summarize the main points discussed. As we have seen, according to this hermeneutic model, problem solving is a process of recursive reinterpretation, in which we create loops between frameworks, variables and data to get closer and closer to a correct (read: useful — more on this later) model of the problem. Conversely:

The cardinal sin of problem solving is taking the framework for granted and jumping straight to research: it’s a recipe for dull insights and biased, incremental solutions.

Naturally, there’s another iteration loop in problem solving, and that’s the one around solutions: once I have a satisfying model, I will want to prototype and test its solutions in the real world, and error-correct based on feedback. This second loop can be called “poietic”, the one in which we create things, as opposed to the first, which is explanatory or predictive. Most of what’s coming will focus on the first loop, but we will come back to the second later.

The full hermeneutic model of problem solving on the PSM

In the next installment, we’ll look more in detail at the main entity at the center of this model, namely… “model”. In particular, we’ll look at what models are, how they can be categorized, and how they interact with variables and data. Stay tuned!

This is the first installment in a four-part series on modeling in problem solving, or the relationship between data and frameworks.
Here is a quick navigator:

- Part 2: Model, Framework, Data
- Part 3: A Latticework of Mental Models
- Part 4: Practical Modeling — in Projects and in Life

Notes

¹ To further clarify this point, here are a couple of situations in which one would think empiricism is the norm.

  • A preliminary exploratory data analysis (“EDA”) is a common approach in many problems in which the dataset is a given. This is not wrong. However, it’s important to realize that when we are exploring our dataset, we are never doing so as a blank slate: depending on the discipline we come from, the techniques we have learned and the problem statement, we’ll either be looking at distributions and data types in given columns, or trying to identify behavioral patterns, etc. Increasing our awareness of the “lenses” through which we are looking at data is key to reducing bias and stimulating innovation.
  • Similarly, while frameworks such as Cynefin and OODA loops recommend an “empiricist”, “sensing-first” approach to novel, complex or chaotic situations, we need to realize that, again, this does not happen in a vacuum. The things we’ll be looking for in any sensing activity, or the set of possible pre-emptive actions to be implemented in chaotic, unpredictable situations, will depend on what we already know: what have we learned to observe? What is our repertoire of possible actions in this kind of situation? Where does our muscle memory take us? Hence the importance of having a solid “latticework of mental models”, even when modeling as an activity stays mostly implicit (more on this in installment 3).

² Management consulting is admittedly less prone to this empiricist tendency, but it has other shortcomings, as we’ll see later.

³ By a-priori I don’t necessarily mean prior to any kind of experience, like Kant’s categories. I mean concepts that precede our experience of the specific reality that will become the problem system.

[Edited on 24/08/22 to name and clarify aspects of the Problem Solving Map]
