Practical Modeling — in Projects and in Life

— Part 4 of Modeling in Problem Solving —

Jonathan Kahan
10 min read · Oct 9, 2022

This article will make much more sense if you read installments one, two and three first!

Modeling and problem solving in practice

In the previous installments of this series we have explored all the different pieces of a “model of modeling”, and have seen how it all comes together in the Problem Solving Canvas.

The Problem Solving Canvas

We can finally run through an example of the Canvas in use, calling one more time on Priscilla the Professional Problem Solver as our guide. We'll then look at how to think about model validity, and close with some concluding remarks.

Let’s now follow Priscilla in her day-by-day work:

  • Day 1: Priscilla meets with the client, a local bank, and gets the brief: “Create a micro-loans app for teens to help the bank acquire the clients of the future”. The client’s team already has dozens of ideas for features and they are excited to start.
  • Days 2–4: Priscilla talks with different executives at the bank and pieces together more of the puzzle behind the brief. The CMO tells her that Gen-Z is expected to make up as much as 50% of the financial services market by 2030, and that right now the bank's brand awareness among teens is near zero. The COO tells her that savings accounts are currently serviced manually, and guess which part of the business takes the most internal effort? Savings accounts. The CEO tells her that the app initiative is part of a large-scale effort for the bank to raise an investment round at a high-tech valuation multiple, rather than a financial-services one. The bank needs to show investors it can create scalable financial solutions for the clients of the future.
    Some other stakeholders voice concerns about the bank's actual technical maturity and the feasibility of automating loans; others have ethical concerns about giving teenagers easy credit and getting them used to taking out loans.
Initial challenge mapping
  • Based on all this information, at the following week's steering committee meeting Priscilla proposes to recontextualize the brief, reformulating it as “how might we help the bank acquire the clients of the future”.
    Her implicit intellectual maneuver is the following: given (1) substantial obstacles downstream of our main challenge (technical, ethical and market concerns), and (2) the fact that our main challenge is not the only way to reach the goals that lie upstream from it (acquiring the clients of the future, fundraising at a higher multiple, etc.), it may be useful to move upstream and center our challenge on the more strategic goal of acquiring future customer segments.
    One way that Priscilla and her team have identified to achieve that goal is to develop a savings app for teens, rather than a lending one. This gives rise to a host of new sub-challenges which, after careful weighing, are deemed more feasible to tackle than the alternative.
Reformulated problem statement
  • The second week of the project is spent searching for the right frameworks to tackle the problem statement, while simultaneously collecting data that can tell us whether the framework works. In other words, Priscilla is trying to create an effective model to tackle the problem.
    As we can see in the canvas below, an intuitive place to start is by translating the “sub-challenges”, the obstacles standing in our way to a solution, into the key variables of our model. For example, we had identified teenagers' typical short-term thinking as an obstacle to the adoption of a savings app; so we know that our users' motivations for saving are one of the key variables in our problem.
    Priscilla thus starts circling through the hermeneutic loop by asking herself: which frameworks model motivation? After some thinking, she lands on BJ Fogg's behavioral model, which plots behavior as a function of motivation and ability, with a prompt needed to trigger action (a toy sketch of this model follows the figure below).
From obstacles in challenge mapping to variables to frameworks
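
To make the framework tangible, here is a toy sketch of the Fogg model in Python. The field names, the 0–1 scales and the activation threshold are illustrative assumptions of mine, not part of Fogg's model or of Priscilla's project.

```python
# Toy sketch of BJ Fogg's behavioral model (B = MAP: behavior happens when
# Motivation, Ability and a Prompt converge above an "action line").
# All names, scales and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Teen:
    motivation: float  # 0..1, e.g. desire to invest in one's future
    ability: float     # 0..1, e.g. disposable income, saving culture at home

def will_save(user: Teen, prompt_strength: float, action_line: float = 0.5) -> bool:
    """A teen saves when the prompt-triggered combination of motivation and
    ability crosses the action line."""
    return prompt_strength * user.motivation * user.ability >= action_line

# A mid-income teen with a saving culture, nudged by an in-app prompt:
print(will_save(Teen(motivation=0.8, ability=0.7), prompt_strength=1.0))  # True
# A teen with low motivation and little disposable income:
print(will_save(Teen(motivation=0.3, ability=0.4), prompt_strength=1.0))  # False
```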

It’s a good starting point, but will it actually help us model the problem effectively? To answer this question, Priscilla has to collect data to “fill in” her framework.
So she first completes a mapping of the other variables relevant to her selected framework, adding other possible motivations besides investing, and ability drivers such as level of income and a culture of saving in the family.

She then looks for both qualitative and quantitative data that can tell her something about these variables. For example, motivations are typically best identified in interviews and validated through surveys, whereas data on teenagers' levels of disposable income can be found in national census data.

Identifying variables for a framework and collecting data

The collected data is then used to model the actual problem based on our framework. In our case, this yields a hypothesis for validation, e.g. that for mid-income teens with some culture of saving, investing in the future (as opposed to buying yourself a bigger treat in a few weeks) can provide a strong motivation to save. This hypothesis can then be validated with tools ranging from a survey to a mockup to a full product.
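
As a rough illustration of this step, the sketch below scores a few teen segments with the same motivation-times-ability logic and surfaces the segment behind the hypothesis above. The segments and numbers are invented for the example; real values would come from the interviews, surveys and census data just mentioned.

```python
# Turning collected data into a hypothesis: rank segments by the framework's
# motivation x ability score. All figures are made up for illustration.

segments = [
    # (segment, motivation_to_save, ability_to_save), both on a 0..1 scale
    ("low-income, no saving culture",  0.4, 0.2),
    ("mid-income, saving culture",     0.8, 0.7),
    ("high-income, no saving culture", 0.5, 0.6),
]

ranked = sorted(segments, key=lambda s: s[1] * s[2], reverse=True)
name, motivation, ability = ranked[0]

print(f"Hypothesis to validate: '{name}' teens are the most likely adopters "
      f"of a savings app (score {motivation * ability:.2f})")
```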

Obviously this is but one limited example: a full project will contain dozens of loops like the one described, resulting in many interlocking and nested models.

Evaluating and innovating with models

How do we know if our model works? It's not as simple as checking whether our output hypothesis is validated. If our hypothesis is refuted, this may be due to issues at any of the problem-solving levels:

  • Our data might be flawed, limited or biased.
  • Our variables might not be the right ones, might be overcorrelated, or might be insufficient.
  • The way we put the model together from the variables might be wrong.
  • We may be misusing the framework.
  • We may have poorly formulated the problem statement.

Political scientists talk about governments having input legitimacy when they come to power by following the rule of law within a system recognized as fair, and output legitimacy when they manage to rule the country effectively and get things done.

Similarly, models can be evaluated in one input sense and in two output senses.
A “good model” has to work in the following ways:

  • In an input sense, it needs to follow the structure of a logical argument by having validity, i.e. its consequences need to follow from its premises.
    If we realize our model makes invalid arguments, we can retrace our steps to the main framework and rethink how we are breaking it down into sub-frameworks and variables: does each of our steps necessarily follow? Are our logical subdivisions MECE? How do we know that there is a causal relationship between two entities?
  • In a primary, “downstream facing” output sense, the results, explanations or predictions it produces need to be sufficiently sound and complete, or, as the concepts are called in data science, have precision and recall: the things our model flags should actually be true (precision), and it should catch all the things that are actually true (recall). A short sketch of these two measures follows this list.
    If we think our model is lacking in accuracy, we may have been using a small dataset or, more qualitatively, relying on anecdotal evidence. Trying to think of more examples may, in the next iteration of the hermeneutic loop, yield a more solid model.
  • In a secondary, “upstream facing” output sense, our model has to have problem-solving power: it has to advance our understanding of the problem statement as we have formulated it and get us closer to a solution. In other words, if at the beginning of our process the possible solutions were nearly infinite, how much has our model restricted the space of possibilities in which our solutions live?
    This is very hard to evaluate objectively, but it can help to go back to the problem statement and even to our motivation for solving the problem in the first place. Does our model point to responses to our initial “how might we” question? Does it at least rule out some possibilities?
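
For readers less familiar with the data-science terms, here is a minimal sketch of how precision and recall are computed, using invented labels for a binary question such as “will this teen save?”.

```python
# Precision and recall on made-up binary labels.

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # what actually happened
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]  # what the model predicted

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # of everything the model flagged, how much was right
recall = tp / (tp + fn)     # of everything actually true, how much the model found

print(f"precision={precision:.2f}, recall={recall:.2f}")  # 0.80, 0.80
```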

It is also interesting to note that the required level of “goodness” of a model is not always the same. A “medical grade” model needs to be extremely precise, and academia dictates standard significance thresholds that models for academic publishing have to uphold. In the business world, things tend to be more flexible, and that's ok:

it is often preferable to spend two weeks devising a plan that is 80% likely to be right than to spend two years getting to 98%.

So we have seen some ways in which, by iterating on different levels of the canvas, we might create better models. But this process should not be thought of as mere error-correction. Rather, as hinted at before, this is exactly where innovation lies.

Every level of the canvas can be iterated and improved on: datasets can be changed or enriched, variables can be added, removed or reengineered, models can be put together differently, frameworks can be replaced and problem statements can be reframed.

Sometimes better datasets create innovation, but it will tend to be incremental. Large-scale, disruptive innovation tends to come from using completely different frameworks to think about our problem, or even from changing the questions we ask of the evidence.

Possible “movements” in each problem solving space

Concluding thoughts

In this series of articles we’ve been trying to get to the bottom of the relationship between data and frameworks. We learned that our understanding of the world is created by iteratively intertwining the two in what we called the hermeneutic circle of problem solving, giving rise to models that, despite being flawed, give us our best shot at understanding the world around us.

We have learned that, while it is important to rely on good data, we often forget just how important frameworks (aka mental models) are. Looking at reality through a large number of frameworks is what helps us give data its meaning rather than seeing them as a disconnected puzzle; and looking at reality through different mental models simultaneously gets us closer to the “truth” as an intersection of independent lies.

What we might add here is that this is not just true of problem solving, but of life in general.

I once heard someone say that “all old European churches look the same”. For people who have acquired the right mental models, e.g. a typology of Western architecture, it is easy to point to the Romanesque and Gothic elements within a cathedral; one might even be able to associate a particular pulpit specifically with Flamboyant or Perpendicular Gothic, and enjoy the surprise of not being able to categorize a Manueline portal into any Gothic style one knows, thus admiring that style's uniqueness.

Further lenses may yield even deeper views of reality. A Marxist, feminist or psychoanalytic interpretation, for example, can turn the stones and bricks of a church into something much bigger than themselves: under the right lens, they become symbols of broad historical trends or of innate impulses in human nature, such as the oppression of peasants during the Ancien Régime, patriarchy, a ruler's will to power or his Oedipus complex. Those initiated in engineering may marvel at the technical feat of building an unsupported dome or at the clever configuration of buttresses. And of course, for the believer, stones and bricks can have true spiritual significance. While it's easy to look at each of these as an over-interpretation, as reading too much into something quite simple, “the intersection of independent lies” provided by different frameworks points to an ultimate truth about how humanity as a whole looks at reality; even more importantly, it makes a church visit a much more enjoyable, thickly textured experience, to be savored in each of its layers.

The same person, on the other hand, may be completely ignorant of the mental models required to give meaning to a walk in the forest. The uninitiated will just see a bunch of trees, missing all the depth that is apparent to those versed in botany, biology, poetry, literature and so on.

As we have seen several times throughout this article, we may all have access to mostly the same data, but it's our frameworks that make the difference.

To those who have acquired the right frameworks, every experience in life (including, of course, problem solving) presents itself as an intricate web of meaning woven into the fabric of reality, to be explored and savored, pulling back the curtain on each layer of meaning and level of analysis.

Thank you for reading so far, please don’t hesitate to reach out with comments, questions and feedback.

This is Part 4 of a series on Modeling in Problem Solving. A Part 5, illustrating some useful modeling patterns, may be coming, stay tuned!
