---
product_id: 301234708
title: "The Model Thinker: What You Need to Know to Make Data Work for You"
price: "Rp107399"
currency: IDR
in_stock: true
reviews_count: 8
url: https://www.desertcart.id/products/301234708-the-model-thinker-what-you-need-to-know-to-make
store_origin: ID
region: Indonesia
---

# The Model Thinker: What You Need to Know to Make Data Work for You

**Price:** Rp107399
**Availability:** ✅ In Stock

## Quick Answers

- **What is this?** The Model Thinker: What You Need to Know to Make Data Work for You
- **How much does it cost?** Rp107399 with free shipping
- **Is it available?** Yes, in stock and ready to ship
- **Where can I buy it?** [www.desertcart.id](https://www.desertcart.id/products/301234708-the-model-thinker-what-you-need-to-know-to-make)

## Best For

- Customers looking for quality international products

## Why This Product

- Free international shipping included
- Worldwide delivery with tracking
- 15-day hassle-free returns

## Description

The Model Thinker: What You Need to Know to Make Data Work for You (Audible Audio Edition), by Scott E. Page, narrated by Jamie Renell, published by Basic Books.


## Images

![The Model Thinker: What You Need to Know to Make Data Work for You - Image 1](https://m.media-amazon.com/images/I/71kY72y0HDL.jpg)

## Customer Reviews

### ⭐⭐⭐⭐⭐ Conceptual models are the key to understanding how a mathematically-described universe functions.
*by A***N on March 4, 2020*

I regard Scott E. Page's The Model Thinker as the singular 'must read' reference work and learning aid for anyone looking to master conceptual models of the world we live in. For the most part, these are dynamic models, most frequently mathematically described, which are essential to thinking about data: interactions, including conflict; strategies for resolving conflicts between individuals and their respective social groups; institutions; and, generally speaking, civilized society itself. A single work of this type cannot include every permutation of conceptual model, but Professor Page accomplishes a great deal in the 420-odd pages comprising this masterful work. As Professor Page observes early on, "we live in a time awash in information and data". At the same time, our innate abilities to comprehend the mental load that those data represent, and to respond appropriately, require us to acquire new cognitive skills before we can make any headway with the implications of those data. 
Professor Page identifies seven uses of models: (1) to reason, meaning to identify conditions and deduce logical implications; (2) to explain, meaning to provide testable explanations for empirical phenomena; (3) to design, meaning to choose features of institutions, policies, and rules of behavior so that particular models behave as intended; (4) to communicate, meaning to relate knowledge and understandings about the natural world and human-developed institutions as they interact with one another; (5) to act, meaning to guide policy choices and strategic actions toward predictable results; (6) to predict, meaning to make numerical and categorical statements about future and unknown phenomena, based upon historical data or extrapolation from current data, with trend lines projecting where we will end up at some point in the future; and finally, (7) to explore, meaning to investigate possibilities and hypotheticals, including counterfactuals. The models that Professor Page presents are not reality, but the functions they explore are found in reality if we look closely enough and are capable of understanding what they are telling us and where we are heading. They are simple, and yet sophisticated. They employ depictions of physics, psychology, and a great deal more. In 29 chapters, Professor Page limns the universe of conceptual models and their mathematical and graphical representations: how they work, and the logical inferences that can be drawn from each in terms of the outcomes to be expected when those models are employed to explain numerical, financial, organizational, and sociological phenomena. The mathematics describes an input-process-output scenario in which the models simulate natural processes that behave stochastically, as if governed by predictable mathematical rules. 
Professor Page repeatedly emphasizes that understanding phenomena, and the data that describe them, requires a knowledge of many models, each one pointing out a specific facet of the whole, and none of them completely describing what the whole actually looks like. In many respects, these models are counterintuitive, meaning that left to our own devices we might come to different decisions based upon criteria that fall well short of rational calculation. Nowadays, the science of behavioral economics has established, through extensive research, that decision-makers are in fact subject to cognitive biases that distort our perception of reality, importance, and value. We perceive and think in much the same way that our ancient forebears did, before formal learning took hold. We as a species are risk averse by a factor of roughly two relative to what we might gain from a prospective course of action. We act on the basis of unfounded assumptions and cognitive biases. This is why dynamic modeling becomes so important: it strips away the cognitive tricks that our minds play on us when we encounter new and unfamiliar situations. Using these models, we apply rule-based behaviors that have been time-tested to optimize our chances of succeeding in whatever endeavor we are engaged. Using dynamic modeling techniques, we also learn to adapt our behavior to the behaviors of others by applying principles of probability that let us calculate the likelihood that certain events will, or will not, occur. We learn to adjust our behaviors based upon what happened in the past, while understanding that our revised calculations are still educated guesswork. We also come to understand, as Professor Page teaches us, that different models lead to different outcomes: equilibrium, cycles, randomness, or complexity. 
If a rule-based model generates a random outcome, the rule itself serves only to alert the user that the outcome of the action taken is limited in its predictability; nevertheless, randomness itself can be studied, and the lessons applied, to rule out situations where precise calculations would be necessary to achieve the model user's objectives. Moreover, as Professor Page instructs us, some models that at first glance appear to generate randomness may actually be producing recurring patterns that emerge over an extended number of cycles. Now that we can use computers to simulate actions taken over time, the regularity of those secular outcomes can readily be seen. If a model produces an equilibrium, we know that certain actions will ultimately succeed and others will fail. Other models produce complexity, both internally and through their interaction with other actors above and below the level at which the model operates. We learn that those complex interactions generate what are referred to as 'emergent behaviors': sometimes higher levels of sophistication, but also phenomena that could not have been anticipated at the operational level at which those models work. This is how models, and those who use them, adapt to changing circumstances. We can also see where particular models favor selfish outcomes at the expense of the community at large, and where the community responds to its disadvantage in the economic sphere by changing the rules of engagement through public policy changes intended to restrict or deter overreaching by those considered to be bad actors. More on that later. At the core of all models are the notions of statistical distributions and the ways in which probabilities are to be calculated. They are part of the core knowledge base for any modeler, because they determine the ways in which data are perceived and handled. 
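As a quick illustration (my own sketch, not an example taken from the book), a single rule such as the logistic map can produce several of the outcome classes just described, equilibrium, cycles, or effectively random chaos, depending on nothing but one parameter:

```python
# Logistic map x_{t+1} = r * x_t * (1 - x_t): one simple rule whose
# long-run behavior depends entirely on the parameter r.
def trajectory(r, x0=0.2, burn=500, keep=8):
    """Iterate the map past a burn-in period, then return a few rounded values."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(trajectory(2.8))  # settles to a single equilibrium value
print(trajectory(3.2))  # alternates between two values (a cycle)
print(trajectory(3.9))  # aperiodic, effectively random-looking (chaos)
```

The rule never changes; only the parameter does. That is exactly the point above: knowing which regime a model sits in tells you more than the apparent simplicity of the rule itself.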
A working knowledge of distributions is necessary to measure inequalities in power, income, and wealth, and to perform statistical tests. Professor Page addresses the matter of distributions over two chapters, the first dealing with normal (Gaussian) distributions, often referred to as the bell curve, and the next with power-law distributions, i.e., long-tailed events. Distributions mathematically capture variation within certain types, representing it as probabilities defined over numerical values or categories, with the bulk of observations ordinarily clustered around the statistical mean, and differences represented as deviations from the mean in a standardized format. Power-law distributions differ from normal distributions in that positive feedback loops augment and reinforce the action or trend that the distribution describes. Two models frequently appear where power laws occur. The first, preferential attachment, captures trending preferences about where people choose to live (e.g., cities), which things to buy (e.g., books that become wildly popular simply as a function of their subject matter or word-of-mouth advertising), which places to visit on the Internet (e.g., websites such as Facebook), and so on. The second, self-organized criticality, involves default choices that lead to predictable consequences, where a power-law distribution maximizes uncertainty around a fixed mean while remaining in proximity to large events that, although they occur infrequently, can have devastating consequences. Along these lines, normality is correlated in the minds of many observers with frequency of occurrence, a conclusion justified by the number of smaller events clustered around the historical averages during the relevant time periods in the run-up to the present day. 
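The preferential-attachment mechanism is easy to simulate. Here is a minimal sketch (my own illustration, not code from the book) in which each newcomer links to an existing node with probability proportional to that node's current degree:

```python
import random
from collections import Counter

random.seed(0)

# Flat list of link endpoints: a uniform draw from this list picks an
# existing node with probability proportional to its current degree.
endpoints = [0, 1]  # start with a single link between nodes 0 and 1
for new_node in range(2, 10_000):
    target = random.choice(endpoints)  # degree-weighted choice
    endpoints.extend([new_node, target])

degree = Counter(endpoints)
print(degree.most_common(5))  # a few early "hub" nodes dominate
```

Run long enough, the degree distribution develops the long tail characteristic of a power law: most nodes keep one or two links while a handful of hubs accumulate hundreds. That is the rich-get-richer pattern behind popular cities, bestsellers, and dominant websites.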
It is as if we were traveling on a highway, accelerating as we go, while viewing the highway through the rearview mirror. It is but a small leap of faith to conclude that the future will not be markedly different from the past, and the historical record often validates that assumption. Using the normal bell curve to justify that conclusion is really asking the wrong question, because it inherently incorporates the so-called 'survivor bias' of those who have averted or avoided catastrophe and lived to tell the tale. At best, it is a psychological pick-me-up that reassures us that whatever happens in the future is survivable, because we have done it before. Well, not entirely: by focusing on survivable events (because we, or most of us, did survive), we neglect the risks we are incurring now, with fewer resources and less resilience to withstand future shocks. We have come to realize that the statistical probabilities of those large, infrequently experienced events, as represented within a nominal 'normal distribution', tend to understate the likelihood that those events will occur. In recent decades, for example, turmoil within the financial markets was regarded by many as 'unforeseeable'; well, those events had been foreseen, and warned against. The argument against foreseeability was simply that it was impolitic, unfashionable, and unprofitable for many to acknowledge the obvious risk that major financial services institutions were running at the time. Accounting for that potentially large downside risk simply did not fit their model, which was based upon what had gone before. From the standpoint of statistical analysis, many view normal distributions of data in hindsight from a frequentist perspective, comparing the most recent data against historical averages. The past historical record is therefore often seen as prologue to what will happen in the future. 
This is linear thinking, in which scaling remains within fixed ratios. Contemplating event probabilities that are known to be distinctly nonlinear using linear thinking is apt to produce erroneous conclusions, because the underlying forces scale differently from what we experienced in the past. Within my lifetime we have gone from mechanical calculators to cloud-based supercomputers, with gains in computational capacity and productivity that could not have been imagined 60 or 70 years ago. We are doing business in ways that were inconceivable and unattainable only a few decades ago, using instantaneous worldwide communications that accelerate the size and pace of commercial and financial transactions in real time, in ways that far exceed our collective abilities to design and manage them. By virtue of our enhanced technologies, we have opened the door to effects of scale that could not have been contemplated when the institutions we created to manage the forerunners of those now-enhanced processes were established. We are figuratively 'the sorcerer's apprentice', playing with magic, using newfound powers we can only guess at learning how to control, and which could do us all great harm if misused. Philosopher, teacher, writer (and former stock trader) Nassim Nicholas Taleb coined the phrase 'Black Swan' to describe those extremely rare events that for most purposes are ignored. In a power-law distribution, the probability that an event will occur is proportional to its size raised to a negative exponent. The size of the power law's exponent determines the likelihood and size of large events: the probability of an event falls as its size grows, and when the exponent equals 2, the probability of an event is inversely proportional to the square of its size. For exponents of 2 or less, a power-law distribution lacks a well-defined mean. 
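That claim about ill-defined means is easy to check numerically. The sketch below (my own, not from the book) draws samples from a power law with exponent 1.5 via inverse-CDF sampling and watches the running mean refuse to settle:

```python
import numpy as np

rng = np.random.default_rng(42)

# For a power law with density exponent alpha = 1.5 on x >= 1,
# P(X > x) = x**-(alpha - 1), so inverse-CDF sampling gives
# X = U**(-1 / (alpha - 1)) for U uniform on (0, 1].
alpha = 1.5
u = 1.0 - rng.random(1_000_000)  # uniform on (0, 1], avoids division by zero
x = u ** (-1.0 / (alpha - 1.0))

# The running mean keeps climbing as rarer, ever-larger events arrive,
# instead of converging the way a normal distribution's sample mean would.
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}: sample mean = {x[:n].mean():.1f}")
```

The intuition is that the single largest observation so far dominates the entire sum, so adding more data never stabilizes the average, which is precisely why bell-curve reasoning understates such events.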
The mean of data drawn from a power-law distribution with an exponent of 1.5 never converges; it simply increases without limit. Thus, the larger the event, the less likely it is to occur; but if it does occur, its potential size can be catastrophic. Decreasing frequency is coupled with exponentially larger magnitude of effect, with the obvious corollary that building robustness into the enterprise is a practical necessity. And yet, in the years since the debacle of 2008, what we have seen is increasing pressure to allow those who were responsible for the near collapse of the world economy to behave much as they did before. Models such as those described above are supposed to be teaching tools, but apparently those who were in charge before have learned nothing from the events since then. Time and space do not allow for a broader exposition of Professor Page's excellent analysis; but if the crisp and well-reasoned presentation he makes in his discussion of the effect of power laws on the probability of real-world events is to be taken at face value, the balance of his book is equally well done. Nevertheless, as with any work of substantial size and effort, the sheer weight of the output inevitably leaves patches here and there that would warrant reappraisal and updating in a future edition of this extraordinary book. One point comes to mind: in discussing game theory in chapter 21, Professor Page does not elaborate on what competition between an established business firm and a new competitor seeking entry into the market would look like. This competition depends upon the ability of the competitors to differentiate themselves in accordance with customer preferences, which can take any of the following forms:

- Price/price competition.
- Price/quality competition.
- Price/services.
- Quality versus service, with price a dealbreaker.

I cannot recommend Professor Page's book too highly. 
If I could, I would tuck it into the book bag of every STEM student in high school and college. I would also recommend it highly for students whose plans do not include cultivating prowess in mathematics and science. In point of fact, I would use whatever strategy I could conjure up to induce non-science students to learn as much as they can from what Professor Page provides in his teaching syllabus. This material should be part of every high school curriculum, and educators owe it to their students and their parents to make this information available, in some form, because without it those students will be treading water and getting nowhere as the world they live in becomes overrun with data they are unable to use.

### ⭐⭐⭐⭐⭐ Nice High level primer
*by P***K on March 14, 2025*

Helped simplify some big ideas and what’s coming next in the area

### ⭐⭐⭐⭐ Great introduction to modelling
*by A***N on December 10, 2020*

I'm just an undergraduate computer science student (so take this review with a healthy dose of scepticism), but this is an excellent book on modelling both natural and social phenomena. Its philosophy is to use multiple models to describe a phenomenon the reader is interested in. The author discusses the strengths and weaknesses of each model, and the strengths and weaknesses of modelling in general. He covers rational and psychological models, the bell curve and power-law distributions, linear regression, concave and convex functions, network models, entropy, epidemiological models, Markov chains, path dependence, and many other useful mathematical tools. The author acknowledges the weaknesses of models and does a great job giving a nuanced discussion of employing them with a healthy dose of scepticism and with value judgement. The book ends with an investigation into income inequality and opioids using a "many-model" approach, showing how multiple models can be helpful when trying to understand something. I suppose my only criticism is that the author quotes authorities to bolster the usefulness of the model he is about to discuss (personally, I think it's better to quote authorities in order to disagree with them). Furthermore, I feel that the mathematical rigour is a bit lacking (though I acknowledge this book was written for people who don't necessarily have a strong maths background). Overall, I'd rate this book about 4.3/5 stars. Amazing primer!

---

## Why Shop on Desertcart?

- 🛒 **Trusted by 1.3+ Million Shoppers** — Serving international shoppers since 2016
- 🌍 **Shop Globally** — Access 737+ million products across 21 categories
- 💰 **No Hidden Fees** — All customs, duties, and taxes included in the price
- 🔄 **15-Day Free Returns** — Hassle-free returns (30 days for PRO members)
- 🔒 **Secure Payments** — Trusted payment options with buyer protection
- ⭐ **TrustPilot Rated 4.5/5** — Based on 8,000+ happy customer reviews

**Shop now:** [https://www.desertcart.id/products/301234708-the-model-thinker-what-you-need-to-know-to-make](https://www.desertcart.id/products/301234708-the-model-thinker-what-you-need-to-know-to-make)

---

*Product available on Desertcart Indonesia*
*Store origin: ID*
*Last updated: 2026-05-13*