On Naïve Realism and a Conflict of Visions
Building on Kling's point, via an extended comment
The general gist of Naïve Realism is that the naïve realist believes that they perceive reality fully and accurately, and so their beliefs and sense of reality are sufficiently accurate to come up with straightforward solutions to problems, no matter the complexity. So no matter the problem, the solution is known, either by the realist or by some specialist. We just gotta do it.
Naïve realism is then essentially the unconstrained vision as described by Thomas Sowell in “A Conflict of Visions”: those with the unconstrained vision think there isn’t a question of tradeoffs or even an inability to solve problems, but only a desire not to fix the problems because of evil or rent seeking, etc. Utopia is possible, we just gotta do it.
All this is as opposed to the constrained vision, which basically holds that reality is really complex, so problems can’t be solved so much as tradeoffs made that might be a little better or worse, and indeed some problems can’t be solved at all. Certainly any “solutions” are not going to be simple, and probably are going to involve trial and error and debatable tradeoffs. That isn’t to say that problems never have relatively simple solutions, depending on their source. Sometimes the answer to crime is “actually prosecute the criminals.” Yet most of the time social problems are really complicated and exist not because people are evil and refuse to fix them, but because there are rough tradeoffs. There will always be some crime, and driving it to zero is all but impossible, the tradeoffs being untenable. There are diminishing returns, in other words: the more of a problem you want to remove, the more costly removal becomes.
Anyway, Kling questions why the unconstrained vision can continue to exist in an evolutionary sense.
The unconstrained vision is just wrong. Social problems really are complex. Why do some people have the unconstrained vision? One possibility is that this is the way the world presents itself to them.
The other possibility is that seeing social problems as having simple solutions is a psychological flaw. It is something people want to believe, even though it is false. Although Jeffrey Friedman opposed reducing other people’s beliefs to psychological flaws, I think that those of us with the constrained vision have little choice but to see the unconstrained vision as a psychological flaw. It is impossible for Dan Williams or me to believe that the unconstrained vision, in which social problems have clear and obvious solutions, is true. So we look for psychological roots for people having false beliefs.
…
If I do not profess a belief that social problems have a simple solution, then people will accuse me of not caring about the social problem. They can accuse me of wanting to let people suffer from homelessness or poor education or poor health or what have you.
This seems like a pretty strong point. If you profess that you care about X, but don’t want to go all in on X because of the tradeoff with Y, it makes you seem like you care less about X than someone who refuses to acknowledge the tradeoff and goes all in.
Here’s where I want to build on this a little. Kling’s explanation assumes an audience that already holds the unconstrained vision and doesn’t understand that a tradeoff is necessary, because those unconstrained vision folks (hereafter UVs) are the ones who reward or punish your professed beliefs. If that’s the whole story, constrained vision people (CVs) will push back and see you as a simpleton, as we do. As there is negative feedback from reality for holding the unconstrained vision, we should move towards an equilibrium with no UVs.
So profession of beliefs shouldn’t be sufficient to maintain the existence of the UV. Fortunately, we have good reason to believe that being a UV is cheaper in another important way: you don’t have to spend time and energy developing enough knowledge to recognize that the situation is really complicated, much less enough knowledge to understand and make arguments about what the proper tradeoff is. By being a UV you get to act as though you know enough about any complex situation to have a simple answer, thus gaining status, while at the same time not needing to do any work.
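To put rough numbers on that cheapness, here is a toy expected-payoff comparison (a minimal sketch in Python; every variable name and number below is my own invention for illustration, not drawn from Kling or Sowell). It just formalizes the claim above: if certainty is rarely tested, skipping the knowledge work beats paying for it.

```python
# Toy model of the UV/CV incentive gap. All numbers are made up for
# illustration; the point is the structure, not the magnitudes.

knowledge_cost = 1.0    # effort a CV spends learning why the problem is hard
status_payoff = 0.8     # social reward for confidently professing a solution
reality_penalty = 2.0   # cost paid when a simple "solution" visibly fails
p_tested = 0.1          # probability one's certainty is ever actually tested

# UV: collect the status payoff, skip the studying, and only rarely
# face reality. CV: pay the knowledge cost up front.
uv_payoff = status_payoff - p_tested * reality_penalty   # = +0.60
cv_payoff = -knowledge_cost                              # = -1.00

print(f"UV expected payoff: {uv_payoff:+.2f}")
print(f"CV expected payoff: {cv_payoff:+.2f}")
```

Raise p_tested far enough (skin in the game) and the ordering flips, which is exactly the negative feedback from reality that the no-UV equilibrium argument relies on.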
As shown in this expertly constructed graphic, the relationship between knowledge and certainty is not direct. Honest yet ignorant people will generally answer “I don’t know” when asked how to fix a problem they know next to nothing about. However, even honest people will display a lot of certainty when they feel like they know a bit[1]. A few clever slogans or bits of remembered prose and people feel like they have a full grasp of a problem they have no experience trying to solve.
Of course, once those people have a little experience with the problem they very quickly realize “Crap, this is really tough!” More experience and study of the problem will make them better grasp the gap between what they know and what there is to know. Certainty drops and gets close to “I don’t know”, although it hovers around “well, this might help, but here are other problems…” Then they learn even more, and certainty gets high again as they become genuinely expert, especially compared to most people, and they are confident that they can deal with most problems.
Then there is that last drop in certainty that comes from having lots of experience and realizing that most problems are all but unique, that knowing ahead of time how to solve them is impossible, and that there will be lots of unexpected tradeoffs and costs to consider. Not all subjects hit that point, though; something like car repair is enough of a closed system that high certainty about how to fix something is possible. For most sociological questions, however, things get pretty far to the right of that graph, and most real experts say “Well, dealing with the problem will look something like this, but we can’t promise a specific outcome and we really need to be on the lookout for unintended consequences and be ready to change direction if things go bad.”
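For the curious, the shape just described is easy to sketch. Here is a hypothetical reconstruction in Python (the functional form and every constant are chosen by me purely to reproduce the described peaks and dips, not taken from the original graphic):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical certainty-vs-knowledge curve: near zero for the honestly
# ignorant, a spike for "knows a bit", a dip for "crap, this is tough",
# a second rise for genuine expertise, and a final seasoned-humility drop.
knowledge = np.linspace(0.0, 1.0, 500)

novice_peak = 0.95 * np.exp(-((knowledge - 0.15) ** 2) / 0.004)
expert_peak = 0.80 * np.exp(-((knowledge - 0.75) ** 2) / 0.020)
certainty = novice_peak + expert_peak

plt.plot(knowledge, certainty)
plt.xlabel("knowledge of the problem")
plt.ylabel("certainty about solutions")
plt.title("Certainty vs. knowledge (illustrative reconstruction)")
plt.show()
```

Note that the closed-system cases like car repair amount to truncating the x-axis before that final drop, which matters for the argument below.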
If this graph is roughly true (which it clearly is, I mean, look at that design), we can see the costs and benefits of knowledge at play, and why being a UV saves quite a bit. Especially when one doesn’t have skin in the game, expressing certainty will tend to signal that one is not in the very ignorant range and indeed might be in the expert range. This in turn signals that one is knowledgeable and thus higher status, someone to listen to.
But wait! Why wouldn’t people just assume that you are a hack, claiming certainty because you don’t know enough to realize you don’t know enough?
Well, I think that most situations people have direct experience with are closer to car repair or the like, where the graph is cut off before that dip into uncertainty at the far right. As such, certainty goes with either “knowing a little” or “knowing a lot”, while uncertainty goes with either “knowing next to nothing” or “knowing a bit more than a little”. The worst case is to appear to know nothing[2], so better to pretend to know a lot. Other people want answers, and will often defer to someone who seems certain, but will rarely defer to someone who says “We can’t know for sure”.
Which brings us to the other big reason the unconstrained vision continues so strongly: it sells. Politicians and consultants[3] get hired because they have clear solutions to the problems people want to get rid of. It is really hard to get elected on a platform of “Look, you have to help yourselves because no one else can solve your problems, and government sure as hell can’t.” No, people want to hear “I know the solution to all your problems, just give me a bunch of power/money and it will be all beer and skittles from here on out.”
In a leader the unconstrained vision is a cheap ticket to status and power. Making it work comes after that, and is thus a problem for another person.[4] As a result, we can expect UVs to continue so long as there are positions of power, status and money for them to occupy such that professing the UV is advantageous.
For the average person, however, positions of real power are out of reach. Yet there are many points of relative power and status that can be attained more easily by excess certainty. Getting your way in a committee is easier if you are entirely uncompromising and bull your way through it. People at work take you more seriously if you claim certainty. That sort of thing.
Moreover, there is status to be gained by agreeing with high status and powerful individuals, especially on topics where one’s knowledge or certainty will never be tested. It signals group affinity and status by association with a higher status person. Plus, since it will never be tested, it is cheap. (And if it is tested and fails, well, it just wasn’t executed properly, so it isn’t that you were wrong…)
So, there it is. People like certainty, so the unconstrained vision persists because it allows a great deal of it, even, or especially, when that certainty isn’t warranted. The constrained vision, although more accurate, promises only tradeoffs and doing the best you can with a situation. It is the difference between the promise of someone else solving all your problems and having to deal with them yourself for the most part.
Notable, however, is that the “evil” explanation does come into play. Those who falsely claim certainty are at the least unvirtuous, and honestly I would assume evil at levels where power and status are such that those fighting for them are smart enough to know better[5]. Even if someone knows that their certainty is unwarranted they have incentive to lie and behave badly, particularly if they think that blame for being wrong can be pushed onto someone else.
The human condition has some serious problems. The unconstrained vision, and all the incentives for it, is one of them.
[1] I used to use this example in class a lot: First, I would ask students “Who here knows how a refrigerator works?” Maybe a third of the students would raise their hands. Then I would say “Oh, good! Could one of you come up and draw a quick diagram on the whiteboard for us?” Suddenly there were a lot fewer hands in the air, generally zero. Knowing a little feels like you know enough, until you actually need to apply the knowledge.
[2] People value humility less than they admit, and far less than they should.
[3] Or their more historical names such as shamans, wizards, soothsayers, priests, etc.
[4] Future you is a chump whose plight can easily be ignored; past you is a lazy asshole who totally screwed things up; present you is somehow blameless.
[5] See Poverty Inc. for an example where those pushing “solutions” are deeply incentivized to keep the problem going.
I've long thought the same. UVs are making their own tradeoffs (however un- or semi-consciously), but in microcosms and at accountability nodes where any consequences can be distributed across a network, or alleviated by fuzzy "futures" (which are, again, sold with 100% confidence).
The similarities to the prototypical "confidence game" are unavoidable. In fact, I think this describes the same phenomenon. And it does explain the persistence, when we realize that expenditures of confidence are essentially a money printer that goes brrrrr in the head. Strangely, at most nodes in your very professional and scientific graph, the UV is ackually acting more economically(!) than the CV. But if we charted it through time along with rolling cost factors... yeah, we're screwed.
Why do I feel like 'naive realism' is just an insult pretending to be a diagnosis?