I've long thought the same. UVs are making their own tradeoffs (however un- or semi-consciously), but in microcosms and on accountability nodes where any consequences can be distributed across a network, or alleviated by fuzzy "futures" (which are, again, sold with 100% confidence).
The similarities to the prototypical "confidence game" are unavoidable. In fact, I think this describes the same phenomenon. And it does explain the persistence, when we realize that expenditures of confidence are essentially a money printer that goes brrrrr in the head. Strangely, at most nodes in your very professional and scientific graph, the UV is ackually acting more economically(!) than the CV. But if we charted it through time along with rolling cost factors... yeah, we're screwed.
I think you are nailing it on the economic angle: the UV is engaging in rational irrationality, acting crazy when there is no immediate consequence, making expressive decisions the way most voters do, or acting crazy when the consequences for crazy are good. The problem is the cost of punishment on the part of more CV people: keeping track of when people were too confident and badly wrong, and then punishing them accordingly (or at least withdrawing status and belief in those people), is a lot of work. Especially if a person is overconfident in a way you like, you will not tend to make them pay the price for their bad behavior, and thus you incentivize it. Keeping score on that sort of stuff is easier with people we know personally in day to day life and extremely difficult with those we only know through TV or whatever. Likewise it is easier to see in short run events ("You said you knew how to bake a cake, and now my kitchen is on fire!") than in long run ones ("Wait, so the earth is doomed 10 years from now... ok, I guess I will check back with you then...")
Reminds me of some of the stuff John Carter has been writing about recently re left brain/right brain. In a left-dominated society, everything is a system... quantifiable and measurable; therefore, solving the problems humanity faces is akin to solving a complicated math problem. But reality is messy and complicated, hence the "no solutions, only tradeoffs" bit. As Jesus said, "The poor will always be with us."
As I've been pontificating on monarchy lately, this strikes me as an argument for monarchy over democracy: Where democratic politicians enthusiastically espouse the unconstrained vision and say things that feel good to get elected, the King, or leader, or whatever, can tell it like it is, and tell everyone who's not on board to fuck off.
On the right/left brain thing, the thing of it is that they are both right. It's like that old saw about the blind men describing an elephant based on touch alone, one saying it is like a tree, one saying it is like a snake, one getting crushed to death, etc. Even if one does manage to quantify everything, you will still have tradeoffs to make, and even if you can't quantify everything, there are things that can be changed in a predictable way. Both sides of the brain need each other. Everything is a tradeoff because the world is not perfectible to meet our needs or desires. We are finite, so we have to choose between infinite options.
Unfortunately, monarchy doesn't solve this problem. Those promising unconstrained outcomes are promising them to the king instead of the people, so that's nice, but it doesn't mean it goes away. It also doesn't mean that the king doesn't succumb to unconstrained ideas about what he ought to be able to accomplish. Indeed, monarchy is one of the biggest draws of unconstrained visions, exactly because people imagine having the great philosopher king that will solve all their problems, cut through all the nonsense, put the nobles in their place and cure scrofula, yet it never quite works out.
The elephant bit is gold. Hadn't heard that one before.
The point about unconstrained narratives flowing upward to a king in a monarchy, versus downward from politicians to the masses in a democracy, is a really salient one I hadn't considered. But then you also point out that a hypothetical philosopher-king could get lost in all the noise of unconstrained hypothetical visions. It really highlights the single-point-of-failure issue of monarchy-- the King needs to be an S-tier specimen, which is a tall order.
Why do I feel like 'naive realism' is just an insult pretending to be a diagnosis?
It does start to double back on itself, doesn't it? Saying someone has to assume their opponents are psychologically broken or evil to not agree with them, and then saying that they are therefore psychologically broken :D
I think that if one is careful and sticks to observing their claims alone, however, it is more of a diagnosis. I rather prefer the "unconstrained vision" for that reason though, as that concept focuses on the beliefs about outcomes, not so much what is going on inside.
There is a further issue: The failure to recognize 'naive realism' as an effective strategy by ordinary people to help maintain the order to which they are best adapted.
The 'constrained vision' is presented as the vision of experts.
'Experts' have proven themselves again and again to be the enemy of ordinary people.
Opacity and mulishness are the chief instruments that ordinary people have to resist the technocracy that these 'constrained vision' experts serve.
Technocrats *always* present their cultural and political biases as 'scientific'. In fact, you could say the entire system of training 'experts' is directed to finding 'scientific' justification for doing what the ruling class wants done.
It's not objective in any way. It's selective in the extreme.
99.99999% of the time, 'nuance' is a rhetorical ploy deployed to defuse the moral claims of ordinary people, which 'lack nuance' and therefore can be ignored because they 'lack nuance'.
There is a lot there to address. I think you are misinterpreting what "naive realism" is in reference to. Typically the naive realism view is used by experts/politicians to sell their plans to ordinary people: the way to fix any given problem is knowable and known, and therefore if you don't get on board with fixing that problem, for whatever reason, you must be bad and evil. When it is used as a defense mechanism, it is a poor one, analogous to using leeches to treat disease. There are actual diseases for which leeches (and bloodletting) are functional treatments, but generally it is a bad idea, not as good as other options. Likewise naive realism is sometimes good if you happen to actually have a pretty decent idea of how the world works and don't overapply it, but generally it only leads you to insist on doing things a particular way and forcing other people to do the same. The lack of humility leads people into doing all sorts of bad things, such as treating those who disagree with them as evil.
Likewise, you seem to have the constrained and unconstrained vision flipped. The constrained vision is that you can't have utopia, that you can't tailor the world to your preferences but have to make tradeoffs. Technocrats are generally those dealing in the unconstrained vision, that there are solutions to problems that don't require tradeoffs, that with enough study and understanding things can just be made the way they want.
I put a return to the more uncertain state at further levels of knowledge because with enough wisdom even experts eventually realize that you can't control things and make them turn out exactly as you like, if only because you can't know enough. Not all experts get there, and in fact people prefer listening to unduly certain experts and so incentivize against getting there; it is a human failing, it seems.
Not all subjects hit that point, though; something like car repair is enough of a closed system that high certainty of how to fix something is possible. For most sociological questions, however, things get pretty far to the right of that graph, and most real experts say “Well, dealing with the problem will look something like this, but we can’t promise a specific outcome and we really need to be on the lookout for unintended consequences and be ready to change direction if things go bad.”
Annnnnnnd we're back to my favorite hobby horse--technocrats/UVs mistaking complex problems for merely complicated ones.
Very much so, yes.
I should essay a little piece on that, come to think of it. It would dovetail pretty well with Caplan's essay on free will a week or two back. I just started a new job last week and I really have my work cut out for me, not to mention much earlier work hours, so it might be a little while in coming, though.
Everything you've said is right; however, I think you've missed a fundamental reason why UVs and CVs exist.
It is because people are dumb and use incomplete information to make decisions.
Even without a soothsayer telling people sweet lies, there is a high probability that a group of ignorant people will propose an ignorant solution.
As I see it there is a spectrum flowing from the most unconstrained bullshit, "fridges are magic", all the way to technical descriptions of adiabatic cycles.
Certain people are unable to understand above a certain threshold. Once they reach that cognitive limit, they will propose a solution at the same level as their thoughts, one that seems oversimplified only to those who see more detail, and overcomplicated only to those who see less detail.
So rather than being born from either a desire to appear smart with low effort or a cost-benefit analysis of political issues, I propose a third reason: sheer stupidity, with complete confidence in their view.
To clarify, I'm not talking about people who cut corners and don't bother researching because they want to save time; I mean people who could never understand even if they spent their lives researching the topic.
It might be hard for us to believe that they could exist, but I propose that they do.