Two types of deference
This post distinguishes between two different ways of deferring to future humans who understand a lot more about the world than we do.
I. Some futuristic minds
Consider a futuristic version of human civilization, in which human-like capacities for cognition and understanding have dramatically improved. Let’s say, in particular, that humans have figured out how to scan their brains, to replicate the cognitively-relevant causal structure on computers, and then to gradually but radically scale up the size, speed, memory capacity, etc., of the resulting digital minds (it’s not important, to the thought experiment, that this process be identity-preserving). Each such mind, let’s say, occupies an entire 50-floor, warehouse-like building, filled with futuristic computers running near physical limits on storage, communication speed, and processing power (see Sandberg (1999) for some discussion). These minds can interact with the world via various sophisticated sensors and robots, and they can communicate extensively with each other via various high-bandwidth channels.
Let’s say that together, some very large number of such minds (10 million? eventually there are space limitations) set out on a coordinated project of improving their collective understanding of the world, at all levels of abstraction. This is not a narrowly “scientific” enterprise, but rather one that encompasses all truth-oriented aspirations involved in human art, literature, philosophy, economics, and so forth — as well as in many other practices, which our present civilization does not engage in. These minds aren’t just looking for the most fundamental laws of physics, or the true theorems; they’re looking for everything that any human trying to understand any aspect of the world has been looking for (though they’ll presumably need some ways of prioritizing). They can use their robots, computing resources, etc., to gather all the empirical data they want. Let’s say that they spend roughly 1,000 objective years on this project.
To be clear: I’m not trying to give a realistic picture of what futuristic human inquiry will be like, or what versions would be desirable. Mostly, I want to evoke with some concreteness the possibility of extreme differences in scale, intensity, and quality of inquiry — as well as in the cognitive capacity of the inquirers — relative to present-day humans.
II. Concrete deference
Everyone will presumably admit that the epistemic position of these futuristic minds, after 1,000 years of inquiry, is superior to our own. But we can distinguish between at least two ways of thinking about this superiority — a distinction somewhat analogous to the distinction between “concrete Utopias” and “sublime Utopias” I wrote about a few weeks ago. I’ll call these “concrete deference” and “sublime deference.”
Concrete deference takes for granted a background set of concepts, and then focuses on the sense in which these futuristic minds are able to answer the questions that these concepts make available.
This is clearest, perhaps, in the context of comparatively mundane empirical questions. Thus, today we have questions about e.g. which minimum wage policies will have what effects on unemployment metrics, or about what drugs would have what effects on cancer, or about how babies learn. The futuristic minds would presumably be able to answer such questions much better than we can.
But we can concretely defer to futuristic minds about more abstract and philosophical questions as well. Thus, for example, we might think of these minds as knowing the true theories of, e.g., population ethics, well-being, meta-ethics, and the nature and distribution of consciousness; the right approach to anthropics, decision theory, or reasoning in the face of uncertainty; and so forth.
That is, concrete deference thinks of these futuristic minds, centrally, as better informed about the topics of today’s discourse than present-day humans are. And in this sense, it treats us, in the present, as understanding the broad framework within which their conclusions about these topics will fit; we just don’t know what those conclusions will be (though in some cases, we have guesses, and we assume that some people’s views today will be “proven right” by the test of time).
III. Sublime deference
Sublime deference towards these futuristic minds, by contrast, does not situate their views within our current concepts, but rather focuses on the sense in which their perspective is incomprehensible from our present vantage point. They are thinking in ways we can’t, about topics we cannot conceive of; they are reaching conclusions we haven’t come close to considering; they’re interacting and communicating in radically new ways, and building new means of understanding and structuring those interactions. In these respects, the difference between us and them is less like the difference between student and teacher, or between Newton and Einstein, and more like the difference between mouse and Einstein (or, perhaps, between rock and mouse).
There is some analogy, here, with the distinction, made famous by Donald Rumsfeld, between “known unknowns” and “unknown unknowns.” That is, we know that we don’t understand dark matter, and we hence expect future people to understand it better. But just as the cavemen, and the mice, don’t know that they don’t understand dark matter, we don’t know that we don’t understand X-unknown-unknown, because it’s currently too far beyond our ken. The sublime perspective expects a lot of this type of thing.
That said, the distinction between known and unknown unknowns can be blurry. For example, maybe we know that we don’t know why there is something rather than nothing (I’m currently skeptical of Kraussian explanations). But this feels different from knowing that we don’t know whether there’s milk in the fridge. One suspects that some “beyond-our-ken”-ness might be involved.
(Note that the relevant “unknown unknowns” need not come from disciplines like physics and math and philosophy, with a reputation for epistemic exotica. The literature, economics, art, social critique, etc., of these futuristic minds may be just as sublimely unfamiliar and insightful. And let’s not forget all the “unknown unknown” disciplines…)
Some of these “unknown unknowns” may function centrally to add things — explanations, concepts, etc. — that are currently missing from our basic picture of the world. E.g., the cavemen knew about rocks, and so do we, but we also know about electrons, galaxies, Nash equilibria, and so forth.
But the sublime perspective (or at least, one version of it) also expects a lot of revision, including conceptual revision. Thus, e.g., perhaps people in 1500 would expect us, today, to have figured out the best exorcism techniques; perhaps Aristotle would expect us to have a well-worked-out theory of the “vegetative soul.” But while we can describe how we understand things in the vicinity of what these people have in mind, we don’t really think in those terms anymore.
(Note that this is distinct from, e.g., explaining rocks in terms of particles. We understand rocks better than we used to; but we still use the concept of “rock.”)
IV. How good are our concepts?
Pretty clearly, our deference towards these futuristic minds should be both concrete and sublime. That is, such minds will have answers to many of the questions we ask today; but they’ll also have many answers to questions we don’t and can’t ask, and they’ll think about many of the questions we do ask in radically different ways.
Maybe sublimity sounds sexier, and humbler, and more consistent with the type of “vastness of mindspace” aesthetic I described in the Utopia post. But I don’t think it should wash out all concreteness, or be put on a pedestal. We actually do know a lot of stuff about the world (see e.g. Sean Carroll’s discussion here, though cases of more mundane understanding seem easier to evaluate), and the concepts we use to structure that knowledge and its ongoing gaps often seem pretty good. Future improvements in understanding won’t throw everything wide open.
I’m interested, though, in cases where concrete deference should actually be more sublime — for example, cases where the assumption that these futuristic minds would still be thinking in terms remotely resembling our own goes substantially astray. I can readily imagine this happening, for example, in philosophy. Thus: sometimes, when faced with perplexities relating to, e.g., population ethics, or to the distribution of consciousness or moral weight amongst non-human animals, we might imagine a future in which humans have identified the true theories of these things, and have the option to act on this understanding. In this context, though, it seems worth keeping in mind that future humans may not be thinking in terms remotely resembling our own about these topics, to the extent they remain “topics” at all. Philosophy, centrally, is a place we go when we’re confused; and it does seem like, in cases like consciousness and population ethics, we’re still pretty confused, even if our current confusion gets framed as a debate between views that could be straightforwardly true or false. I’m inclined to expect many questions to be un-asked, or re-asked in a radically different and better way, instead of simply answered.