My Epistemology, Without Argument
Not that You Asked
About the Author
Bryan Frances is the world’s only intellectual wisdom coach. He’s a former professor of philosophy & logic, doing research & teaching at universities in the US, UK, Europe, Asia, and the Middle East. He teaches you how to become the wisest thinker in the room—which is different from being the most knowledgeable or having the highest IQ. Contact for a free session.
A number of people have asked me for my views in epistemology. More specifically, they ask about my views on
the typical philosopher’s knowledge of philosophically controversial claims (I’m skeptical)
philosophical progress (I’m not skeptical).
On the face of it, my views seem to be in tension: if I’m right to be skeptical about the typical philosopher’s opinions on philosophical matters, then how on Earth is there tremendous progress in philosophy?
In this post I present some of my views—without even a hint of defense. I wish more philosophers would do this: “Here’s my view, without argument”. Just hearing what someone’s views are can be illuminating, since it can suggest ideas one isn’t already familiar with. With the better philosophers, like David Chalmers, Tim Williamson, and so on, these testimonies can serve as evidence that there are probably really good arguments for the views they endorse.
I’ll limit myself to eight theses. As you’d expect, I have others.
First Point: Frequent Weakness of ‘Know’
I suspect that most ordinary tokens (instances, particular uses) of ‘S knows that P’ (e.g., ‘Tom knows that Mary is having an affair’) have quite weak requirements in order to come out true (they have weak “truth conditions”). The weakness shows up in at least two places: (i) the strength of S’s evidence/justification for her belief in P, and (ii) the strength of her grasp/understanding of P.
Regarding evidence, I suspect that in a great many cases, all S needs, in order for ‘S knows that P’ to come out true, is that her reasons/justification/evidence/grounds for believing P aren’t stupid. For instance, if Tom thinks that Mary is having an affair, and it turns out that she is, then even if his evidence isn’t really that great, ‘Tom knows that Mary is having an affair’ might often come out true anyway, in most ordinary conversational contexts. Evidence is nice to have, and a mild amount is necessary to rule out holding the belief for stupid reasons or no reasons whatsoever, but the main thing needed for knowledge is true belief.
Regarding the understanding-that-P condition, does an 8-year-old child know that Obama was a Democrat, after being told so by multiple adults? Well, her evidence might be overwhelming: she has heard it from her parents, her school teacher, her older sister, her aunt, the TV, and so on. But she barely has any idea what a Democrat is. You could say that she believes it, but just barely, since her grasp of the belief is so weak. I’m suggesting that that’s often accurate: knowledge that P requires very little understanding of P. This applies to things like ‘John knows that E = mc²’ too, where John is an intelligent adult, not a child. Like most people, John has almost no idea what the Einsteinian scientific claim is. He’s like the child with ‘Democrat’. In order to know some truth P, you don’t need much of a grasp/understanding of it. That’s my suspicion.
Hence, I’m saying that for many if not most tokens of ‘S knows that P’, in order for them to be true, the believer needs only a very weak grasp of P and only pretty weak evidence as well—as long as P is true.
Second Point: Polysemy of ‘Knows’
It could be that ALL tokens of ‘S knows that P’ have the same, weak, requirements noted above. So, for instance, no matter what the conversational context, the strength of evidence needed for knowledge is pretty low. I think it’s more likely, though, that ‘S knows that P’ is interestingly polysemous, meaning it has quite different, and differently demanding, truth conditions for different tokens (interpreted literally; I’m not considering metaphorical claims for instance). The factors that determine the truth conditions are probably various.
An analogy helps make this idea comprehensible.
In some conversational contexts, in order for an event to correctly count as a “miracle”, it has to be caused by something supernatural—by definition. That’s one meaning of the term ‘miracle’. But in other contexts, a “miracle” need not have anything to do with supernatural goings on.
For instance, we talk about miracles in sports all the time. In those cases, a “miracle” is, to a first approximation, an amazing, incredibly rare, and unexpected event. It need not have to do with anything supernatural.
In yet other contexts, a “miracle” is something wondrous and somewhat baffling—but it need not be rare at all, as it is in sports. For instance, many people will say things like ‘Every birth of a baby is a miracle’, with of course the understanding that there’s nothing rare about babies being born. Sure, some of those people are saying, pretty explicitly, that something supernatural is going on with every birth. But for others, it’s the wonder, amazement, and continued mystery of pregnancy and birth that makes births miracles—even if they insist that the mystery doesn’t involve anything supernatural.
So, it’s reasonable to think ‘miracle’ has several different but vaguely related meanings. That’s pretty much what it takes for a term to be polysemous. (It’s interesting that ‘polysemous’ is itself polysemous. Words that satisfy that odd condition of being true of themselves are called autological.)
I suspect that ‘S knows that P’ is like ‘miracle’.
Sometimes all it takes for ‘S knows that P’ to be true is for the weak condition noted above to be met—weak when it comes to evidence/justification and weak when it comes to grasping the belief. But on other occasions, in order for a token of ‘S knows that P’ to be true, S has to understand/grasp the belief quite well and have really impressive evidence—perhaps evidence that meets a certain strict community standard. For instance, in order for ‘Jan knows that the chemical sample in the lab apparatus contains magnesium’ to be true, Jan the scientist needs kick-ass evidence and a real grasp of what the hell magnesium is.
To be clear, I’m not saying that there are just two meanings for ‘S knows that P’, weak and strong. I suspect it’s a lot messier than that, like in the ‘miracle’ case. And like I said at the beginning, I’m not offering supporting arguments here.
I understand that many philosophers throughout history have wanted to glorify knowledge: knowledge is a grand thing, a great accomplishment. I think that’s sort of right but sort of wrong. It depends on the context. The 3-year-old child knows that Mommy is laughing, the blanket is warm, and the dog is outside. Not exactly glory there; knowledge can be virtually effortless and require no fancy cognition. When philosophers and others from centuries past insist that knowledge is a great accomplishment, I think it’s wise to charitably interpret them as talking about a special “subset” of knowledge. That’s cool.
You can’t give an account, or theory, of what a miracle is—because ‘miracle’ doesn’t pick out just one thing. The best you can do is explain that there are several notions here picked out by various uses of ‘miracle’, and maybe after that you can offer accounts/theories of each separate meaning.
Many people think that there must be some correct account of knowledge. “Here’s my idea of what knowledge is”, they say. That’s a kind of presupposition for many of them. Due to the polysemy of ‘S knows that P’, I suspect they’re wrong about that: there can be no “account” of knowledge.
Third Point: Often No Single Truth-Value
For some tokens of ‘S knows that P’, multiple truth conditions apply to those tokens about equally well, so we can’t evaluate the tokens using just one truth condition. In other words, it’s often the case that there are at least two roughly equally appropriate ways to interpret and assess the token of ‘S knows that P’. In these cases, there is nothing in the speaker’s intentions or words or other contextual factors that indicates which single interpretation is relevant to evaluating her claim ‘S knows that P’.
Here’s an example. Bo says ‘Tom knows that Mary is having an affair’. We might think Bo’s remark is true, because Tom’s evidence isn’t stupid and Mary really is having an affair (and Tom knows what an affair is). Then again, we might think Bo’s remark is false, because we are employing a stricter meaning for ‘knows’, and Tom’s evidence isn’t good enough on that meaning. There was nothing in Bo’s mind or words or conversational context that indicated which standard to apply when trying to figure out if his remark is true.
So, Bo’s remark doesn’t have just one truth-value (“true” or “false”). Asking “Is Bo’s remark about Tom true or false?” is a mistake, albeit a natural one to make.
Fourth Point: Skepticism & Presupposition
Just because I think that many tokens of ‘S knows that P’ have weak truth conditions (as explained above) doesn’t mean that I think that those conditions are actually met in ordinary life. Unlike some philosophers, I take seriously the possibility that we have no external world knowledge, even on a low-standards interpretation of ‘knows’. I don’t accept that skeptical view of knowledge, but I don’t reject it either.
As a result of this opinion of mine, for my own assertions of the form ‘S knows that P’, there is something like an implicit, general presupposition that governs them, one that goes something like ‘Assuming we have tons of ordinary external world knowledge’. I have this presupposition both in ordinary and philosophical discourse. I acquired it only after studying epistemology for a number of years. Of course, the people I’m talking to are unaware of the presupposition. That’s fine. Adequate communication doesn’t need perfect accuracy in interpretation.
There’s a lot more to say about this, and I said some of it in this BOOK coauthored with Michael Huemer. I won’t go into it here because it just can’t be done in a brief way.
Fifth Point: Epistemology Under Skepticism
Even if we don’t have knowledge of the external world, I don’t think that means that epistemology is a disaster. We still have many true beliefs, they have varying degrees of evidential support (very briefly: I think ‘evidence’ is polysemous too), many of them are good enough to rely on in many contexts both ordinary and scientific, and so on. We even have wisdom, which gets filled out as coming not from external world knowledge but from certain key true beliefs with loads of supporting evidence, amongst other epistemic things.
Sixth Point: Possible Incoherence of Truth
Unlike most philosophers, I take seriously the idea that truth is an incoherent notion, and none of our beliefs are true (not even ‘2 + 2 = 4’ or ‘Dogs are dogs’). I take this seriously because I think the semantic paradoxes (such as the Liar paradox) seem to generate reductios of truth. Perhaps ‘is true’ is polysemous too, so ‘2 + 2 = 4’ is true for one meaning but not true for another. I don’t know about these things. In any case, I have another general presupposition behind my assertions, which goes something like ‘Setting aside those weird paradoxes’. When, in the fifth point above, I said that we have loads of true beliefs, this presupposition was in force.
People who say that truth is “relative” or “contextual” are saying that some things are indeed true—they’re just true relative to something. The idea that truth is incoherent is quite different. It’s the view that nothing is ever true, no matter what the “context” or whatever. In addition, the incoherency view is not dialetheism, which definitely says that many claims are true.
I have this ‘Setting aside paradoxes’ presupposition not only because of the semantic paradoxes but because of some other considerations, ones I won’t go into here.
Seventh Point: Kinds of Progress in Philosophy
I think there is tremendous progress in philosophy. I believe this mainly because I have experienced it so many times, in virtually every subfield of philosophy. No doubt, it’s a challenge to figure out how to accurately characterize it. But that it exists at all is obvious from a first-person point of view, at least for me. I talk about that a bit in this POST.
There are many aspects of philosophical progress, some more obvious than others. Here’s one way to characterize one particularly important aspect:
Assuming truth is coherent, philosophers have an impressive amount of agreement on many philosophical issues, many of those agreed-upon philosophical claims are true, our evidence for them is quite a bit better than “not stupid”, and the vast majority of philosophers from long ago either (a) weren’t even aware of those claims (and so hadn’t considered them), (b) were aware of them but didn’t believe them, or (c) believed them but didn’t have supporting evidence as impressive as ours; moreover, people today who are outside of philosophy know little of them.
Yes, there is a massive amount of disagreement in philosophy, but there is massive agreement too. I defend part of this thesis in a forthcoming ARTICLE—which is a kick-ass one, if I say so myself, because it’s utterly original.
Eighth Point: Skepticism about Philosophical Opinion
The philosophical truths we agree on (mentioned in my previous point) are largely unmentioned in philosophical writings and teachings. Philosophers tend to focus exclusively on the big, controversial bits. That’s unfortunate, since it gives nearly everyone the wrong impression of philosophy.
Even so, on virtually all the big philosophical theses we actually discuss and disagree on in our research, I’m skeptical about most of our opinions on the truth-values of those theses. Roughly put, almost no one should have opinions on whether physicalism is true, for instance.
My skepticism here is not the result of my thinking that the requirements for ‘S knows that P’ to be true are very high in those assessment contexts. I suspect that even under a weak meaning of ‘knows’, there is something epistemically bad about the opinions of the vast majority of philosophers on those controversial philosophical claims.
I have two arguments for this opinion, which have different but closely related conclusions. Here is a shortened version of the first argument:
1. Suppose (i) you’re a philosopher who believes philosophical claim P, (ii) P is controversial amongst the philosophers who have investigated it the best, and (iii) you’re not significantly superior to them in knowledge of the topics relevant to P. In other words, P is some ordinary philosophical thesis, and you’re a philosopher but not a philosophical god compared to others.
2. Generalization, G: if (i)-(iii) are true, then the odds are high that your evidence (justification, grounds, etc.) for P isn’t very good. In brief, if it were good, then since the philosophical community has seen your evidence (that’s a realistic assumption), and as a group they still don’t accept P, that evidence probably isn’t so great.
3. Hence, the odds are high that your evidence for P isn’t very good.
4. If your evidence for P isn’t very good, then you don’t know P. If P were some weird proposition, like ‘0 = 0’, then maybe you wouldn’t need evidence, but P is an ordinary, run-of-the-mill philosophical view.
5. Hence, the odds are high that you don’t know P.
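For readers who like seeing the skeleton laid bare, the first argument’s propositional core can be sketched in Lean. This is my own illustrative formalization, not anything from the original argument: it idealizes away the “odds are high” qualifiers (treating them as plain implications), and the proposition names are hypothetical labels.

```lean
-- Propositional skeleton of the first argument, with the
-- probabilistic hedges ("the odds are high") idealized away.
-- All names are illustrative labels, not established notation.
variable (Conditions GoodEvidence KnowsP : Prop)

example
    (h1 : Conditions)                  -- premise 1: (i)-(iii) hold
    (h2 : Conditions → ¬GoodEvidence)  -- premises 2-3, simplified
    (h4 : ¬GoodEvidence → ¬KnowsP)     -- premise 4
    : ¬KnowsP :=                       -- conclusion 5
  h4 (h2 h1)
```

The skeleton makes it visible that the argument is a simple chain of two implications; all the philosophical action is in whether the premises, with their probabilistic qualifications restored, are true.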
Here’s a version of the second argument:
A. Suppose you’re a philosopher who knows (i)-(iii) are true. This means you have some knowledge of P’s place in the profession: it’s controversial amongst the people who investigate it the best. It also means you have some self-knowledge: you know that you’re not significantly superior to them in knowledge of the topics relevant to P.
B. You’re disposed to know that G is true.
C. You’re easily disposed to know the obvious logical truth that if (i)-(iii) & G are true, then the odds are high that your evidence (justification, grounds, etc.) for P isn’t very good.
D. Hence, you’re disposed to know that the odds are high that your evidence for P isn’t very good.
E. If you’re disposed to know that the odds are high that your evidence for P isn’t very good, then there’s an important sense in which you shouldn’t continue to believe P.
F. Hence, there’s an important sense in which you shouldn’t continue to believe P.
Notice that this second argument doesn’t conclude that the philosopher doesn’t know P. Maybe knowledge is weird in the respect that you can know P but there’s an evidential sense in which you shouldn’t believe it. My second argument is modest in that it doesn’t say anything about that issue.
Neither does my argument conclude that you shouldn’t, evidentially, believe P. It could turn out that there are two senses of ‘evidentially should’, and you should believe P in one evidential sense (because, perhaps, you already know P and haven’t been given any new contrary information) even though you shouldn’t believe P in another evidential sense (because of the awareness of profound disagreement).
Most people really can’t stand that kind of semantic complexity, but if you’re going to do philosophy carefully, and well, you had better get used to it. This is especially true when it comes to highly ambiguous words like ‘should’ and ill-defined & ambiguous words like ‘evidential’.
I know that for a significant number of philosophers, premise (A) isn’t true because they refuse to believe (iii): “I am not significantly superior to the philosophers who disagree with me in knowledge of the topics relevant to P”. Or, they don’t know that G is true and would deny G if given the chance. This is understandable if they’re relatively new to the profession. But after many years? I don’t think so. So, if (A) ends up false for either of those reasons, then in most cases the seasoned philosopher is epistemically blameworthy in sticking with P. Hence, it’s almost the case that whether or not (A) is true, the philosopher is a bit screwed.

