I’ve noticed that there are people on IG who basically try to increase their following by telling us the best AI prompts, such as those for ChatGPT. I started thinking about that task and realized that it connects in an interesting way with something I’ve been writing about: doing greatest-of-all-time rankings in baseball and elsewhere. Bear with me.
The GOAT Actors
If you’re trying to figure out who the GOAT (greatest-of-all-time) actors are, you have to offer some elaboration on what you mean. Keanu Reeves has been in loads of movies that made zillions of dollars. So, if by “GOAT actor” you mean an actor who is great at entertaining moviegoers, then he’s probably amongst the greatest. But if you mean actors who can play characters who display complex, nuanced personalities in sophisticated dramas, then there’s no way he is one of the GOATs. He’s nothing approaching Daniel Day-Lewis.
The term “GOAT actor” has a significantly incomplete meaning. In order to communicate well with it, you usually have to supply some elaboration. It’s unlike a scientific term such as “proton” or “electrical field” in that respect.
It’s helpful to think of an analogy.
New Game: Yardball
Playing Yardball. Some young kids are playing around in their backyard. They are playing a game that has a passing resemblance to soccer. But it’s obviously quite different, and the kids don’t even know what soccer is. One kid is on offense and is trying to kick the ball between the bush and tree in the back of the yard; the other kid is trying to prevent that from happening by standing between the bush and tree and blocking the ball. A player gets one point for a goal that touches the defensive player and goes past him, through the bush-tree line; they get two points if it goes through without touching the defender. The player on offense cannot use their hands; the one on defense can. Each kid gets to be on offense for a vague amount of time and then they switch positions. They call their game “yardball”, since they play it in their yard.
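The kids’ scoring rule amounts to a tiny decision procedure. Here it is as a sketch in Python (the function and parameter names are mine, purely illustrative; the point values come from the rules above):

```python
def yardball_points(crossed_line: bool, touched_defender: bool) -> int:
    """Points for a single kick under the kids' rules:
    0 if the ball never crosses the bush-tree line,
    1 if it crosses after touching the defender,
    2 if it crosses without touching the defender."""
    if not crossed_line:
        return 0
    return 1 if touched_defender else 2
```

Notice what the sketch leaves out: everything the kids haven’t settled, like whether a ball that clips the bush counts, is simply not representable here. That gap is exactly the semantic incompleteness discussed below.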
The rules aren’t fixed, not by a long shot. There are many questions about the game that the kids have never thought of before, wouldn’t immediately know how to answer, would probably answer differently from one another, and would answer differently depending on how you asked them, whether they had had a fight with their older sibling that morning, and how hungry they were.
And yet, they’re playing a game anyway. There’s enough structure and rules to what they’re doing so that claims like “In yardball there are two players and one ball”, “Blake scored two goals in the last couple minutes”, “In yardball the one with the most goals wins”, “In yardball a goal is scored when the ball goes through the bush-tree line” are true while “Lima hasn’t scored yet”, “A player can throw the ball with his hands to make a goal”, and “One player can tackle the other” are false.
The terms “x is a complete game of yardball”, “x is a goal in yardball”, and others fail to have well-defined application conditions. They are works in progress when it comes to semantics, with meanings that are slowly being built up as the kids encounter situations that require them to further develop the rules. The very same semantics-building process occurred over many decades with baseball, as rare situations arose that rule makers hadn’t anticipated and whose outcomes the then-current rules didn’t settle.
After a summer of playing yardball, the kids debate the GOAT achievements and players of their game. They are unanimous in holding that Samantha is the best offensive player, as she scored the most goals and had the highest goals/game ratio—both by a fair margin. Paulo had the greatest streak of scoring at least ten points per game: seven games in a row. No one else managed more than two games consecutively. So, that was a GOAT mark. Brit had an incredible streak of holding opposing players to fewer than four points for five games in a row; no one else managed it even twice all summer. Samantha (of offensive fame) had the GOAT achievement of going six straight games without allowing any two-point goals. No one else managed even two games in a row.
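Each of these GOAT marks is a longest-consecutive-run computation over per-game stats. A minimal sketch, using made-up numbers for Paulo (the data and names are hypothetical, just to show the shape of the calculation):

```python
def longest_streak(games, predicate):
    """Length of the longest consecutive run of games satisfying predicate."""
    best = current = 0
    for g in games:
        current = current + 1 if predicate(g) else 0
        best = max(best, current)
    return best

# Hypothetical per-game point totals for Paulo:
paulo_points = [12, 10, 15, 11, 10, 13, 10, 4, 9]
longest_streak(paulo_points, lambda pts: pts >= 10)  # -> 7
```

The computation is trivial once the stats are fixed; the philosophical action is in deciding what counts as a game, a goal, or an achievement in the first place.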
The project of trying to figure out what yardball really is, or to pin down the exact conditions for “GOAT achievement in yardball”, is misguided. You can try to figure out what a proton really is, and many physicists have been working on that fruitful project for a century now. That’s because, to put it roughly, there’s a real phenomenon out there in nature that they are trying to figure out. When it comes to “yardball”, that’s not the case. The conceptions of the children figure in fixing the semantics, but their conceptions are not up to the task of providing exact meanings.
So what does this have to do with AI prompts?
Often enough, asking an AI tool a question is straightforward. You want to know how Beethoven could write music while deaf, you pose your question, and you get an informative response. Simple.
Other times, it takes some clever work to figure out how to ask the AI tool in such a way that you get a helpful answer. Various people have come up with cool suggestions for optimizing questions so you get the info you really want. It’s not nearly as simple as the Beethoven example, but it’s not impossible.
The more philosophically interesting cases occur when your question is so incomplete that it’s not really possible to get an accurate answer. No one can answer a question that is as nebulous as a fart.
Incomplete Questions and AI Tools
Storytellers. For instance, suppose you want to know who the greatest storyteller was. Well, who counts as a storyteller, according to you? Maybe you were thinking of Stephen King when you thought of your question. Okay, but does Shakespeare count as a storyteller, according to your query? What about Homer? And what does it mean to be “great” at it? Artistic merit? Degree of originality? Amount of influence on other storytellers? Popularity during the author’s lifetime? Popularity after? And how do you even measure any of those things?
Pop Stars. Or suppose you want to know the most successful pop star ever. I think Benny Goodman and Glenn Miller were pop stars in their day; do they count on your conception of pop star? Hell, I think Mozart was a pop star, if we’re willing to go back that far in time. Separately from that issue, what counts as success?
Romantic Dates. Chris and Alex are dating each other. Alex has a more generous conception of dating than Chris does. If Chris and Alex give different responses, “3” and “6”, to the prompt “How many dates did you two go on in the last two months?”, neither one is wrong, as the semantics of “date” can shift with contextual factors, and the conceptions the speakers hold can, in some conversational contexts, count as relevant contextual factors. That is, when figuring out what counts as a “date” when Chris uses the term, you usually have to think about his conception of dating. Asking an AI tool, “What is a romantic date?” can’t get you an accurate answer, because your question is so radically underspecified.
Free Will & Determinism. I’ve encountered many people who are captivated by the “Does determinism rule out free will?” question. I know it’s a pain in the ass, but there’s no answer to that question. Anyone who gives you one is either an amateur or is offering a gross simplification, period. The free will/determinism question is wildly ambiguous, and some disambiguations of it come out true while others come out false.
Most of us don’t have anything terribly specific in mind when we ask such questions. The question is radically incomplete, or underspecified, or however you want to put it. It’s as nebulous as a fart.
In those cases, AI can, at best, make suggestions as to how you might want to elaborate on your query. And that’s great. But sometimes AI doesn’t have access to material that could do the disambiguating.
For instance, suppose you want to know what it takes to make sure your beliefs are good ones. That sounds admirable, right?
But there are many different things you might mean by “good belief”. Here are just eight of them:
Evidentially reasonable belief: one based on excellent overall evidence.
Evidentially inferred belief: one that was arrived at via an evidentially sound inference.
True belief: one that’s just plain true.
Socially accepted belief: one that is commonly held in some relevant community.
Practically useful belief: one that helps you in your life.
Emotionally comforting belief: one that brings you emotional comfort.
Knowledge: one that amounts to knowledge.
Highly confident belief: one that is held with a high degree of confidence compared to others.
For the most part, AI tools don’t have these ideas at their disposal.
*****
So, what’s the lesson of all this?
I am not sure. I guess it’s something like this: in many interesting cases, it’s impossible to get the right answer from any AI tool. But this impossibility isn’t due to anything wrong with the tool. The fault lies in our nebulous questions. Our minds are so underspecified that not even God could tell us the answer to our question.