# of-ai
n
Hi all, Have any of you run into questions where large AI models are missing crucial conceptual knowledge and are also unable to find it using Web search as a tool? In other words, what are some examples of the blind spots of "AI + public Internet"? I really mean CONCEPTUAL knowledge, i.e. HOW things work in the world, not mere factoids or events. It will likely be super-niche, or some nuance that has not been discussed on the Web and is therefore missing from the training data.
w
Questions where most of the corpus answers a slightly but materially different question. Off the top of my head: years ago now, we noticed that anything resembling the Monty Hall problem was assumed to be an instance of it. Reasoning models have probably fixed that one. The other week, though, I ran into a little trouble asking demographic questions about how many children families have in different places, which was generally interpreted as basic fertility. Of course a little clarification (sometimes in a fresh conversation) goes a long way.

Let's see... here's a good one. Compare asking how many siblings people in a place have vs. asking, if a woman has one child, how many other children she has on average. Actually, this makes for a good trick question: "Suppose we have demographic data telling us how many siblings each person in the sample has. What can we then say about how many children a given woman has?"
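The catch, in case it's not obvious, is the inspection paradox: sibling counts are sampled per child, so a family with k children gets counted k times, and large families are over-represented relative to a per-mother sample. Here's a minimal Python sketch of why the two averages diverge; the family-size distribution is made up purely for the demo:

```python
import random
from statistics import mean

random.seed(0)

# Assumed (made-up) distribution of children per mother.
family_sizes = [random.choice([1, 1, 2, 2, 2, 3, 4, 6]) for _ in range(100_000)]

# Per-mother view: average number of children a woman has.
mean_children_per_mother = mean(family_sizes)

# Per-child view: ask each child how many siblings they have.
# A family with k children contributes k answers of (k - 1).
sibling_answers = [k - 1 for k in family_sizes for _ in range(k)]
mean_siblings_per_child = mean(sibling_answers)

print(f"mean children per mother:    {mean_children_per_mother:.2f}")
print(f"mean siblings per child + 1: {mean_siblings_per_child + 1:.2f}")
# The second number is strictly larger whenever family size varies,
# so sibling data alone can't be read off as children-per-woman.
```

With these numbers the per-mother mean is about 2.6 children, while "mean siblings + 1" comes out around 3.6, and the gap grows with the variance of family size. That's exactly the kind of conceptual step a model trained on fertility-rate discussions tends to skip.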
Oh... Noticed this in Claude's system prompt... "If Claude is shown a classic puzzle, before proceeding, it quotes every constraint or premise from the person’s message word for word before inside quotation marks to confirm it’s not dealing with a new variant."