jonathoda
03/30/2023, 11:21 PM

Srini K
03/31/2023, 1:04 AM

Duncan Cragg
03/31/2023, 11:07 AM

Jimmy Miller
03/31/2023, 5:16 PM

Joshua Horowitz
04/01/2023, 2:10 AM

Jimmy Miller
04/01/2023, 2:31 AM
> Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind. It can't have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that. This can seem counter-intuitive given the increasingly fluent qualities of automatically generated text, but we have to account for the fact that our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence and our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do [89, 140]. The problem is, if one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language (independent of the model).

I'm not saying I necessarily agree with everything here. Just that this does seem to be the argument they are making. The deepity point was a bit off imo. From the article:
> The other meaning is the one that suggests that ChatGPT has no understanding of communicative intent, so when you ask it a question, it can only respond correctly in limited cases where it has seen the question, or else give awkward ill-fitting answers. But in this sense, ChatGPT is obviously not a stochastic parrot. You can ask it all sorts of subtle things, questions it has never seen before and which cannot be answered without understanding.

Then there are some transcripts demonstrating understanding of "communicative intent". But that misunderstands the point the paper was making; it doesn't refute it. Of course GPT can talk about communicative intent. The point the paper makes is that being able to talk about communicative intent doesn't entail that the model has communicative intent, or that it attends to ours.
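To make the mechanism under dispute concrete, here is a toy sketch: a bigram model in Python, vastly simpler than anything the paper discusses, with an invented miniature corpus. It shows what generation-as-sampling looks like: each next token is drawn purely from co-occurrence counts, with nothing anywhere representing a listener, a world, or an intent.

```python
# Toy illustration (not from the paper): a bigram "language model" that
# generates text purely by sampling from next-token counts. The corpus
# is invented for the example.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": tally which token follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(token: str) -> str:
    # Draw the next token in proportion to how often it followed `token`.
    counts = follows[token]
    return random.choices(list(counts), weights=list(counts.values()))[0]

token, output = "the", ["the"]
for _ in range(10):
    token = sample_next(token)
    output.append(token)

# Fluent-looking output, produced with no model of meaning or intent.
print(" ".join(output))
```

Whether ChatGPT is still "just" this at enormous scale is, of course, exactly what's being argued about.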
wtaysom
04/03/2023, 8:51 AM
> [Text generated by an LM] can't have been [grounded in communicative intent], because the training data never included sharing thoughts with a listener

Does this ring at all true to you guys? I think of "the training data" as being things people have written online, which almost always involves "sharing thoughts with a listener." I take the fact that ChatGPT gives sensible answers in a way that previous GPTs did not as evidence that the model can at least preserve intent, and perhaps recombine it so as to generate new intent.
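For what it's worth, the chat-tuned models are also fine-tuned on data that is explicitly dialogue-shaped. A hypothetical record might look something like the sketch below; the "messages"/"role"/"content" field names are illustrative, not any particular dataset's schema.

```python
# Hypothetical instruction-tuning record (field names illustrative, not a
# specific dataset's schema): the data itself is framed as one party
# addressing another.
example = {
    "messages": [
        {"role": "user",
         "content": "Can you explain what a closure is?"},
        {"role": "assistant",
         "content": "A closure is a function that captures variables "
                    "from the scope in which it was defined..."},
    ]
}

# Fine-tuning optimizes the model to produce the assistant turn given the
# user turn, i.e. to continue text structured as a reply to a listener.
```

Whether training on records like this counts as having "shared thoughts with a listener" is the point in dispute.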
Jimmy Miller
04/03/2023, 12:38 PM

Duncan Cragg
04/03/2023, 12:55 PM

Jimmy Miller
04/03/2023, 12:58 PM

Duncan Cragg
04/03/2023, 1:02 PM

Jimmy Miller
04/03/2023, 1:16 PM

Duncan Cragg
04/03/2023, 4:18 PM

Jimmy Miller
04/04/2023, 1:29 AM

wtaysom
04/04/2023, 8:45 AM

Duncan Cragg
04/04/2023, 8:54 AM