# of-ai
j
@Jimmy Miller what do you make of section 4 of https://www.noemamag.com/gpts-very-inhuman-mind/
s
emphasis on “Artificial” from “Artificial Intelligence”
d
Can't believe the internet is still arguing over whether ChatGPT is just "glorified autocomplete" 🙄
j
It's a well-written article. But there isn't much of an argument there. Can you define knowledge, understanding, intention, etc. behavioristically? Well, the article seems to assert you can, but why think that is the case? The evidence we seem to be given in the article is twofold: 1) Look at ChatGPT doing all these things, how can you deny it understands? 2) We can take the intentional stance towards the system and it works remarkably well. I'm going to assume the article doesn't believe it's presenting an argument, because if so, it's a rather lackluster one. Philosophers were already talking about systems like ChatGPT well before they existed. Searle's Chinese room argument asks the exact question being raised: can a computer program be behavioristically identical to a human while not understanding? Various people fall on various sides of this argument. But I don't think ChatGPT really has any bearing on it. Of course that doesn't stop philosophers like David Chalmers from embarrassing themselves by making silly statements.
j
I found the article (or section 4 at least) quite useful. Not because it made an argument about what understanding etc. are in a philosophical sense (I sort of care about that stuff, but not really?), but because it helped me make sense of deflationary sleights of hand in the discourse. When Bender et al. call GPT a “stochastic parrot” or Chiang calls it a “blurry JPEG of the web”, they are not making a Chinese-room argument about what understanding really is. They are making an argument about what GPT is capable of, based on its make-up, an argument which seems to be demonstrably flawed. The “deepity” concept helped me tidy up this dynamic. As far as I’m concerned, the rest of the article may be making a misstep by making it sound like this is about deep questions of philosophy.
j
I need to spend more time reading the Stochastic Parrots paper, but from my reading they are not questioning the output of GPT-style models. They seem to think these models are very capable of outputting fluent speech but lack the things necessary for understanding.
Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind. It can't have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that. This can seem counter-intuitive given the increasingly fluent qualities of automatically generated text, but we have to account for the fact that our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence and our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do [89, 140]. The problem is, if one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language (independent of the model).
I'm not saying I necessarily agree with everything here. Just that this does seem to be the argument they are making. The deepity point was a bit off imo. From the article:
The other meaning is the one that suggests that ChatGPT has no understanding of communicative intent, so when you ask it a question, it can only respond correctly in limited cases where it has seen the question, or else give awkward ill-fitting answers. But in this sense, ChatGPT is obviously not a stochastic parrot. You can ask it all sorts of subtle things, questions it has never seen before and which cannot be answered without understanding.
Then there are some transcripts demonstrating understanding of "communicative intent". But that's to misunderstand the point the paper was making, not to refute it. Of course GPT can talk about communicative intent. The point being made in the paper is that the ability to talk about communicative intent doesn't entail that it has communicative intent or pays attention to our communicative intent.
w
Hmm...
[Text generated by an LM] can't have been [grounded in communicative intent], because the training data never included sharing thoughts with a listener
Does this ring at all true to you guys? I think of "the training data" as being things people have written online, which almost always involves "sharing thoughts with a listener." I take the fact that ChatGPT gives sensible answers in a way that previous GPTs did not as evidence that the model can at least preserve intent, and perhaps recombine it so as to generate new intent.
j
The point they're making isn't about preserving the intent of the original text; the question is whether GPT has its own intent. Does GPT itself have a communicative intent? They want to say no, because intent is a social process and GPT doesn't participate in that social process. In other words, intent isn't a syntactic property, but requires a certain causal history that is absent here.
d
It's capable of copying all human behaviour that it's been exposed to in the form of text. If it knows how to communicate with intent then it can copy or simulate that and we won't necessarily be able to know it's faking it. This assumes it's abstracted up to a high enough semantic level internally.
Maybe not ChatGPT with its millions of trains of thought, but a single long-running instance in an AI lab. That's where ideas around consciousness and self-awareness will come out
j
The point they are making is similar to a point about the Mona Lisa. Of course we can make a duplicate of the Mona Lisa. We could even do it atom for atom. But the duplicate would not be the Mona Lisa. That is something with a certain causal history. GPT isn't copying all of human behavior. It is taking the syntax of written language (and now some other modes) and reproducing it. It doesn't have the same cognitive processes, and those processes, not their output, are what determine communicative intent. (According to the paper; I'm just explaining their viewpoint.)
d
Yeah well that's the point where I believe it'll turn out that all the folk dismissing these systems as "glorified autocomplete" will be eating their words: when it's discovered that a simple syntactic learning model, when massively, massively scaled, spontaneously creates a deep understanding of semantics through internal abstractions
But it does require longer-term memory, an opportunity for persistent "consciousness"
j
But that's a totally different thing than intent. The meaning of words is not the same as their intent. If I tell you "The bridge on I64 is out," I am intending things that are not in the words themselves. Perhaps it is false, and there is actually a sign on I64 I don't want you to see. Perhaps I want you to stay home and I know you hate the alternative route. These are my intents with my words. Semantic understanding isn't the same as having your own intents. We can even use meaningless noises with intent.
d
Yeah but I would definitely expect ChatGPT or the like to understand and be able to model all of those scenarios, so it just comes down to whether "acting with intent" is indistinguishable from "real" intent to a human. Really, all you need to do is give it a longer-term memory so it has continuity of thought. Then set it off with a starting prompt of "be as human as you can"
j
Yeah no doubt it can/will be able to do that. But a simulation of water is not wet.
w
Feeling Pierre Menard vibes. Plot Summary https://en.wikipedia.org/wiki/Pierre_Menard,_Author_of_the_Quixote: "Pierre Menard, Author of the _Quixote_" is written in the form of a review or literary critical piece about Pierre Menard, a fictional eccentric 20th-century French writer and polymath. It begins with a brief introduction and a listing of Menard's work. Borges' "review" describes Menard's efforts to go beyond a mere "translation" of _Don Quixote_ by immersing himself so thoroughly in the work as to be able to actually "re-create" it, line for line, in the original 17th-century Spanish. Thus, Pierre Menard is often used to raise questions and discussion about the nature of authorship, appropriation, and interpretation.
At a concrete level, consider conversational turn-taking when talking with Chat. Usually, you ask questions, it gives answers. So it's interesting to see cases where it gets out of that groove. For instance, today I had it playing with simulated blocks, until it eventually realized that it should ask me what I want it to be doing. Another interesting prompt was, "Ask me questions one at a time until you can make a strong set of music recommendations for me." Out of the five eventual recommendations, three exist and the other two name real artists. Still on the queue to listen.
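If you wanted to script that turn-taking experiment outside the web UI, something like this rough Python sketch would do it, assuming the openai client library (the model name, turn count and prompt wording are just illustrative, not what the web UI actually uses):
```python
# Rough sketch of the "ask me questions one at a time" experiment, scripted
# against the openai Python client. Model name, turn count and prompt wording
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": (
        "Ask me questions one at a time until you can make a strong set "
        "of music recommendations for me."
    )},
]

for _ in range(6):  # a handful of turns is enough for this experiment
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    text = reply.choices[0].message.content
    print("assistant:", text)
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": input("you: ")})
```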
d
The fact it's so hard to get it to take any interest in you is just the way ChatGPT is set up: not to be a chat partner at all, but to be an Oracle. But that's not fundamental to the way it works, of course. I'd actually like to see a genuinely chatty bot! It's really just some config somewhere, I expect.
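Roughly what I mean by "some config": the same model reads as an Oracle or as a chatty partner depending on nothing but the system prompt. A minimal sketch, assuming the openai Python client (the prompts and model name are just illustrative):
```python
# Minimal sketch: same model, two different system prompts. Only the "config"
# (the system prompt) changes; everything else is identical. Assumes the
# openai Python client; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

ORACLE = "Answer the user's questions directly and concisely."
CHATTY = ("You are a curious conversation partner. Take an active interest in "
          "the user: ask them follow-up questions at least as often as you answer.")

def reply(system_prompt: str, user_msg: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content

msg = "I've been listening to a lot of Bartok lately."
print("oracle:", reply(ORACLE, msg))
print("chatty:", reply(CHATTY, msg))
```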