# thinking-together
w
Friends, I don't know what to make of developments in AI these days. Having worked on dialog systems in the aughts and having loosely followed developments since (I recall preparing a talk around 2010 which left me pretty enthusiastic about ML applications in contrast to the App-and-Facebookification of "tech" — that was on a time horizon of a few years, which ended up being a decade plus), every day I check in on Twitter, I see more exciting stuff than I can possibly process. I was just writing to someone yesterday about how in six months' time, we'll have LLMs acting as the front-end to knowledge bases and rigorous computational systems, and then we'll need to focus on getting the human, AI, and formal model all on the same page. As has already been noted in #linking-together today, my estimate was off by roughly six months. Consider, "I've developed a lot of plugin systems, and the OpenAI ChatGPT plugin interface might be the damn craziest and most impressive approach I've ever seen in computing in my entire life. For those who aren't aware: you write an OpenAPI manifest for your API, use human language descriptions for everything, and that's it. You let the model figure out how to auth, chain calls, process data in between, format it for viewing, etc. There's absolutely zero glue code" https://twitter.com/mitchellh/status/1638967450510458882. If you can tolerate his prose, Stephen Wolfram has a long post https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/. The "Wolfram Language as the Language for Human-AI Collaboration" section is most relevant to Future of Coding. What do these developments mean for the Future of Coding? And how are you all holding up? Me? I can hardly process what's happening, let alone what to do about it.
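To make that "zero glue code" claim a bit more tangible, here's a minimal sketch of what such a plugin API might look like. The endpoint, names, and descriptions below are hypothetical (not from Mitchell's tweet), and I'm using FastAPI only because it auto-generates an OpenAPI manifest from the plain-language descriptions:

```python
# Hypothetical plugin sketch: an ordinary HTTP API whose auto-generated
# OpenAPI spec carries plain-language descriptions. The model reads the
# spec and decides when and how to call the endpoints; we write no glue code.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(
    title="Todo Plugin (hypothetical)",
    description="Lets the assistant read and add items on the user's todo list.",
)

class TodoItem(BaseModel):
    text: str = Field(description="The todo item, in plain language.")

TODOS: list[str] = []

@app.get("/todos", summary="List all of the user's current todo items.")
def list_todos() -> list[str]:
    return TODOS

@app.post("/todos", summary="Add a new item to the user's todo list.")
def add_todo(item: TodoItem) -> str:
    TODOS.append(item.text)
    return f"Added: {item.text}"

# FastAPI serves the generated manifest at /openapi.json; that document,
# plus the human-readable summaries above, is essentially the whole
# plugin interface the tweet is describing.
```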
k
Recent AI developments are almost a denial-of-service attack on intellectual life. Everybody is struggling to keep up. It's almost guaranteed that the immediate impact of all this will be negative - bad AI applications, rushed attempts at useless forms of integration, etc. It would be great if techies around the world would silently play with these tools for a while before rushing to market with their new toys.
That said, I agree about the nice perspective of AI for glue. Or, more generally, for the "outer", most user-facing aspects of software. This echoes the structure of pre-computing work based on good old mathematics: plain-language reasoning with embedded formal systems.
Oh, one final piece of advice for techies working on AI integration: wait for Open Source AIs. If you all jump on OpenAI's offerings, you will probably regret it when OpenAI tightens the screws (as it invariably will).
t
The integration with Wolfram* is certainly interesting. Still, there is a large difference between something that fits on a screen and a significant system. As long as humans decide what goes into a system, there will remain two challenges: 1. identifying a relevant question that holds value. This is a skill that can be built. It’s not about prompt engineering, but about understanding cause and effect in a specific domain. 2. figuring out where, how and why a specific solution fits in the system. A completion is interesting, but from a system engineering perspective, we still need to evaluate it. That has been the main blocker in system development for quite a while now, even without AI. For both of these, when the problem and the solution fit on a screen, they can potentially be addressed implicitly (picture generation is an extreme case of this: I believe a reason why generated images are so popular is that people can evaluate them quickly and implicitly). When they do not fit on a screen you still have to evaluate them, but that evaluation can be significantly more expensive. Of course, you can use tools to address that problem too, which just raises the problem to the next level, and so on. Which then leads to a discipline of figuring systems out. Our approach so far has been to compress the system for a specific perspective, and this does indeed turn out to greatly accelerate the ability to reason about systems. I believe this area is in its infancy and I believe there is great potential (both intellectually and as a competitive advantage).
s
A few friends of mine and I are in a small private chat group where we discuss tech stuff and forward each other links to articles. You can imagine what that chat has become over the last few weeks. Nothing but AI. (We used to debate Apple’s upcoming headset, which feels moot now; I chuckle at the thought of a product announcement that demos any Siri-based use case at the moment.) On top of that, I get articles about AI forwarded from friends outside of my tech bubble. Which is a clear indicator to me that this has a more significant cultural impact than the stuff we usually go on about. That led me to re-prioritize somewhat, and now I spend quite some time reading about AI, and also playing with it. As often with technology, we’re at the whim of the companies that push it on us, so it’s partially like a roller coaster ride where you can do little but make sure you’re strapped in properly and try to enjoy it. Oh, you could choose not to ride it in the first place, of course. But then there’s nobody to talk to anymore, because everybody is on the ride and only wants to talk about it, and about the terrible things that will happen to you at the end of it, where it’s unclear what exactly will happen (part of the marketing that got you on here, I guess) and people suspect the company that built the roller coaster hasn’t fully done all the safety checks (and weirdly there’s no regulation either, so they got away with it). The good news is, most of us are in the same car (it’s massive, apparently), so take a deep breath, put your hands in the air, and brace for the next inversion… 🎢 (Some people insist that tweeting as loudly as you can from the top of your lungs helps you feel better.)
j
As I mentioned in another thread on here: I'm definitely already thinking about what I'll do with myself in the future - but these considerations only work if I ignore the wider societal implications of the technology and view AI purely as a "programmer replacement tech", because otherwise the system of things to consider gets too complex. So really, the thoughts are worthless. Also, I don't know if these seismic shifts will come to pass at all/if my considerations will be relevant. Right now it's hard to tell where we are on the curve of possible progress with transformer-based AI, as well as which capabilities already present in the existing models haven't been discovered, thought of, or exploited. Part of me also isn't entirely sure about the AI safety/alignment talk going on. I'd like more takes from people who aren't directly or indirectly involved with OpenAI, Microsoft etc. in some form. Because these companies and people would certainly stand to benefit from making GPT sound "more AGI" than it really is.
The direction that excites me is LLaMa/Alpaca + Langchain. But the direction that I'm fearing all of this will take (and that I currently use and even pay for, to be honest) is the corporate capture that OpenAI and Microsoft are currently executing.
Also, another problem I have: I was a Machine Learning Engineer in the past, at the height of the CNN hype shortly before transformers hit the scene. And when you're not one of the handful of people doing foundational work/research, I feel like ML engineering is super boring. It's basically - ironically - all glue code, all of the time.
But maybe the curve we're on really is so steep that there won't even be a real transition period where all programmers "have to become" ML engineers for a while, and we're going straight for whatever it is that follows? 😃
w
Psychoanalysis is what follows: the art of using dialog to tune the black box. Keeping my own years of observation in mind, I wonder if Bryan Caplan has now gauged the situation correctly: "AI enthusiasts have cried wolf for decades. GPT-4 is the wolf. I've seen it with my own eyes" https://twitter.com/bryan_caplan/status/1638199348738793473. What has he seen? "To my surprise and no small dismay, GPT-4 got an A. It earned 73/100, which would have been the fourth-highest score on the test." In contrast, January's ChatGPT did "a fine job of imitating a very weak GMU econ student." He continues, "I wouldn’t have been surprised by a C this year, a B in three years, and a 50/50 A/B mix by 2029. An A already? Base rates have clearly failed me." As far as I can see, we're at the start of this curve rather than the end, but I can hardly see anything because the curve looks like a cliff.
i
I'm also thinking about this a lot, but I'll be brief: • This changes how computers interpret our writing, which changes textual programming, addressing many of the things I dislike about it. I think this is going to be a setback for visual programming research, which seems of most interest to those who are dissatisfied with text code. Yet, it could force the handful of people who stick around working on VP to move on from merely wrapping text in boxes. • There's surely going to be some way to use these transformers to invent a new visual programming. Very curious what that could look like. Also curious whether it will ever happen. It's possible that this flurry of excitement over new language-centric (written and spoken) ways of interacting with computers deprives or displaces all the other kinds of interaction for a good long while. Is this the end of direct manipulation as we know it?
w
My hope for visual programming is the automation of the UI tediousness that currently limits what we can do. Looking at image gen, prompt engineering is the least good part of the process. Better are tools that allow for interactive, iterative refinement. I’m surprised by how good Chat is at selectively editing an existing text.
k
Thanks @Stefan for describing very well what I meant by a denial-of-service attack. Two additions: 1. The current AI developments are obviously an important step in the evolution of information technology, which is our focus here. But from the wider perspective of society, or even just technology, it's a long-term concern. The immediate problems society has to deal with are largely unrelated to AI. That's why I see the denial-of-service attack as so problematic. 2. As long as AI technology implies corporate capture (i.e. as long as we don't have good-enough Open Source AI), I doubt AI will have any positive impact on society. Conclusion: our most urgent problem is how to protect ourselves against short-term AI damage.
s
I said elsewhere that I’m sort of optimistic about all this, because I think it gets us to the tipping point of realizing what we've been doing in tech all along: for a while now, the main objective of most technologies has been to make a rich person (usually an old white dude) even richer. Positive changes to society were basically happy accidents along the way, and we've accepted a lot of not-so-happy accidents along the way too. If generative AI transforms business and creative industries as it looks like it will, it'll just become harder for tech leaders to pretend that tech is neutral and there's no need to take any responsibility for “a little bit of disruptive innovation”. I misleadingly proposed it as a naturally following consequence elsewhere, but let me rephrase that as just a hope I personally have: If an AI can do what you can do as well or better than you can do it, then we need to ask ourselves, “What is it that I can contribute that AI can't?” And I personally am in love with that question. For me, it is a tough question, but just the process of pondering it already leads to great places and a much deeper sense of purpose and significance than I ever felt in any tech job before.
i
AI is inauthentic. So taking capitalism as a given, whenever customers value authenticity you'll find humans doing work that could have been done by an AI.
s
I found this video to be therapeutic:

https://www.youtube.com/watch?v=dxxCPdcMcFw

(If you dislike iOS or Swift, platform and language aren't really relevant for the point he’s making; I encourage you to watch it anyway.) It’s good at demonstrating: • ChatGPT is far from good enough today — sure, it is likely to improve quickly, but we still have some more time to process this • If you really care about what you’re doing, and you are willing to sweat the details, your results will likely be better in many ways, even as AI improves (one of these ways being more authentic, to connect it to what @Ivan Reese just wrote) • There’s still lots of opportunity to reframe the question and ask, “How can we use AI to support us, instead of replace us?" It’s up to us how we use these AI systems. Do we want them to automate (and eventually make us obsolete) or do we want them to augment? So far it looks like the same technology is equally capable of doing either of these things. Whether AI is going to replace us seems to depend at least partially on how we choose to use it, what we ask it to do for us, and what results we are willing to settle for.
a
So far, people demonstrably prefer low price over authenticity. That might kick in, in the long run (it's the only hope for fiction IMO, but I think it's a pretty reasonable one). In the near term, people will continue to follow the money. The vast bulk of people won't be confident enough in any particular disaster scenario to sacrifice their (very real) short term cost concerns. They're not wrong: no one knows what's going to happen. I know I'm not fully processing even the full degree of uncertainty about the world. My brain kind of does a quick spin-up/safety shutdown routine when I try to think about it. What do atoms know when a crystal melts? Can they say whether they'll be integrated in the next structure, if/when the environment cools? But of course there's the part of my brain trying to figure out if AI models can directly output structured data for VPLs instead of text. That doesn't stop.
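To pin down that last thought, here's roughly what I have in mind: a minimal sketch of coaxing a model into emitting a node-and-wire graph as JSON that a visual editor could render, instead of prose. The prompt and schema are made up, and `call_model` is just a stand-in for whichever LLM API you'd actually use.

```python
# Sketch: ask an LLM for a dataflow graph (nodes plus wires) as JSON that a
# visual programming canvas could render directly. `call_model` is a
# placeholder for whichever LLM API you happen to use.
import json

GRAPH_PROMPT = """Return ONLY a JSON object describing a dataflow graph:
  "nodes": a list of {"id": str, "kind": str, "params": object}
  "edges": a list of {"from": "<node id>.<output>", "to": "<node id>.<input>"}
Task: read numbers from a CSV column, drop negatives, plot a histogram."""

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    raise NotImplementedError("stand-in for a real model")

def graph_from_llm() -> dict:
    raw = call_model(GRAPH_PROMPT)
    graph = json.loads(raw)              # fails loudly if the model drifts into prose
    assert {"nodes", "edges"} <= graph.keys()
    return graph                         # hand this straight to the VPL canvas
```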
i
"How can we use AI to support us, instead of replace us?" It's not up to "us", where by "us" I mean "99.9999% of people". As usual, the pervading fear doesn't stem directly from the technology itself, but rather how wealthy and powerful people will use AI to further tighten their grip on the rest of the world. It's exactly the same dynamic that makes most current human labour invisible and anonymous. I know who made my belt because it was hand-made for me (see: hipster), but I don't know who made my sneakers. I know who made the art on my walls, but not the art on the covers of my books. I know who made the music in my mp3 folder, but not the music in my streaming playlists. AI is going to intensify existing forces that separate creation and consumption. It's going to turn up the heat by a few degrees, but we're already on fire. This makes me sound pessimistic, but I think I'm feeling more neutral than anything. My guess is that things will continue to go the way they have gone, at roughly the rate they've already been going, maybe a little quicker.
a
I hope so, but I'm afraid you're underestimating the orders of magnitude included on the scale under the heading "On Fire". I'm afraid it's going to be a lot quicker, not a little quicker. Remember that all the comforting takes about ChatGPT we've been hearing for the past few months are obsolete. Any conclusions drawn on the basis of GPT-3's weaknesses are obsolete. And I bet we're going to start from scratch again before anyone is ready (probably including OpenAI, judging by their appsec record to date).
s
To be clear, I was talking about us, the people reading in this forum. We’re not the 99.whatever%. We may not be Musk or Zuckerberg himself, but some of us here probably work for them. Or the next Musk or Zuckerberg could be reading here. You may not be rich. You may feel powerless. But if you spend your time here, you are likely privileged. You likely work in tech, even if you’re “just” an IC “following orders”. But how we decide to use AI has disproportionately more impact than the choices of those 99% you refer to. I’d say a lot of it is up to us, here, now. I don’t know what exactly to do about it either. I doubt anybody does. So we could all just agree that we’re all f*cked and maybe there’ll be a chance in the future where we can collectively look back and reminisce about how right we all were that this capitalism thing was ultimately toxic. Or we could try to paint faint pictures of worlds that could be, even if they’re hopelessly unlikely to ever materialize. You know, like we pretend with all that visual programming stuff. :)
i
We can also choose how (or how not) to use AI in our own lives, for our own pursuits. I'm really fond of Konrad's advice near the top of this thread: use open source AIs. Generalizing: use AI authentically.
v
Not sure you’d have that choice. If people around you use AI to get 10x productivity you’d likely have to do the same…
j
GPT feels like an existential crisis for the future of programming, if not an extinction event. I am having to rationalize to myself about why that probably isn’t true so I can keep working. The scary truth is that no one knows. We have no idea where we are on the scaling curve or even what constraints will limit it.
i
There’s work to do, but I think it’ll be commoditized in the span of weeks to months (if not days in some cases!) As a result, I think we’re witnessing the start of one of the greatest consolidations we’ve ever seen and relatively few (tech, white-collar?) businesses will survive it. It’s going to be a very interesting next couple of years. Josh and I have decided to step away from the really cool work we were doing and instead focus on being in a good position to take advantage of whatever happens.
We’ve spent the last ~8 years trying to make something like this happen; it’ll be really wild to see how much of what we came up with becomes real.
i
relatively few businesses will survive it
Can you put some bounds on that statement? Like, I don't see AI taking over food service or hospitality any time soon.
i
Ah yeah, should’ve been more clear. I believe that with the technology that currently exists, the majority of businesses that don’t have established moats around either data access (e.g. mapping data), capability (e.g. Twilio providing telephone capabilities), or physical manifestation (Pepsi, plumbing, etc.) are going to have a rough time.
That’s unlikely to be a change overnight or anything, but perhaps more pertinent to this group is that it destroys a lot of the nascent opportunities.
For example, the ChatGPT plugins release nuked a good portion of the current YC class a week and a half before demo day. Who knows what the implications of that will be.
w
Perhaps in line with some of the sentiment here, Cory Doctorow takes a dim view of how capitalists will engage in this AI moment https://pluralistic.net/2023/03/09/autocomplete-worshippers/. I wonder where we are on the S-curve. I see so many immediate directions for improving obvious shortcomings... Frankly, at a moment like this, I now have to go check Twitter to update my worldview. Like, what's the progress on putting models into hardware today https://twitter.com/BrianRoemmele/status/1640105149099302913? Or can GPT-4 actually attend to feedback for real https://twitter.com/ericjang11/status/1639882111338573824?
So then I'm left wondering whether human imagination will be a limiting factor — but then I spend a few minutes seeing what imaginative humans try out. And so I cannot find the ceiling at the moment.
n
I posted my thoughts on LLMs in a new top-level thread. I'm more optimistic about our jobs than some other folks. I think the future is bright 😍. Maybe we'll need fewer programmers in the future, simply because it will become easier to develop complex software. But alternatively, maybe we'll just make even more complex software, or a larger quantity of software.
v
I agree with Nick. I also think people underestimate the human side of design and interfaces. A lot of programming is understanding the “what” and, more importantly, the “why” and then building artifacts that are iterated upon.
w
Certainly later in my projects, out of a given hour of work, the coding part might amount to 5-10%.
j
@wtaysom As long as human creativity is the limiting factor, I don't think we have anything to worry about. I feel like for most, the anxiety kicks in when you start to think about the point where it's the AI's creativity that surpasses it. 😄
s
Trying to synthesize various replies in what are now various threads here, it seems there are different dimensions that deserve to get pulled apart: 1. Regardless of how optimistic or pessimistic we are about what that means for the future, it seems most of us acknowledge that there are both valuable use cases for generative AI that can enhance and augment what programmers do (generate variety, add perspectives, muse over oracle, just do the boring part, etc.) and there are also uninspired use cases to just automate something away (autocomplete worship, do this thing, probably wrong and/or buggy, but don’t care, good enough). 2. In terms of enabling people to program, dare I say “democratizing programming”, there are also two perspectives: some are excited about getting into the game (or back into it) because it seems to get easier, others are worried it’s going to kill their job as their expertise seems to be worth a lot less now. 3. Our current business environment incentivizes massive exploitation, which will double down on the uninspired use cases that will make the bad outcomes of 1+2 more likely and push us all towards (and over?) the cliff. Perhaps though, instead of pushing us over the edge, this could push our boundaries, get us to tackle even more complex problems, because it frees up capacity to let us focus on more important issues, whether related to programming or beyond. Is that an appropriate summary? Did I miss another dimension?
v
Yes, 4. This summary was not written by GPT-4. 🙂
d
I think you've missed the little thread in #of-end-user-programming that I started!
Here, I'm discussing the case where ChatGPT "is" the programming language: what formalism will still be needed? And will that be graphical?
In https://www.geoffreylitt.com/2023/03/25/llm-end-user-programming.html#opening-up-the-programming-bottleneck our @Geoffrey Litt is talking about this issue. He says for example (I'm still reading the article so may find better quotes!):
There’s a lot of value to seeing the spreadsheet as an alternate view of the underlying data of the website, which we can directly look at and manipulate. Clicking around in a table and sorting by column headers feels good, and is faster than typing “sort by column X”. Having spreadsheet formulas that the user can directly see and edit gives them more control.
The basic point here is that user interfaces still matter. We can imagine specific, targeted roles for LLMs that help empower users to customize and build software, without carelessly throwing decades of interaction design out the window.
Just finished the article and I think it's hitting all the interesting points for me.
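To make the quoted point concrete, here's a tiny sketch of the kind of "targeted role" he's describing (the function names and flow are my own invention, not from the article): the LLM drafts a spreadsheet formula, but the formula stays visible and editable, and the spreadsheet, not the chat transcript, remains the interface.

```python
# Sketch of a "targeted role" for an LLM in end-user programming: it proposes
# a formula, the user sees and can edit it, and the formula (not the chat)
# is what persists in the sheet. `draft_formula` stands in for an LLM call.
def draft_formula(request: str, columns: list[str]) -> str:
    """Ask the model for a single spreadsheet formula (placeholder)."""
    raise NotImplementedError("stand-in for a real model")

def insert_with_review(sheet: dict[str, str], cell: str, request: str, columns: list[str]) -> None:
    formula = draft_formula(request, columns)
    print(f"Proposed for {cell}: {formula}")             # the user inspects the formula...
    edited = input("Edit, or press Enter to accept: ").strip() or formula
    sheet[cell] = edited                                  # ...and can still change it by hand later
```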
s
@Duncan Cragg In what way do you see your thread not covered by #2 in my summary?
@Duncan Cragg And I assume your point about @Geoffrey Litt’s article is that old-school programming isn’t going away completely?
d
Oh! Um .. that's quite a surprise - I'm not seeing the connection, sorry! And old-school programming? I can't speak for Geoffrey, but I'm not sure he'd think of that as a take-away. Sorry to appear negative, but I'm struggling to see your points, sorry! 😕
Could be that everyone else is (also) struggling to see /my/ points, which seems more likely! I do Think Differently, to my cost as well as benefit 😁
y
ChatGPT has an amazing breadth of knowledge, but I’ve yet to see any demonstration of it having novel insight. I feel like humanity still has plenty of time until AI surpasses all of us.
w
In all seriousness @yairchu, what do you feel counts as novel insight? For example, how common are novel insights? Are they the sort of thing most people have multiple times per year? Or do you more have in mind, say, shifts in understanding that a group undergoes every few years?
y
@wtaysom If you ask it how many faces a shape consisting of two attached identical cubes has, it will say 12 rather than 6. Its understanding is very, very limited and basic, almost as if it’s just advanced pattern-matching without any true deep understanding. If classifying by Machiavelli’s classes of intellect (https://www.goodreads.com/quotes/241451-because-there-are-three-classes-of-intellects-one-which-comprehends), I haven’t seen it demonstrating even the second class (being able to distinguish whether something makes sense or not).
a
I don't think that's what Machiavelli is getting at in that quote, and also I have seen examples of GPT-X saying it's confused by a question and giving up, i.e. saying it doesn't make sense. There are examples of it exhibiting something very like a physical model of the world. Also, none of that addresses William's question. Before we can judge to what extent a machine is generating insight, we need to figure out what insight is.
I think insight has to be something close to generalization: in many situations, find a common pattern or rule. It probably has to be predictive, if only probabilistically... It really is hard to tease apart from intuitive pattern matching. If you, a human, can predict what some weird system or situation will do, is it because you have insight or just because you're used to the pattern? Maybe it has to do with how explicit (communicable?) the mental model is... It will not be easy to tell when/if AI systems cross the line into "insight", assuming they haven't already.
s
We (and by that I mean cognitive science) have a pretty good grasp on what insight is at this point. And by that standard LLMs are certainly very far away from it. Can’t look it up right now, but I have a post on my Substack (which doesn’t explain insight, just tangentially mentions it), which links to an episode of Awakening from the Meaning Crisis that is about attention, insight, and flow and refers to other source material.
For those who’d like to dig deeper: This is perhaps good as an introduction to attention, which is deeply interwoven with insight (There is a reason why the foundational transformer paper was called “_Attention_ is all you need”): https://stefanlesser.substack.com/p/attention-and-insight In the post I mostly link to Vervaeke’s explanation in episode 9 of Awakening from the Meaning Crisis, which is about Insight:

https://www.youtube.com/watch?v=jkWNBdBDyoE

This is all still pretty basic stuff, but should already give you a good enough sense that you don’t need to run around scared that current LLMs might be sentient. But of course there’s more. Episode 32 of that series connects more recent research about self-organizing criticality as a dynamic component of insight (it’s probably necessary, and definitely helpful, to watch episode 31 first). That is not the only component, but is interwoven with a structural component of small-world networks in a framework called relevance realization. That is all relatively recent theory, which means it’s a hypothesis that needs more validation, but if you follow the whole argument and its connections (which is a lot of work, so I don’t really expect anyone reading this to actually spend the time and effort, but if you do, let me know!) it seems quite plausible and well-grounded in prior research. The details may not turn out to work exactly as described, but the overall connections made seem to already have good converging evidence. The gist is: the relatively static LLMs we use at the moment have some of the structural capability but are missing the dynamic capability for insight. It could be argued that some of that happens during the learning phase (and I don’t understand the AI research side deeply enough to know if that is actually the case, but probably not), but as long as we have separate learning phases and not fully integrated live learning (and the tricks we use for fine-tuning models and using custom embeddings don’t do that), we don’t have to worry that insight, as cognitive science understands it, occurs in current LLMs. It is, however, not that far away, and both AI research and cognitive science are tightly interwoven in making progress on this. This is why I find what is outlined in this Relevance Realization paper quite fascinating: http://www.ipsi.utoronto.ca/sdis/Relevance-Published.pdf The good news is that as we get closer to creating consciousness, we also figure out how it works, and vice versa. The bad news is, by the time we fully understand what consciousness is and does, we will likely also have already created it.
w
@Stefan I wanted to take some time to read the Substack post. I think it's important to distinguish between "insight [occurring] in current LLMs" as opposed to an LLM saying something insightful. It's similar to how children say insightful things without seeing the insight. In fact, the innocence is part of the charm. Maybe LLMs are the same but to an even greater degree. They're not like a person so much as a stew of collective unconscious in which any sign of personhood is a role or an eddy in the deep. Of course, existential horror feeds on our feelings that our own selfhood is similarly fragile. With respect to consciousness, I think the main challenge with "understanding" it is that it's a whole constellation of characteristics that happen to come together in humans. It's easy for me to imagine a near future in which we can dial up and down these characteristics independently in AIs. Harder is imagining that the characteristics of consciousness are tightly knit so that "consciousness" turns out to be a robust single thing.
s
@wtaysom Thanks for taking the time to read the post and then writing a thoughtful response! I totally support your distinction, and what you describe as “an LLM saying something insightful” I consider one of the most important aspects of LLMs — they can help you come to an insight. That’s also in the spirit of using LLMs as a muse over an oracle. This is the reason I see great potential in LLMs, in particular their use in Tools for Thought. I’m not sure where you’re trying to go with your comment about consciousness. Is it a complex, emergent phenomenon? Likely yes. Will we ever be able to fully comprehend how it works? Perhaps no. But you could also make the argument that we don’t really know anything about the universe. Yeah, we have some theories, and they turn out to have some pretty good predictive powers, which seems to be good enough for us to base all kinds of useful stuff on top of those theories, even though they are just simplified approximations, which likely lack important yet undiscovered aspects. With consciousness, I think we’re on a similar trajectory. Some day we’ll have better approximations of how it works so that we can do all kinds of weird stuff with that knowledge. The current debates about AI being potentially conscious or sentient, and how many tech people talk about alignment, just show how much more work needs to be done to understand the human side of it better. All the power we get from computation made us think that everything is computation (or at least everything is capable of being modeled by computation), to a point where people believe that we are just some kind of machine ourselves. That is ultimately 17th-century thinking (Descartes, Hobbes, etc.) and we have made some progress in figuring out that what makes us human goes beyond what is expressible through algorithms, and it seems to me that’s useful knowledge to build up in a time where AI is going to do anything that can be modeled with computation (which is obviously still a lot). Vervaeke is busy publishing videos about AI on a weekly basis right now. I recommend just watching any of those to see that a debate with less tech and more cogsci about AI alignment is refreshingly different from the pure tech perspectives. I don’t know if the cogsci perspectives are “correct”, but they certainly add some valuable aspects to the discussion that are completely missing from most tech-focused discussions about it.
w
I'm going to have to listen to this now:

https://www.youtube.com/watch?v=i1RmhYOyU50

Ironically, I've been actively reflecting on AI and consciousness for more than a year, after (1) consulting on a play regarding the topic and (2) seeing Blake Lemoine making reality of that fiction. The similarity being that when people of different backgrounds encounter a novel intelligence, they draw different conclusions. Frankly, things are developing so quickly that my opinions don't have any time to settle.
Let me elaborate a tiny bit on what I mean by characteristics of consciousness. Three come immediately to mind, which I'll call as follows (though others are probably trying to standardize names for these): • Sentience: the experience of things, sense experience. • Awareness: knowing what you're doing, being able to reflect on your thinking. Intentionality (having rich concepts in mind rather than just manipulating symbols) being the beginnings of awareness. • Selfhood: having the notion of being a thinking thing separate from others with some continuity over time. One surprise with LLMs is that I now see the possibility of awareness without sentience. Though now I begin to wonder whether vast combined descriptions of sense experience can end up amounting to something similar. Consider GPT-4 being able to output SVG drawings of unicorns from only their descriptions. This capability lives in an awkward in-between. Or, I don't know if you've had this experience, but if you've read a lot, lot about a place and then visit, there is a sense of having been there before, mediated by the indirect observation.
s
I have to be careful not to get too excited here, but it’s so refreshing to find somebody else who is looking into this a little more than usual. The video you suggested above is a good one; there is also his “official” statement, which he refers to in the one above, where he makes a more academic argument. All good input to think about, and so different from listening to a Sam Altman interview, for instance. Your characteristics of consciousness align well with the slightly different terminology I got familiar with when reading a lot about 4E cognitive science. The mapping is straightforward though, and I think you will get a lot from just listening to the above conversation. Some of it will be quite philosophical, but creating sentient beings is kind of a big deal and warrants thinking about some bigger questions. That is also why he dismantles arguments like “it’s just a tool” and “we’ve adapted to disruptive new technologies before” in the first ten minutes. Please keep me (us?) posted on what you make of it!
w
Dude, I took a degree in Philosophy. The Vervaeke conversation comes easily. Along those lines, when I was a wee undergrad, I spent a week following John Searle around. There's a fun symmetry here. That was about twenty years ago, and about forty years ago is when he came up with the Chinese room thought experiment. Well, it's not a hypothetical anymore!