# thinking-together
i
This is an incredibly thinky group of folks. I’m wondering how, if at all, this little community has been using these LLMs and advanced chat bots. People are playing Pokemon in a text/CLI form just by asking “let’s play pokemon but with text and skip all the boring parts”*. I have to conclude that a number of you folks have made some crazy strides in the work you’ve been doing or how you’ve been refining your ideas with these tools.
e
My very not deeply thought out answer, that also reveals a lot about how I generally approach tech: I’ve no interest in using these tools as they exist today because they’re tightly coupled with corporate interests I don’t wanna tangle with if I can avoid it. From a design perspective, they’re also way more focused on generative work than I think makes sense — having a tool based around the same modeling but more focused on making things explainable; or elevating context would get me a lot more interested.
g
I know a very senior dev in another Slack who has been using ChatGPT pretty heavily in his coding work for the last few months and talking a lot about his process and results (he also reads the papers and stuff). It's pretty interesting. He's developed a very good instinct for prompt engineering in a relatively short time. Even things like when it's telling you something factual versus when it's just hallucinating are not too hard to pick out once you get a bit of practice.
Regarding corporate interests @Eli Mellen: not necessarily. Seems like it's possible to run at least some aspects of this stuff with an open model on generic dev laptops https://simonwillison.net/2023/Mar/11/llama/
i
Thanks for that post @George Mauer - I’m scanning through it and picking up a lot of what I think I’m seeing as well: the predictive model can quickly adapt to snippets, which can then be re-adapted by people for programming use cases. E.g., in the Machine Augmented Test Driven Development section, the layout there sounds very much like discussing a protocol with a human, then coming back the next day and experimenting with it. @Eli Mellen I feel you here, and I don’t know what to do about this. I actually spent the $20 to get access to ChatGPT Plus just to see what it was like in comparison. With `llama` above, I actually have that running on my M1 Air too! It works, interestingly enough, but I don’t have that tuning brain yet to understand what I can change to get better results for what I’m trying to prompt for.
g
one thing I've been doing lately to build up an instinct: any time I ask ChatGPT something, I keep tweaking the prompt until I break it, to feel out the boundaries
For example, yesterday I tried to use it to write a 360 performance review for my manager. I gave it bullet points and the performance review prompt. It struggled until it was explicitly told "use these bullet points"; it kept bailing on the writeup. GPT-4, interestingly, didn't have that issue and understood the structure fine.
j
I'm using it (along with Midjourney) to help me with the homebrew D&D (Pathfinder) campaign that I'm making. So not exactly programming related, but it has made making these things way less time consuming and more immersive. I've also done some code translations, mostly to understand code written in a language I don't read as fluently. Also found it is pretty good at suggesting approaches I knew nothing about (like algorithms for graph layout). I've got some ideas for integrating it into the editor I'm building, but that's still in too early of a phase. I think proactive rather than reactive usage is going to be a really interesting thing to explore. Copilot is cool and all, but I really want it focused on code I'm not currently writing
c
I've started leaving ChatGPT open and asking it for things during the day; in particular, I have it build small functional code blocks that I modify - as a starting point. One example from this morning: I'm building a layout system - the usual stuff, vertical and horizontal stacks - and I wanted to visualize the hierarchy of objects in the system. Since I'm using ImGui, I asked ChatGPT for an ImGui tree. This has two benefits: first, I can never remember the syntax/approach for doing this; second, it gets me pretty close to what I want. Here's the output:
After a couple more prompts to refine it, it pretty much ran first time: (The tree on the right)
The second thing I did today was give ChatGPT a structure I use for Rectangles, and ask it to build the NatVis file from it (this is a helper layout that makes it look pretty in the Visual Studio debugger). It was right first time....
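For readers who haven't met NatVis: it's an XML file the Visual Studio debugger reads to customize how a type is displayed. A minimal entry for a rectangle type might look roughly like this; the type name `Rect` and the members `x`, `y`, `width`, `height` are assumptions, since Chris's actual struct isn't shown in the chat.

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Hypothetical entry: shows a Rect as "x=.. y=.. w=.. h=.." in the
       debugger's watch window instead of the raw member dump. -->
  <Type Name="Rect">
    <DisplayString>x={x} y={y} w={width} h={height}</DisplayString>
  </Type>
</AutoVisualizer>
```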
g
I will say, I've had pretty good success with asking ChatGPT for obscure or non-existent documentation or examples. All sorts of stuff around how to do fancy things with Emacs Lisp or org-mode. It is almost always wrong, but wrong in a way that orients me enough to figure things out on my own. It's a very similar experience to asking around on mailing lists to find somebody who half-remembers the answer to your question from 15 years ago.
c
Another example, from a university module I'm doing. My tutor has done some work on the large scale structure of the universe. I wanted to cite him. I've just done this again as a demo:
What is interesting about this example and my previous one is that I had no idea if ChatGPT would know how to generate a bibtex or a natvis file. What surprises me over and again is how it seems to be able to do pretty much anything given the right prompting. It is extraordinary, and a game changer if you are a coder.
g
@Chris Maughan does that paper actually exist though? That's a big problem with it: it isn't limited to just stuff that actually exists.
or were you looking specifically for the formatting example
c
Yes, it exists (https://arxiv.org/abs/1211.6256). I wouldn't use the reference without checking it, for sure. You can ask it for examples of things, ask it to write text with citations, convert them to bibtex, etc. It is a phenomenal starting point if you are writing a paper. As you allude to, though, it is only a starting point, and you can't trust it; you have to do the final confirmation/reading yourself. From the code generation point of view, it is amazing how well things 'just work', and how easy it is to modify things - such as saying 'don't use smart pointers', and having it rewrite the code, etc.
(and code is much easier to check of course; though perhaps harder to spot latent bugs in the implementation)
One problem I had with the above code sample was when I asked it to convert the function to a lambda - it captured the function in its own scope so it could recursively call it (not to bore with the details, but you can't do that in C++). So I just said: "declare the lambda using std::function, and assign it", and it did the right thing. Even when it is wrong, you can prompt it to get the right answer......
i
To wit, on the bot just making stuff up, here's one from an hour ago too. It's quite common, and really does take a good human BS detector 😉 That is indeed a router being suggested as a mouse, lol… I wonder if we have hyper-aggressive SEO ‘optimization’ to thank for the tags that caused this kind of prediction…
> Even when it is wrong, you can prompt it to get the right answer......
I think this is teaching us about those small changes again in context being key. If all it takes is a couple of carefully chosen words to go from non-compiling code to compiling and functional code, that’s pretty much all I need to drop back to my “we’re all computer wizards casting spells in a CLI” analogy!
c
I should perhaps have phrased it : ‘even when it doesn’t give you what you want, you can clarify your request and get what you need’. Perhaps knowing what you need and being able to express it is what will keep some of us gainfully employed for a little while yet!
n
"knowing what you need and being able to express it" is literally my definition of programming. 🫢
t
I work for Microsoft, so as you might expect, practically everything we do has someone trying to integrate GPT into it. The examples that seem most promising to me so far are simple things like answering questions about the syntax of a programming language. That's not exactly revolutionary, but it does save some time and effort over searching StackOverflow or reading docs.
n
Microsoft should try integrating ChatGPT into the next Elder Scrolls game 🤯
g
@Timothy Johnson was there an equivalent of the 1995 internet tidal wave memo?
If I were a Microsoft exec I would send that out literally word for word, just search-and-replaced
t
I was three years old in 1995, so I don't know which memo you mean, lol
Most of what I've seen so far has been grassroots effort. Lots of engineers love to play with bleeding edge tools.
g
Oh, that's a central part of Internet lore. Before 1995, Netscape was trying to grab market share on the web before Microsoft started doing stuff there, because everyone knew that once Microsoft did, it'd be game over since they were so big. In 1995 Bill Gates finally sent out a big memo that kicked off the browser wars and the eventual death of Netscape. https://www.wired.com/2010/05/0526bill-gates-internet-memo/
t
Thanks! I guess the equivalent of that today was when Satya announced the new Bing and said he wants to make Google dance
l
I've been trying to get it to be a compiler for a joke/parody language. Interestingly, GPT-4 seems worse at this with the same prompts. It's like it's more "sensible" and can't stop reminding me that the language isn't real, offering up alternative languages instead. eg: "Here's how you do this in Python." If you ask it to add a feature to the language, its jokes seem a bit more tame and 'corporate'. But maybe I need to adjust my prompts a bit for it. The language: https://github.com/TodePond/DreamBerd
i
@Lu Wilson I really enjoy seeing the kinds of things you try like this. I’m also of the mind that these predictive token tools can be more productive for us already with the right start. On a side note, I can clearly see you have chosen violence ;)
There’s immense potential for an individual to derive real value for themselves by creating unique prompts on the fly. I “prompted” a LugoOS that was able to pair code with me, while asking probing questions along the way as I specified to “check in” with my mental state because I was having a bad day. I mean, I have to choose to read it and interpret it, but that’s something a fraction of a fraction of people in the world could even think to have access to. It’s been driving motivation for me again.
e
I’ve been thinking about this great question a lot lately — something felt a bit incomplete or incoherent about my initial answer — especially when folks rightly pointed out that you can run LLMs locally. I’ve been able to more specifically pinpoint my discomfort with LLMs, and would be interested in seeing how other folks feel. Earlier I detailed 2 issues I had:
• that these tools prioritize generation over explanation
• how tightly coupled these tools are to corporate interests
With more reflection, I’ve realized that those two things are the same, or at least intrinsically dependent on one another…so, running locally isn’t enough to totally decouple from the corporate interest.
I think part of the land grab we are seeing right now, where many groups are throwing all kinds of noodles at all kinds of walls, is because companies like Microsoft see the value in controlling the stack, soup to nuts. Turning engineers into “prompt engineers” is the most direct way they’ve ever had to do this—the backing systems (GitHub, TypeScript, npm, etc.) stop mattering when you give folks a tool that just does the thing when asked…and slowly folks become wholly dependent on the system asked, the system you control.
By focusing on generation over explication you don’t help folks learn how to do a thing, only learn how to use your tool to do that thing. A big ol’ give a person a fish vs. teaching them to fish.
w
It certainly says something that I now have a Chat window open at all times, partly because I'm writing in Python recently (guess why), with which I have only the slightest familiarity, partly because Google for me has become poor at helping with programming problems (when I write `obscure_api_function`, I damn well want results containing `obscure_api_function`). Chat is pretty helpful with specific "what's the typical way to" questions. It's also interesting how often it helps without being the final solution.
Speaking of image generation: I love the jank of some things and how new workflows are enabled. See some stylized rotoscoping in action: https://www.youtube.com/watch?v=GVT3WUa-48Y. Also a making-of: https://www.youtube.com/watch?v=_9LX9HSQkWo. An animator reacts: https://www.youtube.com/watch?v=xm7BwEsdVbQ, and an aficionado reacts: https://www.youtube.com/watch?v=GOwxXj1EIXM. We may not be in the Singularity, but the number of tabs I have open on these topics is on its way.
Finally, it's super interesting how much of a moving target this tech is. For example, now I ask Chat, "Can you give me a link to the documentation for that?" And it doesn't just make one up.
s
I’m usually pessimistic about the socio-cultural effects of technologies pushed into the world by corporations driven by capitalistic motives without much consideration for (unintended) consequences. In this case, though, I’m cautiously optimistic that thanks to AI we are headed towards a renaissance of the humanities.
Today’s LLMs and other AI technology will likely cost many jobs. A few years ago many professional language translators had to look for other ways to earn money, not because translation technology finally became great. It was (and still is) just good enough, but it’s cheap, so you can have stuff translated for (almost) free. If you don’t need or want a proper high-quality translation, you can use AI for that. The result will likely be average at best, but there are a lot of tasks where mediocre and cheap is superior to excellent and expensive.
I think the same will be happening to most written and graphical content, quickly followed by all kinds of media assets. For instance, game studios will for sure soon replace most designers with prompt engineers, perhaps first just for prototype assets, but it won’t be long until that’s good enough for a mediocre game to be published with such generated assets. On the plus side you could also see it this way: soon you might be able to create your own game, not need to pay for any artists, and still get some decent assets for it.
Of course, what happens to all those people out of their jobs? I don’t know. Some will perhaps switch from programming or writing or graphic design to prompt engineering and kind of do what they did before, but this time directing AIs instead of doing it themselves. Some will go into other industries, those that seem further away from being automated for now. And some will try to figure out what we as humans can still do better than AIs. And I think that is an extremely valuable idea to think about. There is a chance for another renaissance.
Good art has a deeply humanistic quality that I believe AI hasn’t cracked yet, and won’t until we get to AGI, which I’m certain is still relatively far away (I can go into detail why, but that’s probably for another thread). And I’m not just thinking about the kind of art that ends up in galleries and museums. Something you created yourself for a loved one can be a piece of art. A really well-designed product can also be a piece of art. Heck, even a really well-produced TikTok or YouTube video can be a piece of art. We know it when we experience it. And the really good stuff won’t be just randomly falling out of an AI model, because to make the stuff that makes our hearts sing you need more than just a bunch of algorithms and a lot of processing power.
What you need for that, we all have deep within us. I have to say from my own experience, going through a programming education and working in the tech industry usually doesn’t help in revealing it, and often causes it to be buried instead (because profit, speed, efficiency, metrics, etc.). But if more of us who are technically capable also find our way back to what makes us human, we stand a good chance of figuring out how to coexist with the mediocre, domain-specific wannabe-humans that AIs are today. And wouldn’t it be great to have all the mediocre stuff be dealt with, so we can focus on the great?
I also keep a ChatGPT browser window open and experiment with it. I enjoy using it for use cases where I basically let it prompt me with clever questions to push my own creativity. Relying more on the muse than on the oracle. There’ll be a lot of people that will happily jump on the automation use cases and have their dull work taken care of; I think Microsoft is setting that up quite effectively for the business world.
But with the right mindset, a healthy amount of skepticism and distrust in the generated “facts”, and creativity in how to let LLMs play to their and our own strengths, there sure is lots of opportunity. The future is going to be wild.
a
People have a hard time participating in "humanities" if they can't pay rent. That's going to be a problem much sooner than abstract questions about the intrinsic value of expression. But I do agree with your last line, the future is going to be wild.
j
That's very beautifully put, Stefan, and I mostly agree. But I think Andrew's got a very important point: I fear that these thoughts - which I share - largely ignore the economic realities that would likely follow from this technology being good enough to be used on a large scale to replace many of the "last well-paid jobs we have". I don't want to reduce this to a discussion about economic systems, but I still want to say that I don't think that capitalism's track record of channeling human time freed by productivity improvements into "socially necessary work" or simply "beauty" is ... particularly good.
That being said: I'm a programmer and I use ChatGPT daily. I love it, and am actually hopeful right now that in the short term it will free me from many of the "bullshitty" parts of the job, allowing me more time spent on the things I love about it. The ones I would still do, even if I was economically 100% replaced as a programmer - like people still play chess. But I can't say that these past few weeks I haven't thought about "what to do after". Like ... will I go into woodworking? Gardening? We'll have to see ...
I keep coming back to this image in my head of me waking up in a few decades(?), going into my workshop and asking "my" locally running AI how our super small-scale permaculture farm has been doing overnight, and it answering with the most important things it can read from the (also local) sensor streams. Then I'll spend my day tending to the plants, making furniture and tinkering with the (simple, open) hardware the system is running on. All very solarpunk and very likely very naive 😂
k
The "generation over explanation" aspect mentioned by @Eli Mellen is something I worry about as well. I am a research scientist, my job is to create and improve explanations. Code in my universe should support explaining things, and should itself be understandable. If we end up with bloated codebases in which layer upon layer is AI-generated, we cannot do science with them. On the other hand, quick generation at the top level is great. Glue code? Let an AI do it. As long as the code is shallow and small, it remains understandable. Quite often, today's glue code ends up in tomorrow's libraries. At that point, we need to clean up the mess. Today we don't. Will we see AI support for this job? Maybe. It sounds doable. But does anyone care AND have the means to work on this? I think this depends on how Open Source AI will develop in the near future.
j
I use symbolic knowledge representation to generate models of laws that are capable of being validated by subject matter experts, and use those models to generate answers and explanations from fact scenarios. The explanations are typically extremely verbose, and as I add features to the representation language, they tend to get longer. I am currently exploring using GPT3/4 to summarize those explanations in ways that are easier to digest, with some success. So I use it ON explanations, but not FOR them. I anticipate also using it in a Codex-like way to help users generate the symbolic models of legal text. I will likely try that myself as part of my own coding workflow inside the tool, first. But it remains to be seen if even GPT4 makes that feasible, and there are some technical hurdles, because the language currently doesn't have a plain-text representation that is isomorphic with the visual one.