# thinking-together

curious_reader

12/04/2022, 11:19 AM
Hello 👋 everyone. As you may have noticed, and as @wtaysom showed in another channel, OpenAI's ChatGPT seems to be generating a lot of buzz. https://twitter.com/rshoukhin/status/1598714847255855108 Looking at this, I'm asking myself whether "framework" rewrites will become less common in the industry now. Will it have a deeper impact than that? Is this a development along the lines of Peter Norvig's "As we may program"? https://vimeo.com/215418110 What are your thoughts 💭 on this? Thank you 🙏

wtaysom

12/05/2022, 2:38 AM
As I contemplate practical uses of this tech, they seem to center around interfacing with institutions: filling out paperwork, customer support, augmenting websites ("Just renew the library books that are due." "Order the same groceries that I get every time here." "When I search for dishwasher detergent, do not show me 'effing laundry detergent."), and probably programming too. A feedback loop could result in quick improvement of these systems. I've just looked at dozens of Twitter threads where people prompt-engineer ChatGPT into giving extremely good answers. Use that as training data and, in short order, this software could be as uncanny at answering questions as... as... a well-trained 20-Questions-answering AI. For example, http://www.20q.net/ guessed "koala" correctly with my son a moment ago, despite him saying a koala weighs less than a duck and would be a good gift, which 20Q says it did not expect.

Konrad Hinsen

12/05/2022, 10:28 AM
Some random thoughts on this:
1. Programming, like all automation, is an amplifier for decisions. This makes it difficult to foresee the consequences of those decisions, and that is the root cause of all the trouble humanity has had with industrialization and now with computing.
2. In the interest of safety, decision amplification requires one of:
   a. a limited scope of automation (sandboxing, ...)
   b. regular validation of execution by an agent that has an incentive not to do harm (liability, ...)
   c. provable predictability of consequences (mathematical proof, ...)
So... which amplifiable decisions can we safely delegate to today's AIs? I'd say none. So the question becomes: how do AIs need to evolve so that we can safely delegate amplifiable decisions to them?
Something we can safely do even now is use AI for generating propositions that are validated by a human programmer. Make AI a replacement for looking up documentation and copy-pasting boilerplate code. But that's not what I see people discussing.
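Konrad's safe use case, where the AI only ever generates propositions and a human programmer validates them before anything is applied, could be sketched as a minimal gate. The `ask_llm` and `approve` callables here are hypothetical stand-ins, not any real API:

```python
def propose_and_validate(task, ask_llm, approve):
    """Ask the AI for a proposition, but accept nothing without human approval.

    ask_llm: hypothetical callable returning a suggestion (e.g. boilerplate code).
    approve: callable that shows the suggestion to a human and returns True/False.
    """
    suggestion = ask_llm(task)
    if approve(suggestion):
        return suggestion  # the human accepted; safe to use
    return None  # rejected; the amplified decision is never taken

# Example: an "approval" step that always rejects, so nothing gets through.
result = propose_and_validate(
    "write boilerplate",
    ask_llm=lambda task: "for i in range(10): ...",
    approve=lambda s: False,
)
print(result)  # None
```

The point of the sketch is only the control flow: the AI's output is inert data until a human with an incentive not to do harm (Konrad's option 2b) signs off on it.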

curious_reader

12/05/2022, 10:40 AM
Even making suggestions will influence the end result. But I would agree that there is a strong search for a use case where there isn't really one.
And some more this time related to proofs: https://twitter.com/lmeyerov/status/1599663686884679682

alltom

12/07/2022, 7:29 PM
I’m bullish! ChatGPT has been incredible for the documentation use case @Konrad Hinsen mentioned, and it’s actually all anyone in my filter bubble is talking about. https://twitter.com/alltom/status/1600170846190014464
It’s really hard for us to describe our systems to each other. Unrelatable documentation is the main reason I struggle to keep up with FoC projects. ChatGPT shows that, as long as we can describe them well enough for an AI to understand, the AI has a shot at transforming that information to make it digestible for whoever needs it.
I do think it’s critical to make the answers more grounded. The most tedious part of using ChatGPT today is checking the answers, because it doesn’t show its work! https://twitter.com/alltom/status/1599742037271937024

wtaysom

12/09/2022, 9:18 AM
Correctness is certainly a problem people are working on. This came across my radar, "We also find that [our technique] cuts prompt sensitivity in half and continues to maintain high accuracy even when models are prompted to generate incorrect answers. Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don't have access to explicit ground truth labels." Another technique that comes to mind is having the LLM carry on a conversation with an oracle. For instance, if it's trying to write code, it will try running it before coming back to the human user.
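The oracle loop described above could be sketched like this: actually running the generated code is the oracle, and failures are fed back to the model before anything reaches the human user. The `ask_llm` function is a hypothetical stand-in for a real LLM call; here it returns a canned answer so the sketch is runnable:

```python
import subprocess
import sys
import tempfile

def ask_llm(prompt):
    # Hypothetical stand-in for a real LLM API call.
    return 'print("hello from generated code")'

def run_code(code):
    # The "oracle": execute the code for real and capture the result.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.returncode == 0, result.stdout + result.stderr

def generate_with_oracle(task, max_attempts=3):
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        ok, output = run_code(code)
        if ok:
            return code, output  # only now does the human user see it
        # Feed the failure back to the model and try again.
        prompt = (f"{task}\n\nPrevious attempt:\n{code}\n"
                  f"It failed with:\n{output}\nPlease fix it.")
    raise RuntimeError("no working code after retries")
```

With a real LLM behind `ask_llm`, the retry prompt is where the "conversation with an oracle" happens: the model converses with the interpreter's error output rather than with the human.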