# present-company
i
I did some vibe coding with Cursor and it got stuck in a loop of writing a buggy shell script, running it, looking at the output (unchanged because bugs), going "hmm let's fix that", then writing the exact same shell script.
It was 100x more productive at doing this than I would have been.
k
Maybe the 100th or 1000th attempt will get somewhere new!
k
This looks like AI is finally approaching human intelligence levels!
m
you are vibing at the wrong frequency
d
wub wub wub <- correct vibe frequency
you're welcome
i
womp womp
d
ChatGPT does this over and over. I point out a mistake, it apologises, promises to correct, then spits out the same mistake.
a
I've also seen several cases, on various models, where they get stuck in a tradeoff: they fix a problem by creating another one... and then, when prompted, fix that problem by putting the first one back
k
@abeyer Yes, and I used to do this myself for a long time. If you have n bugs to avoid in a system (Christopher Alexander calls them misfits) but your design process can only handle n-2, ping-ponging can result, often over a period of months: a bug gets reported in production, gets fixed, the other bug gets created, gets reported in production... A lot of the value of tests, for me, is shortcutting this sort of ping-ponging between bugs. But if you tell the AI to write the tests, and give it carte blanche to modify the tests at any time... 🤷🏽‍♂️
a
I found it an even bigger issue with "soft"/nonfunctional problems that can't easily be verified w/ automated testing
d
Seems like we need two AIs, each watching the other. Maybe one can write the tests and the other the code
k
Indeed, and I am somewhat surprised that this isn't done yet, given how important the idea of adversarial training has been in the short history of deep learning.
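Something like this, as a very rough sketch? (`ask_model` is a placeholder for whatever LLM API you'd plug in; the file names and loop structure are just illustrative, not anyone's actual setup.)
```python
import subprocess
import sys
import tempfile
from pathlib import Path

def ask_model(role: str, prompt: str) -> str:
    """Placeholder for an LLM call; `role` selects the system prompt."""
    raise NotImplementedError("wire this up to your model of choice")

def adversarial_round(spec: str, attempts: int = 5) -> str | None:
    # The test-writer only sees the spec; its output is frozen for the round.
    tests = ask_model("test-writer", f"Write pytest tests for this spec:\n{spec}")
    feedback = ""
    for _ in range(attempts):
        # The code-writer sees the tests and the last failure, but may not edit the tests.
        code = ask_model("code-writer",
                         f"Make these tests pass:\n{tests}\nLast failure:\n{feedback}")
        workdir = Path(tempfile.mkdtemp())
        (workdir / "impl.py").write_text(code)
        (workdir / "test_impl.py").write_text(tests)
        result = subprocess.run([sys.executable, "-m", "pytest", str(workdir)],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code           # the arbiter (pytest) says the code-writer won this round
        feedback = result.stdout  # otherwise loop, carrying the failure output along
    return None                   # didn't converge: still ping-ponging, but at least it's visible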
m
@Ivan Reese are you telling me llms vibe in dubstep?
my new system prompt: you are at a skrillex show waiting for the drop...
i
@Konrad Hinsen isn't this what reasoning models do, effectively?
k
@Ivan Reese According to my understanding, no. They are trained on reasoning stories written by humans. That's a form of supervised learning, whereas adversarial training is unsupervised: two AI models confronting each other, or even a single model switching sides, as was done with AlphaGo. General reasoning models need supervision because there is no obvious arbiter for deciding whether a piece of reasoning is correct. For code, "compiles, runs, passes tests" provides three consecutive automatable arbiters.
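For illustration, those three checks are cheap to chain into a gate. A minimal sketch in Python (the file paths and the pytest invocation are assumptions, not a recommendation of any particular tool):
```python
import py_compile
import subprocess
import sys

def arbiters(impl_path: str, test_path: str) -> bool:
    """The three consecutive automatable arbiters: compiles, runs, passes tests."""
    # 1. "Compiles": for Python, a syntax/bytecode check stands in for compilation.
    try:
        py_compile.compile(impl_path, doraise=True)
    except py_compile.PyCompileError:
        return False
    # 2. "Runs": executing the module must exit cleanly.
    if subprocess.run([sys.executable, impl_path]).returncode != 0:
        return False
    # 3. "Passes tests": pytest exit code 0 means every test passed.
    return subprocess.run([sys.executable, "-m", "pytest", test_path]).returncode == 0
```
Whatever fails a gate goes back to the model as feedback, so the verdict never depends on the model grading its own reasoning.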
k
My version of vibe coding is just playing some music in the background while coding.