# thinking-together
x
I don't necessarily agree with all the claims, but that's a cool demo: https://twitter.com/Altimor/status/1278736953836400640?s=19
👍 1
o
I struggle with this because, at least with the examples in the demo, in Haskell I could have produced the code with the same effort it took to write the comment. Plus, English is ambiguous and not precise enough. What I want techniques like this applied to is generating more efficient versions of the same code, or efficient code from a specification or from a test. For example, you shouldn't really have to write the tail-recursive version of a function if there are tools that can help you do that.
👍 1
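A minimal Python sketch of the kind of mechanical rewrite meant here (the `sum_to` function is a hypothetical example, not from the demo):

```python
# Naive recursion: one stack frame per step, so deep inputs overflow.
def sum_to(n):
    if n == 0:
        return 0
    return n + sum_to(n - 1)

# Tail-recursive form: the recursive call is the last operation,
# carrying the running total in an accumulator.
def sum_to_tail(n, acc=0):
    if n == 0:
        return acc
    return sum_to_tail(n - 1, acc + n)

# Python does not eliminate tail calls, so a rewriting tool would
# typically go one step further and turn the tail call into a loop:
def sum_to_iter(n):
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc
```

The point being: the accumulator-passing and loop versions are derivable from the naive one by a well-understood transformation, which is exactly the kind of thing a tool, rather than a language model guessing from prose, could do reliably.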
r
Cool demo, but I think it's another example of too much AI/ML hype. The most telling part for me was where the model made an error on the `compute_total_price` function.

First, note that the model is guided by doc-string comments. This very quickly devolves into a classic black-box optimization problem. I can easily imagine someone spending hours tweaking the wording of their comment to try to get the model to generate the right code. How is that better than what we do today, where we tweak code or add annotations to get the compiler to produce more optimal code? This is worse, because natural language is far less structured (the search space is much larger) and the ML model is far more stochastic than an optimizing compiler.

Second, consider that even after the presenter "fixed" the comment, the model produced code that was almost right but had a bug: 80% off instead of 20% off. The presenter writes that off as no big deal, and in a small toy example like this it is easy to find and correct the error. But can you imagine that situation in a much larger code base? Or even in a single moderately complex function? It's well known that reading (and understanding) code is much harder than writing it. Anybody who has had to fix a bug in a dense legacy code base will tell you it's way harder to find the bug than to fix it, often even if you were the original author! This feels less like "pair programming from the future" and more like "instant legacy code generation". 😢

I think ML is good in general, but I personally feel this is a case where more black-box magic makes things worse.
🤔 2
💯 1
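To make the bug concrete, here is a hypothetical Python reconstruction (the demo's actual code isn't reproduced in this thread; only the function name and the 80%-vs-20% mix-up come from the description above):

```python
def compute_total_price(price):
    """Apply a 20% discount to the price."""
    # The kind of code the model generated: this is 80% off, not 20%.
    return price * 0.2

def compute_total_price_fixed(price):
    """Apply a 20% discount to the price."""
    # 20% off means the customer pays 80% of the price.
    return price * 0.8
```

Both versions are syntactically fine, type-check, and look plausible at a glance, which is exactly why an "almost right" generation is dangerous at scale.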
j
This seems like the fallacy that programming is hard because you need to master the syntax of your language. But programming is actually hard because it’s hard to express things precisely. Like, stop hiring senior developers to write your code because look, a junior developer can also write that code. Sure, it has a few errors ¯\_(ツ)_/¯
🍰 1
💯 6
s
I agree with you guys; however, I still find this demo impressive, and although it does promote some hype around ML/DL, I do believe some parts of the developer experience should be automated. Maybe one day that will happen. However, I do not believe it can be achieved with just a deep learning model (however big the parameter space). I believe we will need to mix in symbolic reasoning somehow.
🍰 2
r
I think a better application of this kind of technique is TabNine. It is a more fine-grained, ML-based autocomplete plugin.
👍 3
💯 1
o
I think Kite does this for Python, and IntelliCode was supposed to do it. In practice, I tried them out, they were buggy, and I never looked back. Perhaps they are better now.
http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html

> "A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."

> "Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve."

> "When all is said and told, the "naturalness" with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious."
❤️ 5
w
@Jared Windover you make a good point. Far better than help with syntax, a better programming language/tool/environment would help you see the consequences and inconsistencies of your logic. On the other hand, it's easy to forget how much basic syntax holds up beginners. Semantics probably holds them up more, but they don't get that far.
❤️ 1
@Ray Imber TabNine is a delightful little assistant for its occasional strokes of genius. I have a collection of these, all Ruby. Typing `{"B1" => "X", "B2" => "Y", "` gave me the suggested completion `B3" => "Z"}`. Starting with `product_demands.map{|`, TabNine recommended `|k, v|`, then after adding `"` it went on with `#{k}=#{v}"}`. Or there was the time I got to `bidding_credit_discounted(bidder, [gross - incentive_payment, 0].max)` from typing `b\t\t g\t \t`.
❤️ 3
s
@opeispo Thanks for the nice piece by Dijkstra. I agree with it, of course. A formal system, with rules for manipulating symbols, is probably the ONLY way you can express a concept beyond doubt and universally. So, as I mentioned, I do not think naturalness and optimizing a particular function alone is the ultimate answer. We may need to wait for someone's genius to come up with a solution to this problem. But the way I see it, this is the only way forward. A little write-up by Gary Marcus in this context - https://arxiv.org/ftp/arxiv/papers/2002/2002.06177.pdf
o
Long read, so I just skimmed the first 10 pages. What he said about reliability reminds me of a talk Gerald Sussman gave once. Actually, this talk would be relevant to the group -

https://youtu.be/O3tVctB_VSU

- one of the important things he talks about is that these programs should behave reasonably even if they aren't programmed in detail.
👍 3
Also, there is this -

https://youtu.be/fAY0_pesZ6s

We should probably be moving towards being as precise with our language as we have to be with computers. In this video, Sussman talks about how maths notation is horrible - and that's maths. Natural language is worse. He argues that the legacy of programming is teaching us how to think precisely.
❤️ 2
s
If this could have been like the following (suggesting the string "__main__" from the usage pattern of this expression over a large corpus of code), it would have saved me 10 keystrokes!
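The screenshot this message refers to isn't preserved, but the idiom in question is presumably Python's standard entry-point guard, where the quoted string `"__main__"` (10 keystrokes including the quotes) is almost always what follows `if __name__ ==`:

```python
def main():
    # Hypothetical script body, just for illustration.
    print("hello")

# A usage-pattern-aware completer could suggest "__main__" here with
# high confidence: in a large corpus of Python code, this comparison
# is almost always against that exact string.
if __name__ == "__main__":  # <- the 10 saved keystrokes: "__main__"
    main()
```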
o
Yeah. Also, it seems like a lot of people are fans of TabNine, so I'm going to try it out. Advanced autocomplete is pretty cool.
o
and yes, TabNine has been pretty impressive. Very smart autocomplete and lots of common sense. For example, if I write something like

```
arr_1 = None
arr_2 = None
arr.. <- this autocompletes to arr_3
```

and other similar things like that. Looks like it recognises patterns, which is pretty cool!
❤️ 2
Also, I think it was trained on static code - I wonder what would be possible if it was trained on something more dynamic or finer-grained: basically, it would watch over your shoulder (I'm a bit worried about the privacy implications of this, btw), notice what your habits are, and give suggestions based on them. Say, after editing the view file you always edit the models file, and it suggests that to you too.
w
TabNine does seem to weight recent edits, so you'll get different completions based on what you were just typing. It's more of a delightful agent of surprise than a reliable logical assistant.
👍 1
i
Same tool, but this time it lays out HTML: https://twitter.com/sharifshameem/status/1283322990625607681
s
Amazing. But GPT-3 is not open-sourced yet, so I wonder where the model comes from? Did they build it themselves? And what kind (and how much) of data did they use to train it... 🤔
I guess they are using the OpenAI API 🤔
d
The main GPT-3 is said to have cost $5 million (in electricity?) to train, so don't expect you'd be able to do what OpenAI did with it, even if it were open-source.