# thinking-together
f
I discovered Michael L. Van De Vanter and his work today. He was already working on advanced code editors when I was born, and he recognized several problems with structure editors a long time ago. I read the following papers [1, 2] and can only recommend them. Especially the idea of using only lexical information for structured editing [2] sounds very interesting. His website [3] lists more publications that might be interesting for this community as well.

[1]: http://vandevanter.net/mlvdv/publications/the_documentary_structure_o.html
[2]: http://vandevanter.net/mlvdv/publications/displaying_and_editing_sour.html
[3]: http://vandevanter.net/mlvdv/publications/

PS: This demo video shows cool things like proportional fonts, managed whitespace, and IDE features that still aren't widely used nowadays. And it's from 1994 😲: http://vandevanter.net/mlvdv/publications/the-clarity-code-processor.html
😯 1
It's really unfortunate that there aren't more details / demos / code available for the CodeProcessor described in [2]...
j
Thanks for sharing
e
One of the problems with Van De Vanter's work, which was excellent, is that it was based on C, and C is such a low-level language, really a replacement for assembler, that it would be hard to build something great on top of it. It's too bad that he didn't pick a more robust language like Modula-2 to base his work on, because a strongly typed language like Modula-2 would have allowed a lot more help from the IDE editor: the strong naming and type conventions catch a lot of errors, so the IDE doesn't have to be so smart. This is also the reason JS cannot have a great IDE, because the language itself is so flabby (hence all the preprocessors like TypeScript and CoffeeScript).
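To make that point concrete, here's a small illustration in TypeScript (my own sketch, not an example from the thread): the type annotations let the editor reject a bad call before the code ever runs, which plain JS would accept and only fail on at runtime.

```typescript
// Static types let the editor do the catching instead of the runtime.
function area(width: number, height: number): number {
  return width * height;
}

area(3, 4);       // OK
// area("3", 4);  // Flagged in the editor before the code runs:
//                // Argument of type 'string' is not assignable to
//                // parameter of type 'number'.
```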
👍 1
f
@Edward de Jong / Beads Project That's right, the choice of language has a big impact on how easy it is to build great tools. What I find most interesting in [2] is that his solution works on lexical data (tokens). On that level, the difference between different programming languages shouldn't be that large, right?
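For illustration, here is a toy sketch of what working at the token level might look like (my own invention, not from the paper; the token kinds and patterns are made up for the example). The point is that the token-stream shape itself is largely the same across languages:

```typescript
// Toy lexer: many languages reduce to a similar stream of tokens,
// which is what makes purely lexical editing largely language-agnostic.
type Token = { kind: "word" | "number" | "space" | "symbol"; text: string };

function lex(source: string): Token[] {
  const rules: [Token["kind"], RegExp][] = [
    ["word", /^[A-Za-z_]\w*/],
    ["number", /^\d+/],
    ["space", /^\s+/],
    ["symbol", /^./], // fallback: any single non-newline character
  ];
  const tokens: Token[] = [];
  let rest = source;
  while (rest.length > 0) {
    for (const [kind, re] of rules) {
      const match = rest.match(re);
      if (match) {
        tokens.push({ kind, text: match[0] });
        rest = rest.slice(match[0].length);
        break;
      }
    }
  }
  return tokens;
}

// An editor working on tokens can manage whitespace without touching the
// program: drop "space" tokens and re-render layout however it likes.
console.log(lex("let x = 42;").filter(t => t.kind !== "space"));
```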
s
Interesting.. thanks for sharing. You might also like The Cornell Program Synthesizer (from ~1979): https://core.ac.uk/download/pdf/21750999.pdf
e
When I compare the same program written in two different languages, I count the number of words, which is an approximation of the number of tokens, which is a very strong measurement of the effort to originate the program. Whether the tokens are shorter or longer words doesn't matter that much for complexity; APL showed that you could make the reader crawl as you deciphered the symbols, and LISP with all the parentheses indicating order of calculation was also extremely difficult to read. Unfortunately tokens are inevitably parsed into a tree, and thus don't read linearly. Our eyes are trained to read words in sequence, so there has always been an interesting tug of war between syntaxes that are easier to read vs. more densely packed.

But back to Felix's question: it is quite surprising that my recent tests with my progression of programs that go from 150 to 1500 words (clock, wristwatch, snake, tic-tac-toe, minesweeper, chess) have shown that as the program size approaches 1500 words, the different languages start to diverge greatly, and once you reach that stage they no longer resemble each other. If the program is very short, then all the programs look almost alike. There are extremely subtle progressive non-linear effects. For example, if you raise a complexity coefficient of 1.02 versus 1.10 to the 50th power, one is around 3 and the other is about 117, a huge difference. Programs are not linear, there are exponential processes involved, and what appears to be a small advantage of one language over another, when applied to a sufficiently large problem, becomes a huge difference in size and complexity. This is my beef with Java, what I refer to as the COBOL of our time: a language which inevitably leads to ponderous, complex monstrosities.
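Checking that arithmetic directly (the computation is mine; the coefficients are the ones from the paragraph above):

```typescript
// Compounding a per-step complexity coefficient over 50 steps of growth.
const low = Math.pow(1.02, 50);   // ≈ 2.69
const high = Math.pow(1.10, 50);  // ≈ 117.39
console.log(low.toFixed(2), high.toFixed(2), (high / low).toFixed(1));
// 1.02 compounds to roughly 2.7x; 1.10 compounds to roughly 117x —
// about a 44x gap from a seemingly small per-step difference.
```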
❤️ 1
s
@Edward de Jong / Beads Project interesting insights... have you published the results of writing that set of programs in different languages, and details of how exactly they start to look different? It definitely seems true that small programs can look somewhat similar in many different languages, but large ones have emergent shapes that can differ quite a bit. Substrate determines structure. I assume it's not just the language but the frameworks and libraries in use.
e
I have published on my blog the specifications and ingredients for various small reference programs, starting with a clock (150 words), then a wristwatch simulation, then snake, tic-tac-toe, minesweeper, and ending with chess (no AI, just two-player). That project set spans from 150 to 1500 words of code, and I have given these challenges to some of the various next-gen language teams like Red, Luna, etc. You can also compare them to the various GitHub examples that have been done in longstanding languages like JS, Java, etc.

One thing you immediately have to deal with is the graphical subsystem the language uses; that is a major factor, and a lot of older environments predate the explosion of target devices, so one cannot actually make a resolution-reactive product in those older frameworks without great difficulty. Another issue you see immediately when looking at existing toolchains like Objective-C/Swift from Apple is the gigantic number of system API calls you have to learn to do almost anything. You have a dozen major library subsystems to learn just to make a sound effect, respond to a click, and draw an image. So one of the biggest problems I see is the total number of pages you have to read in order to accomplish these simple functions. That's what makes a little game-maker tool like Fancade so outstanding; it manages to boil down the verbs and nouns to a small enough set that you can program on a cellphone.