When i compare the same program written in two different languages, i count the number of words, which is an approximation to the number of tokens, which in turn is a strong measure of the effort needed to originate the program. Whether the tokens are shorter or longer words doesn't matter that much to complexity; APL showed that dense symbols could slow the reader to a crawl while deciphering them, and LISP, with all its parentheses indicating order of calculation, was also extremely difficult to read. Unfortunately tokens are inevitably parsed into a tree, and thus don't read linearly. Our eyes are trained to read words in sequence, so there has always been an interesting tug of war between syntaxes that are easier to read and syntaxes that are more densely packed.

But back to Felix's question: it is quite surprising that my recent tests with a progression of programs growing from 150 to 1500 words (clock, wristwatch, snake, tictactoe, minesweeper, chess) show that as program size reaches 1500 words, the different languages start to diverge greatly, and at that point they no longer resemble each other. If the program is very short, all the versions look almost alike. There are extremely subtle, progressive, non-linear effects at work. For example, take a complexity coefficient of 1.02 versus 1.10 and raise each to the 50th power: one is around 3, the other is about 117, a huge difference (a quick sketch of this arithmetic appears at the end of this section). Programs are not linear; there are exponential processes involved, and what appears to be a small advantage of one language over another becomes, when applied to a sufficiently large problem, a huge difference in size and complexity.

This is my beef with Java, what i refer to as the COBOL of our time: a language which inevitably leads to ponderous, complex monstrosities.
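
To make the word-count proxy and the compounding concrete, here is a rough Python sketch. It is only an illustration: the snippet strings and the whitespace word count are made-up stand-ins, not my actual test programs or measurement tooling.

```python
# A rough sketch, not my actual test programs or tooling: the snippet
# strings below are made-up stand-ins, and whitespace splitting is only
# a crude proxy for real tokenization.

def word_count(source: str) -> int:
    """Approximate the token count by counting whitespace-separated words."""
    return len(source.split())

# Two hypothetical renderings of the same tiny statement.
snippet_a = "if score > best then best = score end"
snippet_b = "if (score > best) { best = score; }"
print(word_count(snippet_a), word_count(snippet_b))   # 9 9

# The compounding effect: a per-step coefficient of 1.02 vs 1.10,
# applied 50 times, ends up as roughly 3 vs 117.
print(round(1.02 ** 50, 1))   # 2.7
print(round(1.10 ** 50, 1))   # 117.4
```

The point of the last two lines is just the arithmetic: a per-token overhead that looks negligible at small sizes compounds into orders of magnitude once the program is large enough.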