# thinking-together
e
I don't agree with Prof. Alvaro that it's okay to have only one user for your language. If it is useful, it should be widely used, not an obscurity. I am not a fan of Datalog and its derivatives. I am solidly in the Prof. Wirth camp, which holds that Algorithms + Data Structures = Programs, and I believe the weak spot in computer science is data structures, as the fundamental algorithms are now fairly well known.
i
Designing a widely used language means that the language designer needs to 1. put emphasis on the early stages of the learning curve, and 2. cater to the common denominators of all the potential users' mind landscapes. Designing a language for oneself means that the design can have an uncompromising focus on the functionality that supports one specific mind landscape. It is of course true that neither problems nor the minds solving them are unique _in general_; both have a lot of repetitive patterns, and thus by default it makes sense to try to make tools repurposable. But I would oppose the idea that this value of repurposability is universal enough to say that it's somehow a failure to have a language for one specific combination of mind + problem, if the language indeed fits that combination effectively. It's not like language design is a monumental effort which must be justified at large scale not to be considered a failure. I ride a sucky bike to the store and consider the bike a success, even though it is objectively sucky on many dimensions compared to other bikes. It gets me there, and I don't have to worry about it getting stolen even though it only has a sucky lock on it.
e
Designing a new computer language of any generality takes thousands of hours, and for only one user, that is an awful lot of work. In fact, the simpler (and better) the language is, the harder it is to invent/design. Some people feel that many of the inventions in computing were merely discovered, as they existed in nature beforehand. That is an interesting viewpoint.
i
Are you sure you're not making a circular argument? "Making a generic language is hard, and thus it is not justified to make a generic language for one person only." How about not making a generic language in the first place? 🙂 But I'm quite unconvinced that "thousands of hours" is a fact even for ~generic languages, more so if one has language design in one's own personal hobby & professional toolkit. JavaScript was drafted in relatively short order, and its first implementation was done in 10 days back in 1995. The tools available to aid experienced language engineers in creating new languages are legion. Language design at its core is not that hard. The devil is in the details: in the integrations, and in making the language generic, beautiful, and approachable to the needs of a wide target audience. In short: perfection is hard. But if your target audience is one person, perfection is not needed, and things start to look quite different.
e
You are somewhat uninformed as to the origin of JS. It was a 99% copy of ActionScript 2, which I can assure you took hundreds if not thousands of man-years to develop, it being an evolution of the granddaddy of all scripting languages, Lingo, from the breakthrough animation/interaction product MacroMind Director, which came out of Macromedia, Inc., purchased by Adobe Systems a long time ago. Macromedia also invented FreeHand, which was a formidable competitor to Adobe's Illustrator. ActionScript 3 added strong typing and modules, and if you run AS3 code through a very simple find/replace script, it will generate almost perfect JS code now that modules exist in ECMAScript (which is the formal name for both JS and AS3). The fiction that JS took only two weeks is a polite lie to keep the Adobe lawyers at bay; remember that Oracle has sued extensively over Google's use of Java, and fortunately for the world, Adobe is not as litigious/proprietary.
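To make the "simple find/replace" idea concrete, here is a toy sketch of what such a script might look like. It is entirely illustrative, not a real converter: it only strips AS3-style type annotations with two regexes and ignores packages, classes, imports, and everything else a genuine migration would need.

```python
import re

def as3_to_js(src: str) -> str:
    """Toy AS3 -> JS conversion via find/replace. Illustrative only:
    handles nothing but type annotations."""
    # Drop function return-type annotations: "):int {" -> ") {"
    out = re.sub(r'\)\s*:\s*[A-Za-z_][\w.]*(\s*\{)', r')\1', src)
    # Drop type annotations on parameters/vars: "a:int" -> "a"
    out = re.sub(r'([A-Za-z_]\w*)\s*:\s*[A-Za-z_][\w.<>]*', r'\1', out)
    return out

print(as3_to_js("function add(a:int, b:int):int { return a + b; }"))
# function add(a, b) { return a + b; }
```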
You are correct that if you don't care how complex or how ugly it is, you can create a new language pretty quickly; GitHub probably has 100 languages invented in the last year. But as for a simple general-purpose language taking a lot of man-hours, the evidence from the Red, Eve, Luna, and Beads projects shows that thousands of man-hours are indeed put into such a project. To be general-purpose, you have to test it against a diverse set of project types, from games to business applications to scientific computing, all the while making adjustments to smooth the rough spots while trying to keep it simple.

If you exclude data storage and graphical interface/event management, and imagine we are still in the 70s on terminals, you can probably crank out a simple language like Lua or Python easily enough, but I don't expect any new general-purpose language to catch on if it can't do interaction and at least touch on client-server types of programs. We already have acceptable terminal-based languages that can't draw or interact; I don't think we need any more. Swift, which is probably the best-designed language from the major players, is representative of the end of the road of OOP. Certainly the Apple development community has strongly embraced it, and I think it has a very good share of use in the Apple ecosystem, ramping up faster than almost any other language.

Another aspect of language design that sucks up the man-hours is the diverse range of platforms now extant: wristwatch computers, mobile phones, tablets, desktops, browser apps, game consoles, VR headsets, AR headsets. There are so many more platforms than before, and so many more devices to connect with. This is one of the big reasons Swift got taken up so fast: if you want an Apple Watch app, you are very likely going to be using Xcode and one of only two languages. It is very hard for new languages to cover the full breadth of computing today.
If you get down into the nitty-gritty of the GPU, you are talking tens of thousands of hours to master each of the commonly used GPUs. This is why people use Adobe AIR, Unity, and Unreal Engine: those systems did the really low-level work of learning the hardware and building a reasonable layer on top of it to make development feasible. You will note, therefore, that most of the projects listed on Steve Krouse's spreadsheet are 2D products, because a 3D platform is a truly massive undertaking, and a moving target, because 3D hardware is iterating fast.
🔥 1
👍 1
i
Fair enough; I was not aware of that aspect of the history in detail, thanks for the correction. However, it still does not essentially change my point: creating new languages by copying major parts of existing ones and making adaptations is a perfectly valid strategy. I would argue it's the sanest strategy, in fact. I can't really comment much on Datalog because I know jack shit about it, but a brief Google search says it's at the very least a syntactic derivative of Prolog. While I can be persuaded that JS-in-10-days is essentially a myth, the whole "standing on the shoulders of giants" thing still clearly applies. Everything's a derivative. And the context of this topic, as far as I see, is not "how much total work has gone into making language X possible", but "what is the incremental work needed to make language X possible, given already existing tools and languages."
No arguments about the difficulty of creating generic languages, of course. That's gonna take those thousands of hours. But my core point is still about non-general-purpose languages, on domains where the target audience grows narrower. As for target platforms, there are techniques, transpilation being the foremost that comes to mind. For example, if the language innovation one is pursuing is on the front-end side, then the only thing you need is to specify the runtime semantics such that they can be reasonably efficiently expressed in the C, JS, or Java runtime models, and you're... well, not golden, but let's say "silver".
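The transpilation route can be very thin indeed: the new language only has to emit source text for a host runtime. Here is a minimal sketch, with everything (the prefix-tuple syntax, the `transpile` name) invented for illustration, of a toy expression language that piggybacks on JS arithmetic semantics by emitting JS source strings.

```python
def transpile(expr) -> str:
    """Turn nested prefix tuples like ("+", 1, ("*", 2, 3)) into
    JavaScript source text, delegating evaluation to the JS runtime."""
    if isinstance(expr, (int, float)):
        return str(expr)
    op, *args = expr
    # Parenthesize every node so JS operator precedence can't surprise us.
    return "(" + f" {op} ".join(transpile(a) for a in args) + ")"

print(transpile(("+", 1, ("*", 2, 3))))  # (1 + (2 * 3))
```

The design point is that the hard parts (parsing numbers, arithmetic, precedence at runtime) are all inherited from the host; the "new language" is only the front-end mapping.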