# thinking-together
e
It is amusing to me that his laptop has an Erlang sticker, when Joe Armstrong, Erlang's inventor, had such a dim view of the actor model. There is a lively reunion on YouTube of three famous senior computer scientists who each selected a different technology to pursue: Hewitt with the actor model, Armstrong, and Hoare, who had some other paradigm, I forget which. Anyway, they had a debate about the results, and Armstrong ripped them to shreds because he pointed out that his system worked and theirs didn't actually work. Pharo has the best IDE I have ever seen, but the underlying language, Smalltalk, was DOA (dead on arrival). The original editions of the Smalltalk book by Goldberg are worth a lot, because sensible people like me who bought that book tossed it out because the language is so clunky and hard to read, and frankly absurd. You send the message PLUS to the number 3 along with another operand, 2, and then the 3 updates itself to 5. That isn't too far out, but when you get to bitmap manipulation, the model really gets ugly. The actor model creates not only huge numbers of little islands of state, but then compounds that mistake by creating a message-passing system that is hell to debug. In any toolchain that becomes hypercomplex to understand, there are always exceptional people who claim it is no problem, but if you take a big actor-model program and pass it to another programmer, they will have zero chance of understanding it. It becomes a nightmare of dependencies and cross-connections, just like the human body. I only make these possibly inflammatory remarks to warn people to look up Armstrong's talk, and think twice about wasting your time on a model that has such a legacy of failure. If someone would like to debate this with me, we can have Steve Krouse act as referee, and hold a fire extinguisher nearby so when you burst into flames you won't be permanently harmed ;->
s
I don’t think Smalltalk numbers are, as you suggest, mutable. Also, Erlang, as I understand it, uses a mutable state actor model.
m
Erlang is close to the actor model but not exactly; the creators didn't know of its existence until they had already implemented the first Erlang prototypes.
Erlang is immutable; the way Erlang processes keep state is simply by recursing, passing the next state to themselves.
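A minimal sketch of that pattern (the module and names below are mine, not from the thread): the process's loop function receives a message and then calls itself with the next value, so nothing is ever mutated in place, yet the same Pid appears to "change" from the outside.

```erlang
-module(counter).
-export([start/0, increment/1, value/1]).

%% Spawn a process whose entire "state" is the argument to loop/1.
start() ->
    spawn(fun() -> loop(0) end).

increment(Pid) ->
    Pid ! increment,
    ok.

value(Pid) ->
    Pid ! {value, self()},
    receive
        {counter_value, N} -> N
    end.

%% Handling a message means computing the next value and recursing with it.
loop(Count) ->
    receive
        increment ->
            loop(Count + 1);
        {value, From} ->
            From ! {counter_value, Count},
            loop(Count)
    end.
```

Holding one Pid and alternating counter:increment/1 with counter:value/1 returns 1, then 2, and so on, which is exactly the "externally it looks mutable" point discussed below.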
k
@Edward de Jong / Beads Project If you believe that Smalltalk was a failure because it applied its OO approach even to arithmetic, you will have to explain why Python, which does exactly the same (up to syntactic sugar), has worked so well for so many people. @Steve Dekorte Smalltalk numbers are indeed immutable, as are characters and booleans, but perhaps not much else. The value of immutability was discovered after Smalltalk, and that's perhaps one of its main defects. Pharo is introducing immutability, though in an unusual way (you can set an object to be immutable but also back to mutable).
d
If your goal is truly to motivate people to watch Armstrong's talk, then post a link to it. Trolling the forum with ill-informed, inflammatory bullshit is not a good way to spread whatever message you derived from Armstrong's talk. Instead, you are just pissing everybody off.
🙏 1
s
@Mariano Guerra Erlang processes see one another as mutable actors, don’t they?
m
in the sense that internal state may be mutated by sending messages? yes
but state is only accessible from the actor itself and "mutated in time", that is, returning a new immutable state after handling a message
s
But there is no referential transparency between actors (processes), correct? The reference that one actor has to another does not change in order to reference the new state, so wrt state between actors we have exactly the same situation as we do with mutable objects messaging one another in Smalltalk. The two differences (which are both interesting) are the enhanced isolation of actors' internal state from one another and that actors have message queues and their own threads of execution - both of which make actors more pure in the OO sense, as state and processing are more encapsulated.
☝️ 1
s
@Steve Dekorte’s comment is spot on. Externally, when you send a message to an Erlang process you don't know what 'version' it's at. So it looks like this mutable bundle of state you're interacting with. This is very similar to any other actor model. The only thing is that a process has more control over when it accepts new messages and can do so when it has reached a consistent state. Here's an interesting comment from Joe Armstrong:
Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.
Source: https://www.infoq.com/interviews/johnson-armstrong-oop/
m
@Steve Dekorte yes
I was just highlighting that state is not mutable in Erlang (nothing is mutable on the Erlang VM), but yes, you can keep a reference to the same process and it will "mutate"; it's the only thing that can change state while you hold the same reference, if I remember correctly
s
“State is not mutable in Erlang” If the way people wrote Erlang only involved a single process/actor, I would agree.
s
Oh, I think another important aspect of Erlang processes is that each is 'single threaded' in the sense that there can only be one control flow happening inside at one time - any other messages sent to it will wait. This is a nice property because you can think locally and ensure it always reaches some consistent state. Re state - if you write into a db from a process (dets/mnesia?) - you're still dealing with mutable state again.
BTW I always thought of Erlang as very close to the actor model, intentionally or unintentionally. So I'm a bit confused about the assertion that Armstrong held a dim view of the actor model. I think I found the panel discussion referred to (haven't finished watching yet):

https://www.youtube.com/watch?v=37wFVVVZlVU

m
usually people make the distinction between "sequential Erlang" (the language and its semantics, what you write as modules and functions) and "Erlang/OTP" (the concurrency primitives and patterns used to organize processes into systems); the immutability is in the sequential part (the language), the "mutability" is in the platform
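A hedged sketch of that split, reusing the toy counter from above (module and names are mine, not from the thread): the gen_server behaviour from OTP threads an immutable State value through the callbacks, so the "mutability" lives in the platform handing each callback whatever value the previous one returned.

```erlang
-module(counter_server).
-behaviour(gen_server).
-export([start_link/0, increment/1, value/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, 0, []).

increment(Pid) ->
    gen_server:cast(Pid, increment).

value(Pid) ->
    gen_server:call(Pid, value).

%% Sequential Erlang: pure functions from (Message, State) to the next state.
init(Count) ->
    {ok, Count}.

handle_cast(increment, Count) ->
    {noreply, Count + 1}.

handle_call(value, _From, Count) ->
    {reply, Count, Count}.
```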
s
Yes, or have any interaction (i/o) with the actual world at all, but who needs users, keyboards, mice, touch screens, user interfaces, files, databases, sensors, or network communications? ;)
s
The immutable model, single assignment, etc. do affect how you write code in the small, based on my very limited experience with Erlang. The main thing I liked was that things aren't going to just change underneath you - within your process, of course. It's kind of like working in a single-threaded, isolated heap. But you know, when you're looking outside the process, that you may get anything. Also, you don't usually synchronously look outside your process. You just send a message and are done. Then later, you may get a message related to one you sent.
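A small illustrative sketch of that "send a message and are done" style (module and message shapes invented for illustration): the request is fired off without blocking, other work happens, and only later is the related, tagged reply pulled from the mailbox.

```erlang
-module(async_demo).
-export([run/0]).

%% A worker that answers tagged requests and keeps serving.
worker() ->
    receive
        {square, Ref, From, N} ->
            From ! {answer, Ref, N * N},
            worker()
    end.

run() ->
    Pid = spawn(fun worker/0),
    Ref = make_ref(),
    Pid ! {square, Ref, self(), 7},   %% send and move on; nothing blocks here
    do_other_work(),
    receive                           %% later, match only the reply tied to our Ref
        {answer, Ref, Result} -> Result
    after 1000 -> timeout
    end.

do_other_work() -> ok.
```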
e
In the talk above, Hoare and Hewitt are somewhat smug, and Armstrong upbraids them because he points out that his system actually built shipping products that worked well. Their approaches did not work. It is easy to lump Erlang/Elixir into a category with other languages, but it is a unique beast, with a very clever runtime that creates a stack and heap for each micro-process. This allows you to reboot a process and not have some godawful pile of a million tiny chunks of memory that have to be marked and swept. When it comes to massive multithreading, there are only two ways to do it: a super meticulously hand-crafted runtime, or the runtime that Erlang/Elixir have, because it solves an immensely difficult problem. If you take the straight actor model and combine non-reversibility with concurrency, your system will be incredibly fragile. That is Armstrong's point. He wasn't a theoretician like Hewitt and Hoare, but someone building real high-volume things, not toys.
Armstrong has a lot of good videos up. They are entertaining and informative. One thing to remember, though, is that Erlang was banned inside Ericsson, where it was created, and that banning is probably related to the poor transferability score that Erlang possesses. You'll notice that Elixir is finding much greater acceptance, because some would describe the Erlang syntax as opaque. A program in commercial use lasts decades, and the ability to transfer code from one person to the next is a major factor in language selection. This means sticking with popular languages, but also avoiding ones like LISP, FORTH, and APL, which have super low transferability. It would be great to hear from people who used to work at Ericsson as to why exactly it was banned. There was no question that his system worked well and was robust; but maybe it was too hard to maintain and thus fragile in the end? There is a lot of information in failure, and I wish people would be more honest about the failures of the past; this is how you get wisdom.
that thread explains why
Erlang is used a lot again inside Ericsson since it was open sourced and the ban was lifted
e
A company like Autodesk, which uses AutoLISP as the core language of its system (the only large company I am aware of basing its technology on LISP), doesn't care if its language is popular. It is their secret weapon and makes AutoCAD infinitely programmable. My question remains: what were the technical factors behind the decision to ban Erlang? Plenty of companies use a custom-made secret sauce to generate their products (Facebook uses Hack, a derivative of PHP, for its front end, if I am not mistaken). When you dominate a sector using your tools, why would you ban the tool? Ericsson was kicking butt back in 1998, and still is a top-3 player in telecom infrastructure equipment. So little information leaks out of companies about why things did or did not pan out. We usually have to wait 20 years or more to find out the true story.
Some of you guys give me a hard time about how long my responses are. Sorry, I don't have the time to make them shorter. The reason I dumped a bucket of ice water on Leandro's lecture is that he hasn't done his homework. Unlike 99% of the people on this forum, I was programming when the Inmos Transputer came out, and I downloaded the user manual for Occam and tried to learn it (impossible). And remember the Connection Machine that was going to revolutionize the world? Only a genius like Danny Hillis could get that thing to say "hello world". Or how about the recent disaster, the Adapteva Parallella machine? The hardware was great; you have thousands of independent processors, with orders-of-magnitude improvement in CPU power vs. energy consumption. It failed miserably even though the hardware worked, because nobody could debug their programs!
👍 1
m
It was not mainly technical, it was a strategic decision. Jane was at Ericsson and helped open source it; I think she explains it a little more in some other talks you can find on YouTube. I've spoken with Joe and Robert about it and they tell similar things.
s
@Edward de Jong / Beads Project I agree programmability is important, but historically it seems to get trumped by economic concerns. For example, I programmed a CM2, and Fortran 90 was actually easier to use than Fortran. The problem was you had to wait in a queue with other users to run your program, which was often slower than using your workstation (if your program could fit on your workstation). What killed traditional supercomputers were compute clusters, which were actually harder to program but cheaper and therefore more accessible. Likewise, GPUs are harder to program but are far cheaper for the compute power. I suspect Smalltalk would be ubiquitous today had they made the core free and sold libraries and consulting services. Instead they chose to price it well outside of what typical users could afford, thinking that programmability justified the high cost.
e
@Steve Dekorte The traditional supercomputers were effectively a DOD sponsored product, and succeeded/failed based on the funding of that group. The average firm can't even use a supercomputer. But the Inmos transputer was cheap, and so was the Adapteva Parallela. So it wasn't about cost, but tooling. One thing often overlooked is that our entire mathematics tradition starting from Greek proof, is based on a single line of reasoning, applied sequentially, and we humans do not yet possess the mathematical knowledge that permits easy parallel thinking. The minute you have more than 100 processes operating at once, you can't even fit the threads on the screen, so visualizing and tracking it becomes extremely cumbersome and confusing. I suspect that once we have screens that are 100 million pixels - and we are getting there, it should be easier to handle. For comparison purposes the upcoming Apple 6K monitor has 20 million pixels. There are some types of projects that just demand a lot of pixels to do properly.
s
@Edward de Jong / Beads Project “The traditional supercomputers were effectively a DOD sponsored product,” Yes, and the lab I worked at was DOD funded too but it was still moving everything to workstation compute clusters by the mid 1990s because they wanted to make the best use of their funding.
@Edward de Jong / Beads Project “The minute you have more than 100 processes operating at once, you can’t even fit the threads on the screen” That’s true, though IME the problem is more the model used. I think the Lua authors put it well: “…we did not (and still do not) believe in the standard multithreading model, which is preemptive concurrency with shared memory: we still think that no one can write correct programs in a language where ‘a=a+1’ is not deterministic.” (from The Evolution of Lua) Eliminating either shared memory (with actors) or preemptive concurrency (with coroutines) goes a long way towards solving these problems, yet these solutions are strongly resisted in the mainstream programming culture.