# thinking-together
j

https://www.youtube.com/watch?v=2_4ecWDohnk

i
Hah, I apparently decided I needed to speak at 2x speed for folks 😛
that was a really weird length for a time slot
d
I like the speaking style you've got - no 'um's, which I'm guilty of, and very animated
Actually, a minor side note: I use Immediate Mode (imgui) in my personal C app, and I'm a React dev for money; but I never connected imgui with React until just now! .. thanks @ibdknox!
i
ah cool 🙂
yeah, I think they talked about imgui being an inspiration at some point years ago
d
also didn't realise that Datomic used Datalog, like you did, doh!
i
you have to squint a little, but it's datalog semantics if not its syntax
with some important extensions
a
heh, I was talking to a dev friend about the similarity of react to imgui the other day. I'd been wondering what the more strict functional crowd thought of imgui (in C or elsewhere)...
d
I really love it, very refreshing way to spit out UI
i
I think there's actually a more fundamental idea underlying all the immediate mode ideas over the years: at its core it's really the idea of reducing a domain to a data structure and then feeding a complete representation of that domain to an interpreter
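(a minimal sketch of that pattern, with all the names invented here purely for illustration: the whole UI is just data, and a single interpreter walks it on every update)

```typescript
// The domain reduced to a plain data structure.
type Node =
  | { kind: "label"; text: string }
  | { kind: "button"; text: string; onClick: () => void };

function view(count: number, bump: () => void): Node[] {
  return [
    { kind: "label", text: `count = ${count}` },
    { kind: "button", text: "increment", onClick: bump },
  ];
}

// The interpreter: the only place that knows how to turn that data into output.
function interpret(nodes: Node[]): void {
  for (const node of nodes) {
    if (node.kind === "label") console.log(node.text);
    else console.log(`[ ${node.text} ]`);
  }
}

interpret(view(0, () => {}));
```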
d
(both imgui and react that is!) having spent years building Java UIs!
i
it's a wonderful pattern for lots of problems 🙂
(fun tie-in with another topic in that talk: the theory behind CRDTs shows us the duality between state and a set of operations that create that state. Imgui presents the domain as a series of operations, react does it via state. In either case, they're the same fundamental idea operating on the same domain.)
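(a hedged sketch of that duality, with a toy counter domain and made-up names: the same domain once as a stream of operations and once as state, each recoverable from the other)

```typescript
type Op = { kind: "add"; amount: number };

// Operation view: the domain is the sequence of things that happened.
const ops: Op[] = [{ kind: "add", amount: 1 }, { kind: "add", amount: 4 }];

// State view: the domain is the value those operations produce.
const state = ops.reduce((total, op) => total + op.amount, 0); // 5

// Going back the other way: a state diff is itself an operation.
const diff = (before: number, after: number): Op =>
  ({ kind: "add", amount: after - before });

console.log(state, diff(0, state)); // 5 { kind: "add", amount: 5 }
```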
d
Right .. but my "boiling down" of that key concept sidesteps event logs/sourcing and just sees state--f()-->state
I'm probably alone in that in this forum 😄
bit like datomic in that regard
i
datomic is built on a log I think
haven't looked at it in a long time
that being said, I don't think logs are fundamental either 🙂
i
Couldn't you make the argument that event logs and CRDTs are just retained mode interfaces for state management? You're applying a series of transformations to a cached representation. The transformations (events/CRDT updates) are the canonical representation.
d

https://res.infoq.com/presentations/Datomic-Database-Value/en/slides/9.jpg

This is how my own language (Onex) works, more or less
When I met Rich Hickey once, I asked him why store all the state forever, and he didn't have a passionate answer!
I don't see storing all events OR storing all state that ever happened as a necessary or fundamental concept
i
the main advantage is enabling you to change your mind
d
argh. .. ok .. go on!
🙂
i
with the log, I can choose an entirely different interpreter (in the information theory sense) and arrive at a valid alternate future
relatedly, auditing is important in certain domains
we had a version of Eve that was built entirely on a log and you could do some really neat stuff with it
e.g. we wrote a little game, and you could change parts of the game and the state would now be as if that change had always existed
debugging was very simple
nothing could hide from you and there was no irreversible mistake
(barring effects that escape into the real world)
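(a rough sketch of the "replay the log under new rules" idea; the game events and rule names here are made up: the log stays fixed while the interpreter is swappable, so new rules read as if they had always applied)

```typescript
type GameEvent = { player: string; action: "hit" | "heal" };

const log: GameEvent[] = [
  { player: "a", action: "hit" },
  { player: "a", action: "hit" },
  { player: "b", action: "heal" },
];

type Rules = { hit: number; heal: number };

// Derive the current state by replaying the whole log under a rule set.
function replay(events: GameEvent[], rules: Rules): Map<string, number> {
  const scores = new Map<string, number>();
  for (const e of events) {
    const delta = e.action === "hit" ? rules.hit : rules.heal;
    scores.set(e.player, (scores.get(e.player) ?? 0) + delta);
  }
  return scores;
}

// Change the rules and the derived state is as if they had always existed.
console.log(replay(log, { hit: 10, heal: 5 }));
console.log(replay(log, { hit: 7, heal: 12 }));
```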
d
(just thinking that over..) 🙂
i
one way of looking at it is that if you only store final states, then you're tying the meaning of events/information to your current, likely incomplete, interpretation of them
and because of that you lose the ability to inspect that history or to interpret it differently
which makes "why did this go wrong?" pretty hard to answer
and it diminishes your ability to fix the mistake afterwards
d
yeah, but "barring effects that escape into the real world"
isn't this just in dev/debug?
i
nope
you could selectively roll the whole system back, or play it forward with new rules, or what have you
you can't take certain things back, e.g. sending an email
but everything else you could
imagine this in the context of a bank: let's say you had a bug that caused everyone with a certain type of card to get double charged
if you fix that bug and play the log forward, everyone's balance would now be correct
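(a hedged sketch of that bank example with invented account data: the charges are the log, and a balance is just one interpretation of that log, so swapping the buggy interpreter for a fixed one and replaying corrects the balances)

```typescript
type Charge = { account: string; card: "gold" | "basic"; amount: number };

const charges: Charge[] = [
  { account: "alice", card: "gold", amount: 30 },
  { account: "bob", card: "basic", amount: 20 },
];

// Buggy interpreter: double-applies charges for one card type.
const buggy = (c: Charge) => (c.card === "gold" ? 2 * c.amount : c.amount);

// Fixed interpreter.
const fixed = (c: Charge) => c.amount;

// Replay the same charge log under a given interpretation of each charge.
function balances(apply: (c: Charge) => number): Map<string, number> {
  const out = new Map<string, number>();
  for (const c of charges) {
    out.set(c.account, (out.get(c.account) ?? 0) - apply(c));
  }
  return out;
}

console.log(balances(buggy)); // alice is double charged
console.log(balances(fixed)); // replaying the same log yields the correct balance
```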
d
hmm ... (skeptical feelings but unfocused as a response) 🙂
i
the bank example is good because it's not a closed system
and so forces you to think about the scary ways your system escapes into the real world
❤️ 1
what happens if the people who were transferred twice as much money then spent some of it?
d
yus
i
banks today generally do what you'd expect: your balance goes negative
if that's the intended result, then that one bug fix and log replay actually did resolve the issue, at the cost of pissing a lot of customers off
d
I'll need to sleep on it and think about it in the shower tomorrow.. 😄
it's a bit like Git only with live data
i
yeah 🙂
d
which scares me as Git is a nightmare!
i
well, it's the principle
git is a bad interface
but the notion of versioning is very powerful
don't know if this is the case anymore, but MMORPGs used to have to deal a fair amount with this stuff - world-breaking bugs that then needed to have their effects reversed somehow
they usually resorted to massive data-loss events (they just rolled the DB back to the last backup)
d
basic problem is when you have system A rolled back to prior state/version, and system B has already assimilated the 'wrong' state - its behaviour can't be determined when A is back in time
or are you going to CRDT yer way out of that somehow?
😄
i
depends entirely on the way the systems communicate 🙂
d
off to bed now, it's Wed 2nd here . . g'night!
e
In the Granger talk there were comments about how industry and academia are coming together. I don't see anything useful for simplicity coming out of academia, or the slightest bit of cooperation in terms of practical language development. The brilliant work of Prof. N. Wirth of ETH has been dumped onto the ash-heap of history; his stuff was really simple compared to anything else of the time. Academia is busy using R, or playing around with Coq and similar tools, which are not practical tools for general-purpose programming. The obsession with proving programs correct is a mostly pointless exercise, as you can't even prove TicTacToe correct using that system, because it has no concept of drawing. Hardly anyone is doing raw computation now; it is 99% about interactive graphics, expressed as websites, iOS/Android apps, etc. I wish academia were making the next generation of tools for us to use, with the tens of billions in funding it collectively has each year, but alas I can't think of a useful project originating from academia in a long time. All the action I see is around what Google is doing, or Amazon with AWS, Apple with Swift, or what JetBrains is doing with their now-dominant IDE. By my estimate, academia is using far more closed-source, monthly-subscription software than private companies: tools like ArcGIS, R, Mathematica, and Adobe Creative Suite are all dirt cheap for students, but quite expensive for post-students, and that is the business model, capturing academics with really nice packaged software that they will end up licensing as they graduate into companies. I see a bizarro world now, where academia is entrenched in commercial tools, and industry is going more open source, doing the R&D at its own expense, and ignoring the academic output for the most part. I think the universities are so fat now that they are like the Pentagon, with deans flying around in private jets, pulling seven figures. And I don't see the serfs known as graduate students empowered to do big enough projects to make a toolchain.
s
@Duncan Cragg wrote
basic problem is when you have system A rolled back to prior state/version, and system B has already assimilated the 'wrong' state
Can the idea of managed time be pushed down to the OS? Then all systems built on this OS would use the same pseudo time base, and cross-system consistent state becomes tractable.
d
probably .. sounds a bit over the top to me though! 😄
e
Time control of a client + server program is a fascinating challenge. It can be done, at least to the point of reversing both in sync. It would be wonderful for diagnosing a multiplayer game.
w
@ibdknox Welcome to the 2x club. Two favorite lines: "we deal with concurrency through this horrid mess of locks and sadness" and "C is not going to save us."
@Duncan Cragg @shalabh I've had some fun with managed time over time. Once it's rich enough, you get a lot of nuance — and a potential nightmare. Suppose you have a nice mechanism for dealing with time (a run, an execution) in a first-class way. Soon you'll find yourself manipulating many runs at once, each with its own history. At first it's like having many documents open, each with its own undo buffer. Later it begins to feel more possible-worldly. You find that patch vs. replay are just extremes, and that there's value in inserting code updates in the middle of an execution to find where it is that they actually make a difference. With orchestration of runs as complicated as that, you are sure to want to inspect the process of inspecting processes. Still not entirely comfortable with the ramifications of the feedback.
🤔 1
Stepping back from all that, the most direct way I've found to explain how log-based systems relate to state-based ones is the fundamental theorem of calculus. If you record the state changes, you get the log. If you sum up the changes, you get the state. Each brings out an aspect of what's going on.
🙌 1
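(a tiny illustration of that analogy with toy numbers only: "differentiating" a state history gives the log of changes, and "integrating" the log gives back the state)

```typescript
const states = [0, 3, 3, 7, 12];

// Record the state changes: the log.
const log = states.slice(1).map((s, i) => s - states[i]); // [3, 0, 4, 5]

// Sum up the changes: the state history again.
const rebuilt = log.reduce(
  (history, change) => [...history, history[history.length - 1] + change],
  [states[0]],
);

console.log(log, rebuilt); // [3, 0, 4, 5] [0, 3, 3, 7, 12]
```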
@Edward de Jong / Beads Project The academia conversation is a big one to be had. And academia is big: to what degree are the people using R, the people using Coq, and the people using commercial software the same people?
e
I think the real reason academia leans so heavily on R, ArcGIS, and other subscription products is that they aren't sharing code; they instead need to share data using the same program, and their datasets are complex and it is just too painful to have different data structures, so they tend to standardize on a small set of products that make it easy to ship data back and forth. In many fields like Archaeology, Geology, etc., the main work they do is actually collecting data. Data hoarding is a big problem in archaeology, for example, because if you spent hot days using a small brush to delicately expose some ancient thing, the last thing you want is some hotshot doing a better job analyzing your hard-won data. And let's not forget that the vast majority of people using computers are not in the computer field. But the real issue I see is that, for the purposes of this future of coding group, I don't think we can expect a warm welcome from the academic communities in any field, because they are already so standardized on high-quality, subscription-based commercial software, with strong support groups already formed. I think the early-adopter audience for new languages is going to come from hobbyists and curious professionals, who have quite a different set of needs than the other users, but are at least open to something new. So this is why R, MATLAB, ArcGIS, and many other expensive subscription products dominate the academic space. In the hobbyist environment, on the other hand, free tools are very common, and you see Python, Atom, etc.
🤔 1
s
@wtaysom - interesting. I think we've had a similar conversation before. Do you have any publicly available material about your experiments with managed time?
w
I wish. I was last working on it seriously in February, and have since come to reject the specifics of that approach; I want to try something more Eve-like the next time around. Let me share two tricky bits instead. Start with the identity of introduced entities when comparing parallel histories. Suppose the conceptual difference between two histories is that in one you added an extra entity somewhere along the way. If, for example, you are tracking entities by some index, then all the indices will be affected and you're in for trouble. This is essentially the same trouble one encounters with operational transformation of text, and even good old textual diff.
🤔 1
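(a small, contrived illustration of that index-identity trouble: one history has an extra entity inserted early on, so positional comparison misreports every later entity as changed)

```typescript
const historyA = ["alice", "bob", "carol"];
const historyB = ["alice", "eve", "bob", "carol"]; // same, plus one insertion

// Comparing by index: everything after the insertion looks different.
const byIndex = historyA.map((name, i) => name === historyB[i]);
console.log(byIndex); // [true, false, false]

// Comparing by a stable identity (here, the name itself): only the insert differs.
const byIdentity = historyB.filter((name) => !historyA.includes(name));
console.log(byIdentity); // ["eve"]
```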
There's a deeper problem though. Given two histories with "trivial" differences (e.g. exactly when a person clicks through things), what are good ways to ignore this noise so that you can compare whether the important parts are the same? Have a good grasp on this one, and you have a good grasp of many problems. 😁
s
Thanks for the insights @wtaysom.