# introduce-yourself
r
Hi all! I'm When. I'm excited to participate in discussions here and share ideas. I've been a long-time dreamer about what's possible for the future of code. I've been working in software for over 20 years now, and I've always been passionate about empowering people with software, especially those who don't know how to code. I have a degree in both Art and Computer Science, and the intersection between expression, intention, interface, abstraction, and usability has always been of keen interest to me. I mostly found my calling in writing tools that allow others to work at a higher level.

In 2007, I wrote what I believe to be the first-ever "reactive JavaScript framework", at least in the form you see now: client-side templates, data-binding, client-side routing, components, etc. I had to build a lot from scratch: a module system, a template compiler, a fine-grained data-binding system. I've been watching the rest of the industry do it wrong ever since (I'm kidding! 😝). It's still in use by the company I built it for (although I don't work there anymore), and it beat out Vue 3 + Vite in a head-to-head evaluation recently (from what I hear). It currently powers a loan automation system that handles billions of dollars in loans. I was never able to open source it, so I guess y'all have to take my word for it, lol. Happy to talk about it if anyone's curious; it still has a few tricks I haven't seen in the wild.

I've also built a custom workflow engine and business rules system that had a pretty unique architecture, scaled really well (over 500k rules), and actually delivered on the promise of moving a huge amount of logic into the hands of non-programmers (without needing programmers to babysit it). Those are some of the things I'm most proud of, and they've deeply informed the ideas I imagine for the "future of coding". My closest thing to an actual claim to fame was my involvement in the standardization of ES6.
I have a name credit associated with breaking the standoff around classes; the "maximally minimal classes" proposal was based on a proposal I made. These days I've been working on my own vision for the "future of coding". I'm a big fan of Alan Kay and Bret Victor. From Alan Kay, I really love the idea that we need to shift from mechanical to ecological thinking. From Bret Victor, I love the concept that the "digital medium" needs a notion of "authors/authoring" that is separate from engineering/programming; the fact that programming languages have been a tool for both is part of what holds us back. I'm working on a unique sort of universal IR binary format inspired by DNA, and a programming/modeling language inspired by cellular biology. Happy to chat about any of this stuff, but mostly here to see what everyone else is doing! Looking forward to learning from all the brilliant minds here.
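(For anyone unfamiliar with the "fine-grained data-binding" I mentioned above, here's a minimal sketch of the general pattern as it's commonly done today. This is not my framework's actual API; the `signal`/`effect` names are hypothetical, just to show the idea that each binding subscribes to exactly the state it reads.)

```javascript
// Fine-grained data-binding sketch: a change re-runs only the bindings
// that actually read the changed value, instead of re-rendering a tree.
let activeEffect = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (activeEffect) subscribers.add(activeEffect); // track dependency on read
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // notify only the dependents
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // first run registers its dependencies
  activeEffect = null;
}

// Usage: the effect re-runs whenever `name` changes.
const name = signal("world");
let rendered = "";
effect(() => { rendered = `Hello, ${name.get()}!`; });
name.set("When"); // rendered is now "Hello, When!"
```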
g
I'm in awe! We had a thread about workflow engines a few weeks ago, which feels like a blind spot to me; would be curious to read more from you about it
r
Yeah, sure, let me see what I can do to convey this stuff. First of all, the system I built was not based on BPMN at all. It was built from scratch and adapted over many years to suit the wide variety of use cases we needed. It was still a reasonably generic workflow system, but by being in-house we were able to give it the actual features that would get the job done, as opposed to grabbing something off the shelf and praying. My core criticism of BPMN and most workflow systems (like, say, Camunda) is that they are simply a kind of glorified chart execution system. In general, they are completely disconnected from the actual system they are meant to control. They started from the same kind of modeling background as UML, and have historically had trouble, IMO, by effectively standardizing first. There was a lot of speculation about the value it could bring. On paper it might seem like the right path: create a visually oriented tool for describing business processes. Focus on a visual language that subject matter experts might be able to understand. Then create a standard for it so that a variety of tools can consume it. One of the problems you see is that the early versions of UML/BPMN were really about a shared visual language for communication purposes. Then the hope of making it all work through automation came into play, and a variety of enterprise vendors got involved. Both UML and BPMN went through revisions in order to give them execution semantics so they could actually run. This was also happening during the peak XML/WebServices complexity explosion, which dramatically increased the complexity, in many ways making a kind of worst of both worlds. It's no surprise to me that the AWS Step Functions system is not based on BPMN, but is quite simple in comparison.
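(For contrast, here's roughly what a small state machine looks like in Amazon States Language, the JSON format behind Step Functions. The field names are real ASL; the loan-routing machine itself is a made-up example of mine, not anything from an actual system.)

```json
{
  "Comment": "Hypothetical loan-routing example in Amazon States Language",
  "StartAt": "CheckScore",
  "States": {
    "CheckScore": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.creditScore", "NumericGreaterThanEquals": 740, "Next": "AutoApprove" }
      ],
      "Default": "ManualReview"
    },
    "AutoApprove": { "Type": "Succeed" },
    "ManualReview": { "Type": "Succeed" }
  }
}
```

Note how the decision only sees whatever you put into the input payload (`$.creditScore` here); that's the same "get the right data into the workflow" problem I talk about below.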
If you want a gruesome inside look at what creating a standard through OMG is like, I find this tale about CORBA to be a good example of the kind of insanity that happens when you make a standard from vendor soup: https://queue.acm.org/detail.cfm?id=1142044. There's a lot of "castles in the sky" kind of thinking. Here's an account around UML that echoes the same story: https://tratt.net/laurie/blog/2022/uml_my_part_in_its_downfall.html. A lot of dreaming about how to model the happy path of processes without actually building something, using it, and making sure it actually worked. A lot of kitchen-sink acceptance of feature requests without a simple, coherent vision. When I built our workflow system, it was a completely different sort of process. We started with the simplest thing we could build: a kind of simple flow chart/state machine that was built as a functional system first. As we built out the larger system (like I said, it was a complete loan automation system), and as we were trying to handle the problems of a real national mortgage lender, it was built to solve actual problems in ways that would actually be used and useful. Where it ended up was dramatically different from something like BPMN.
Without writing up a full specification for you, let me see if I can hit some of the highlights about the major differences. Like I said before, one of the really core differences that shaped everything was the fact that our workflow system was deeply integrated into the larger application. Take a product like Camunda, based on BPMN and deployed as a completely independent or even SaaS-hosted system. Triggering a Process to start generally means an API call from custom software. The data the Process has access to is usually a payload (JSON) from the initial trigger that is carried along, possibly manipulated by the Process itself, but disconnected from the system that sourced it. Decisions could come from the outcome of a performance step (a human being making a choice), but automated decisions need to be data-driven. These decisions are handled by "logic gates" (which can pollute the chart a bit when things get complicated), or by another standard, DMN, for decision modeling, typically using a kind of decision table. Vendors also love to add machine learning or maybe a rules-engine integration to supplement. In all of the cases that make decisions based on data, you have to put the right data into the workflow for it to be available to the decisions. I don't really see this highlighted as a major sticking point, because I'm honestly not sure people are aware of how much it affects the whole problem space. Here's a dialog I had with ChatGPT to sort of draw out an unbiased assessment of BPMN-style tools and then compare it to what I built, so there's a deeper explanation of what I actually built here: https://chatgpt.com/share/67b942ab-8a40-800d-9b97-be9c8f4bde38
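(To make the DMN-style decision-table idea concrete, here's a tiny sketch of one evaluated in code. This is illustrative only, not any vendor's API or my system; the rules and field names like `creditScore`/`ltv` are made up. It uses a "first hit" policy: the first matching rule wins.)

```javascript
// A decision table as data: each row has a predicate over the input
// payload and an outcome. First matching row wins ("first" hit policy).
const rules = [
  { when: (d) => d.creditScore >= 740 && d.ltv <= 0.8, decision: "approve" },
  { when: (d) => d.creditScore >= 660,                 decision: "refer" },
  { when: () => true,                                  decision: "decline" }, // catch-all row
];

function decide(data) {
  return rules.find((r) => r.when(data)).decision;
}

decide({ creditScore: 750, ltv: 0.75 }); // "approve"
decide({ creditScore: 700, ltv: 0.9 });  // "refer"
```

Notice the catch: `decide` can only ever look at what's in `data`, i.e. whatever payload got carried into the workflow. That's exactly the sticking point above.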
g
Thank you for going into great detail, I really enjoyed that (and all the links you shared)! This is the kind of perspective I was hoping to get here. Also about UML, which I've vaguely associated with all this; it was fascinating to read about the standardization efforts
r
yeah, I think it's kind of interesting how, IMO, some really fumbled attempts at a kind of "model-driven development" have completely soured the entire concept for many people. I think there's a lot of potential value in it being done in a much better way
It's the same sort of tragedy I see with most of Tim Berners-Lee's efforts around the semantic web and Solid data pods. I think the goals are very interesting, but the premature standards-first approach of RDF etc. is a complete dumpster fire, and it really stems from not actually building something useful first, instead going for the castles-in-the-sky approach
g
ha, fun connection that I can relate to; I feel very similarly about the RDF stuff. I have a lifelong affliction of being too castles-in-the-sky, so all my side projects have been shots at immediacy. For me, Bluesky and its ATProto are great modern examples of rubber-roading first and then standardising it (tho I guess that success is still TBD)
r
yeah, I agree about Bluesky. Yet to be seen, but if it's gonna happen, I think that's exactly the sort of path that will do it. I'm honestly a lot more skeptical about the long-term success of ActivityPub in comparison. It's not as bad as RDF, but I think there are some inherent flaws that will prevent it from really succeeding at scale. I just don't think that federation of state in that way is the right approach. I guess we'll see! At least it has a committed working implementation, and it started from implementing before standardizing.
g
Yeah, I share that worry. ActivityPub seems designed for more nimble use cases, which is cool for everyone who needs that, but it was not what I was looking for when I was shopping for a Twitter alternative
r
So I'm actually really interested in figuring out better decentralized systems in a broader sort of way: how can we make decentralized more the default? I think it's interesting how some of the early choices of the web really led us here. For example, making the HTTP protocol stateless and simple was an understandable choice for a kind of thin-client, public document protocol, but it also creates the kind of systemic forces that push every server to own your data and identity, and trends toward centralization and winner-takes-all dynamics.
If you consider the pre-web internet, it enabled more data sovereignty and made more use of the capabilities of the actual computer you were using. I mean, instant messaging rose up before the web, and it took forever for the web to enable anything like realtime messaging. But the web also trends toward big servers owning everything.
And I think a lot of the modern decentralized architectures unfortunately continue that pattern. Matrix and ActivityPub, while decentralized/federated, also deeply intertwine all of the application state with the federation/communication protocol, which both makes them more complex to implement and adapt to different needs, and doesn't actually enable personal data sovereignty. It just moves things from a single central server to a user's home server, which is a tradeoff more than a clear win, IMO
g
I share that interest! If you haven't seen it already, I quite enjoyed the discussion between a Bluesky engineer and a Mastodon co-founder here (that's the last response in the thread, I think, but the previous ones are linked). Christine, who authored that one, is afaik also working on something that attempts to answer the big issues you're raising (I haven't read about it yet tho, so not sure how successful, if I can even judge that)
ATProto seems like such a great shot but is also architecturally+cryptographically humbling me enough that I'm less inclined to think about the solution space much anymore these days. But always happy to read more about it!