# share-your-work
t
So the AI tarot reading app is done: https://thetarot.online/index.html It has been programmed entirely within Observable. Production traffic is routed through the developer's notebook if you have it open, so you can observe the system live. I exploit Observable's dataflow so each step of the processing pipeline is cached, which means you can see exactly where a problem occurred, and thanks to hot code reload you can fix a bug and have the pipeline continue without rerunning prior steps, allowing an easy iterative programming workflow. Of course, because Observable notebooks execute JavaScript, you can attach a debugger too (even programmatically). One-click forkable, MIT licensed.
This is what I think the future of software development should look like, but I am struggling to communicate it well. The idea is to make cloud programming feel local and to test exclusively in prod, because when you are hitting third-party APIs it's a waste of effort mocking a staging or local environment. I want to remove the thick layers of indirection in the development process that no longer make sense: forget git, forget toolchains, just go straight to prod using modern JavaScript, debugged with the authoritative debuggers built into browsers.
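The pipeline idea is easier to see in code. Here is a rough sketch of what two cells in such a pipeline look like (illustrative names only, nothing from the real app):
```js
// Illustrative Observable cells (made-up names, not the app's real code).

// Step 1: pretend this is an expensive or remote step. Once it resolves,
// Observable's dataflow caches the value for every downstream cell.
cards = ["The Fool", "The Magician", "The Tower"]

// Step 2: depends on `cards`. Editing this cell hot-reloads just this step;
// `cards` is reused from the cache instead of being recomputed.
reading = {
  // debugger;  // uncomment to drop into the browser debugger at exactly this step
  return cards.map((c, i) => `${i + 1}. ${c}`).join("\n");
}
```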
❤️ 1
d
Fascinating stuff, but how does one get rid of git? How would a large team work together without stepping on each other's toes? How would folks keep track of features, merge branches, research where exactly a bug got introduced, etc etc etc? Git is such a perfect tool for the job in so many of these cases, so I'm not sure how one would get rid of it.
t
I don't need git because Observable notebooks support forking and merging and have a linear history with rollbacks, which is enough for teams. Git is great, but it's overkill (how many people stop working when GitHub is down? In practice we don't use it the way Linus needed it to be used) and it's kind of a workflow distraction (ok! now I need to think of a commit message). If we cut it out (GitHub in particular), we eliminate a whole load of extraneous development complexity and distraction. Nobody spontaneously follows a git workflow; you have to be trained, it's unnatural. We just need a get-out-of-jail command for mistakes (rollbacks) and merging for working with others. Version control should work as a background feature of a realtime process.
d
It sounds like this move might be right for you and your team, but this has been pretty much 180 degrees from my team's experience. In my team, devs from all over the world working in a variety of languages and platforms all know Git. It's something they share, a language we all speak. We occasionally use Github, but we are definitely not dependent on it. In our projects, we see a lot of benefit from leaving detailed and clear commit messages. It helps a lot when doing forensic analysis. In any case, I love what you've done here re: Observable development; love the vision! 👍
t
Yeah, it doesn't suit everything, but not everything is worth the formality of commit messages either. Maybe this is an interesting use case: end-to-end API testing. I have cron jobs that ping notebooks, which can log in and test secure APIs against unit tests (https://observablehq.com/@tomlarkworthy/testing). By writing a test you pretty well specify everything a commit message would, and the reactive nature of the notebooks is really good for iterating on things like API authentication. WDYT?
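For a flavour of what one of those test cells looks like, a rough sketch (the URL and the secret helper are illustrative, not the real test suite's API):
```js
// A cron ping re-evaluates the notebook; if this cell throws, the test fails.
test_api_is_up = {
  const res = await fetch("https://example.com/api/reading", {       // illustrative URL
    headers: { Authorization: `Bearer ${await Secret("API_TOKEN")}` } // hypothetical secret helper
  });
  if (res.status !== 200) throw new Error(`expected 200, got ${res.status}`);
  return "ok"; // the cell going green (or red) is the test report
}
```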
c
@Tom Larkworthy, really neat AI interpretation and card choosing mechanic. I'm tempted to try building something like this for myself, since I should be able to see how you did the AI since it's in ObservableHQ, right?
d
It's really neat!
t
@Cole yes! The code is https://observablehq.com/@tomlarkworthy/tarot-backend. While the app is feature complete, I want to do more work on the literate programming aspect and add more explanation to the code, so it may be a bit hard to read ATM; feel free to leave comments in the notebook and I will expand on confusing bits. You need an OpenAI API key and a Google Cloud service account key to get it fully working locally.
It's had a little traction on Reddit, but it breaks every 12 hours or so. So now I have to figure out why the remote version falls over, which is not so simple. I am wondering what the best tool to get connected to it is. Remote debugging, i.e. the DevTools wire protocol? Or a load tester so I can knock it over locally?
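The crude load-tester option would be something like this, from a notebook cell or the browser console (placeholder URL, just a sketch of the idea):
```js
// Crude load probe: fire N concurrent requests and count the failures.
const ENDPOINT = "https://example.com/api"; // placeholder, not the real endpoint
const N = 50;
const results = await Promise.allSettled(
  Array.from({ length: N }, () => fetch(ENDPOINT).then(r => r.status))
);
const failures = results.filter(r => r.status === "rejected" || r.value >= 500).length;
console.log(`${failures}/${N} requests failed or returned 5xx`);
```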
y
Really impressive how far you’re able to take this approach @Tom Larkworthy! For me, one of the main takeaways is that it’s possible to have many aspects of programming supported by a “live executing” toolchain. I.e. you don’t need complicated build / deploy setups, and exploring how far we can take this imo could open the doors to make programming more accessible to a wider audience
t
I had a thought yesterday that I have something very close to an Electron-for-servers, which doesn't sound very flattering, but then you have to admit that Electron has radically lowered the cost of entry for desktop software.
🙂 1
k
How's it been going debugging remote issues in prod? That's the sort of thing the modern toolchain has evolved to deal with.
On a tangent, I did my first casting of the I Ching yesterday. https://bits.ashleyblewer.com/i-ching via https://github.com/ablwr/my-recurse-center-syllabus
t
Good question about production errors... and timely. I had an issue pop up. I thought the fact that I can debug and step through the server code would be enough but, obvious in hindsight, that does not help for those Heisenbugs that happen in production in unexpected situations. I had just such an issue occur.
The bug was that the server would break on malformed requests, a classic case of not checking my preconditions. It was quite easy to generate a malformed request: anyone who clicked the API link in the backend notebook source would generate one. But all I saw was the server breaking after a while, seemingly at random. It turns out that even though I had Sentry set up, it does not notify on caught exceptions, and Observable's runtime catches errors. To find the error I resorted to exporting a notebook's state using flatdata; I had to create a method for exporting a notebook's state first (https://observablehq.com/@tomlarkworthy/notebook-snapshot). Once I could see that the cell "validatedConfig" was unexpectedly throwing, it was fairly obvious what the programming mistake was, and also why Sentry was not detecting it.
So I created a generic catchAll method that creates a global notebook error handler (https://observablehq.com/@tomlarkworthy/catch-all), and from there I could upgrade my Sentry integration to capture notebook runtime exceptions too (https://observablehq.com/@endpointservices/sentry). Now those random errors in prod should be picked up by Sentry, and Sentry is awesome because it includes the context and line numbers (pictured).
So now the answer to detecting bugs in prod is to just use Sentry! What's cool about using a browser as the backend is that a single Sentry installation covers both frontend AND backend monitoring. That same Sentry setup is present and active regardless of whether you are using the whitelabel domain (https://thetarot.online), the frontend notebook (https://observablehq.com/@tomlarkworthy/tarot), the backend notebook (https://observablehq.com/@tomlarkworthy/tarot-backend), the webcode API endpoint (https://webcode.run/observablehq.com/@tomlarkworthy/tarot-backend;api), or the application embedded in Medium (https://medium.com/@tom-larkworthy/gpt-3-knows-tarot-385d935662a3). You integrate a single state-of-the-art production monitoring solution and cover all bases with one tool.
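For what it's worth, the shape of the fix reduces to a sketch like this (standard Sentry browser SDK calls; the handler, validation, and names are illustrative, not the actual backend code):
```js
import * as Sentry from "@sentry/browser";

Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" }); // placeholder DSN

// Check preconditions up front so malformed requests fail loudly and early.
function validateConfig(body) {
  if (!body || typeof body.spread !== "string") {
    const err = new Error("malformed request: missing `spread`");
    err.status = 400;
    throw err;
  }
  return body;
}

// Stand-in for the real processing pipeline.
async function doReading(config) {
  return { config, cards: [] };
}

// Express-style handler: errors are *reported* to Sentry even though they are
// caught, which is exactly the case the default integration was missing.
async function handle(req, res) {
  try {
    const config = validateConfig(req.body);
    res.json(await doReading(config));
  } catch (err) {
    Sentry.captureException(err);
    res.status(err.status || 500).json({ error: err.message });
  }
}
```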
❤️ 1
k
That was a fun read. I look forward to hearing more when you next run into a problem Sentry can't handle 🙂 This reminded me of something I wrote a couple of years ago (2019-11-19; #of-end-user-programming; deep in our Slack archives by now):
After banging my head on a problem at work all day, the answer came to me in a flash of insight on the way home. I spent all day repeatedly running experiments on my program, inserting complex sequences of breakpoints, emitting large traces, gradually refining and automating a whole complex workflow so it could be more easily repeated after making changes to my program. I had more ideas for things to try later in the night, but the insight short-circuited them.
One voice in my head (the one often active when interacting in this forum) whispers that if only I had better tools the process could have been shortened.
Another voice in my head whispers that I'm stupid for taking so long to figure out something some putative body else would find obvious. ("If deleting no-op nodes in a dependency graph causes nodes to fire before they're ready, that means some edges are being spuriously cut.") Or maybe I'm rusty, because I don't work anymore with graphs 12 years after finishing grad school.
But the dominant voice in my head is just elation, the flush of insight, of having tamed a small portion of the wilderness around me and inside my own head. And it wouldn't have happened without struggling for a while with the wilderness, no matter what tools I had. A big portion of today was spent trying to visualize graphs and finding them too large for my tools to handle. So I had to resort to progressively more and more precise tools. Text-mode scalpels over graphical assistants. And that process of going beyond what my regular tools can handle is a key characteristic of going out into the wilderness. When tools fail, the only thing left is to try something, see what happens, and think. No improvement in tools can substitute for the experience of having gone beyond your tools, over and over again.
There's a famous saying that insights come to the prepared mind. It's easy to read and watch Bret Victor and imagine that we are in the insight delivery business. But we're really in the mind preparing business.
But this isn't quite right either, because we're really never without tools. What we have is levels on levels of tools that tend to accrete upwards, with tools at lower levels getting used less and less frequently until they're forgotten and lost. Periodically ripping out swathes of tools and trying to start afresh is a great thing to try. God knows I've done my share of that 🙂 Even though you can't ever really leave the midden[1], it's clarifying to take stock of your tools and identify dead weight. And if what seemed like dead weight turns out to have use, well that's good to know as well. [1] http://akkartik.name/post/deepness
❤️ 2
t
Reliability is pretty good now and there is a fairly sane development methodology for detecting and correcting errors. However, the new issue is that performance sucks, so my next goal will be trying to discover the methodology for fixing that. I can see quite a few network stalls, so I wonder if HTTP/2 is not working properly (?).
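One cheap way to check the HTTP/2 suspicion from the browser console: the Resource Timing API reports the negotiated protocol per request (a diagnostic sketch, not a fix):
```js
// Group resource entries by negotiated protocol; "h2" means HTTP/2 was actually used.
const byProtocol = {};
for (const entry of performance.getEntriesByType("resource")) {
  const proto = entry.nextHopProtocol || "unknown";
  byProtocol[proto] = (byProtocol[proto] || 0) + 1;
}
console.table(byProtocol);
```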
❤️ 1