• Kartik Agaram

    11 months ago
    Has the utterly brutalist approach to end-user programming ever been tried? Just forcibly package up apps with all their dependencies, along with all the tools needed to edit, build and run them?
    For a while now, we've had this notion of "end-user programming" in this community: the ability to modify software while we use it.
    https://futureofcoding.org/episodes/033.html by @stevekrouse
    https://www.inkandswitch.com/end-user-programming/ by @szymon_k
    https://malleable.systems by @J. Ryan Stinnett is also relevant
    Here's a sketch for an MVP that provides this experience in the bluntest, most obvious way possible:
    • Download a framework packaged as a single file, including all necessary dependencies. You download it from an https URL, and that's it; you're good to go.
    • It only supports *nix platforms on desktop machines: Linux, BSD, maybe Windows Subsystem for Linux. Macs are explicitly out because they're basically not an open platform anymore[1], and we're going to need an open platform for the sort of thing we're planning below.
    • You can install arbitrary apps from arbitrary sources that run atop the framework. The apps are written in interpreted languages and always come with source code.
    • When you run an app, it always opens on the app itself first. This is important: there's no REPL or IDE front and center. The interaction modes are whatever the app chooses.
    • When you run an app, the framework always shows the privileges it has in some consistent part of the screen. The vocabulary and enforcement of those privileges is the major responsibility of the framework. Needing it to be always visible is why you need a desktop machine with a large screen.
    • An app can ask for privileges, but the framework gives you the ability to lie to the app: here's a simulated network environment that looks offline or has these honeypots; here's a simulated file system with almost nothing in it, or a few honeypot files. (Inspired by the Arcan project: https://www.divergent-desktop.org/blog/2020/08/10/principles-overview/#p6)
    • While running any app, the framework always provides a consistent set of primitives for interacting with that app's interpreted sources. Imagine a button in the corner that flips a HyperCard card over to open an editor on its sources, or something like that. Once you're in the editor you can modify the app, run it, get syntax errors, test failures, etc. The editor and build environment are themselves implemented in the framework; for the MVP we'll assume we don't need to support modifying the framework.
    I wonder how far Glamorous Toolkit is from this sort of experience, @Tudor Girba @Konrad Hinsen. On one hand it massively over-delivers on the editing framework. On the other, the sandboxing stuff is a new frontier with lots of open-ended questions about the best experience to avoid confusing people before they understand how things work. I'm also thinking about building on something less ambitious for an MVP, like libSDL atop femtolisp or LuaJIT. Maybe also JavaScript 😬
    [1] Like, it's great, Apple, that you eliminated vectors for malicious apps with all the restrictions on installing software. But I already had a perfectly good, healthy, functional relationship with the folks who provide gdb. When you prevent me from installing gdb, that's not cool. It's good to want to protect people from dysfunctional relationships. Requiring all relationships to go through a single point is not. /rant
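A minimal sketch of the "lie to the app" idea, in Python for concreteness (every name here is hypothetical; the post doesn't specify an implementation): the framework routes each capability request through a broker, and the user's policy can answer with the real resource, a denial, or a honeypot the app can't distinguish from the real thing, while the true policy stays visible to the user.

```python
# Hypothetical sketch: a framework-level privilege broker that can
# grant, deny, or quietly substitute a honeypot resource.

class HoneypotFS:
    """A fake file system the app can't tell apart from a real one."""
    def __init__(self, files=None):
        self.files = files or {"notes.txt": "nothing interesting here"}

    def read(self, path):
        return self.files.get(path, "")

class PrivilegeBroker:
    def __init__(self):
        self.policies = {}  # capability name -> "real" | "deny" | "honeypot"

    def set_policy(self, capability, policy):
        self.policies[capability] = policy

    def visible_privileges(self):
        # What the framework would pin to a consistent part of the screen.
        return dict(self.policies)

    def request(self, capability):
        policy = self.policies.get(capability, "deny")
        if policy == "honeypot" and capability == "filesystem":
            return HoneypotFS()
        if policy == "real":
            raise NotImplementedError("real resources out of scope for sketch")
        return None  # denied: the app sees an empty world, not an error

broker = PrivilegeBroker()
broker.set_policy("filesystem", "honeypot")
fs = broker.request("filesystem")
print(fs.read("notes.txt"))          # the app happily reads fake data
print(broker.visible_privileges())   # the user always sees the true policy
```

The key design point is that denial and deception are policy decisions made outside the app; the app's code path is identical either way.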
    38 replies
  • curious_reader

    10 months ago
    Hello everyone 👋 It's probably due to Facebook's announcement that this topic is gaining attention. But I would be curious: what are your thoughts on the "Metaverse"?

    https://www.youtube.com/watch?v=WJecbZWSbVs

    Exciting <=> frightening? Indifferent? Especially from the agency perspective we often discuss here: will this development lead to more centralization and more negative effects? Will it lead to better or worse human relationships? Thank you!
    56 replies
  • daltonb

    10 months ago
    A thought that’s been crystallizing for me is that the essence of ‘coding’ is modeling & simulation (not e.g. data and functions). These themes show up all the time in FoC contexts, but as far as I can tell they’re rarely the ROOT metaphors of a system. Of course there are plenty of examples in “real engineering”, what Alan Kay refers to as a CAD<->SIM->FAB system. Do you know of examples of ‘convivial computing’ projects where modeling and simulation are the main event, or readings on the topic? What do you think of this premise? Here’s a recent Quora answer for more context: Does Alan Kay see any new ideas in computing?
    “New” is not what I look for. “Ideas that make a qualitative difference over past techniques” are what I’d like to see.
    Years ago, I’m fairly sure I was aware of pretty much everything regarding computing that was going on in the world. Today, I’m definitely not aware of everything, so it’s reasonably likely that if there was something really great being done somewhere that I wouldn’t know about it.
    I would be most interested in learning about “qualitatively more expressive” programming that is more in line with top-level engineering practices of the CAD<->SIM->FAB systems found in serious engineering of large complex systems in the physical worlds of civil, electrical, automotive, aeronautical, biological, etc. engineering.
    In the CAD<->SIM part I’d like to see the designs understandable at the level of visualizable semantic requirements and specifications that can be automatically simulated (on supercomputers if necessary) in real-time, and then safely optimized in various ways for many targets.
    Isolating semantics in the CAD<->SIM part implies that what is represented here is a felicitous combination of “compact and understandable”.
    The FAB-part pragmatics are very interesting in their own right, and besides efficiencies, should be able to deal with enormous scaling and various kinds of latencies and errors, etc.
    The above would be the minimal visions and goals that I think systems designers within computing and software engineering should be aiming for.
    I’m not aware of something like this being worked on at present, but these days this could be just because I haven’t come across it.
    20 replies
  • Felix Kohlgrüber

    10 months ago
    Hi folks! It's been a while since my last post in this group, and it feels good to be back with some new FoC-related thoughts: I've been thinking about the tree structure of file systems recently, and it turns out it's limiting and requires workarounds for relatively common use cases. Files contain data, but don't have children. Folders have children, but can't store data themselves. What if a file system had "nodes" that could store data AND have children? I've written a blog post about this and would like to hear your thoughts. As I'm not a native English speaker and not really talented at writing, I'd be interested in feedback on the content as well as the general writing style etc. Thanks in advance, and looking forward to interesting discussions! https://fkohlgrueber.github.io/blog/tree-structure-of-file-systems/
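A tiny sketch of the idea in Python (names are hypothetical; this assumes nothing beyond the post): a "node" that, unlike a file or a folder, carries both a data payload and named children, collapsing the file/folder distinction.

```python
# Hypothetical sketch: a file-system node that stores data AND has
# children, unlike classic files (data only) and folders (children only).
class Node:
    def __init__(self, data=b""):
        self.data = data        # payload, like a file's contents
        self.children = {}      # named children, like a folder's entries

    def child(self, name):
        # Create-on-access, like `mkdir -p` for a single path segment.
        return self.children.setdefault(name, Node())

root = Node()
# "report" holds its own content...
report = root.child("report")
report.data = b"Quarterly summary"
# ...and also has children, e.g. attachments that belong to it.
report.child("figure1.png").data = b"\x89PNG..."

print(report.data.decode())
print(sorted(report.children))
```

With this shape, the common workaround of pairing `report.txt` with a sibling `report_files/` folder disappears: the data and its children live at one path.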
    5 replies
  • curious_reader

    10 months ago
    I came across this interesting tweet from @ibdknox https://twitter.com/ibdknox/status/1458099415462318080?s=20 I do hope that with a focus on a tool connected to the web3 environment, there's a chance to have a better discussion about what value actually means. It's really a complex negotiation: how transparent should a tool be? How changeable, to still gain traction? How can you as a creator help individuals, or even groups, create value? The larger PKM/personalized-information trend we're seeing is, I think, no fluke. People are trying to live the good life; it may start with having all your information, or a reasonable slice of it, at your own disposal for introspection. From these patterns things can emerge: "Oh look, I'm binge-consuming information." How do other people deal with this? Can I have a meaningful discussion about it with other people? It's a meta-reflection on many levels, and people navigate toward it in their own ways, but I believe they're really looking for "the good way of life". web3 is a huge, as in humongous, experimentation space: people thought prediction markets would work, but they didn't. They tried, are trying, and will try crazy things. It's a very interesting substrate for stories and memes. (https://blog.simondlr.com/posts/infinite-stories-in-blockchains) As such it's as much about the "tool" you will provide as it is about "the way" you choose to interact with that ecosystem (tokens, voting, proposals, DAOs, etc.). So in some sense people are indeed looking forward to having a conversation with you through the medium that is your tool and that "way" of interacting with people. Interesting times ahead for tools for conviviality 😃
    11 replies
  • Konrad Hinsen

    10 months ago
    A recurrent topic in this community is "Why do today's programming systems so strongly rely on text files, and can we do better?" This tweet made me think of a possible answer: epistemic transparency (of text) vs. epistemic opacity (of data formats requiring more specialized tools for inspection). We have so many tools for inspecting text files that it's hard to imagine someone could sneak in a tool that deliberately misrepresents the information in a file. Human-readable data encodings in text files thus provide access to a shared ground truth. The tools mediating between bits in memory and UIs (screens etc.) are so simple that they are easy to understand, verify, and validate. Even for relatively simple structured binary formats such as tar, this is no longer true. https://twitter.com/slpnix/status/1457642326956855296
    27 replies
  • Breck Yunits

    10 months ago
    Does anyone know of examples (or the name of the pattern) of markup formats where, instead of writing format directives inline <b>like this</b>, you write them out of line, like:
    text writing format directives inline like this
     bold like this
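This resembles what's sometimes called standoff (out-of-band) annotation. A minimal Python sketch of rendering such a format, under the assumption (mine, not stated in the post) that each indented child line names a directive followed by the substring it applies to:

```python
# Hypothetical sketch of an out-of-line markup renderer.
# Assumed format: a "text <content>" line, then indented child lines
# of the form " <tag> <substring>" applying <tag> to that substring.

def render(source):
    lines = source.split("\n")
    # First line: strip the "text " keyword to get the actual content.
    content = lines[0][len("text "):]
    for child in lines[1:]:
        directive = child.strip()
        if not directive:
            continue
        tag, _, target = directive.partition(" ")
        # Wrap the first occurrence of the target substring in the tag.
        content = content.replace(target, f"<{tag}>{target}</{tag}>", 1)
    return content

doc = "text writing format directives inline like this\n bold like this"
print(render(doc))
# -> writing format directives inline <bold>like this</bold>
```

A real implementation would need a disambiguation rule when the target substring occurs more than once (e.g. character offsets instead of string matching), which is exactly the trade-off standoff formats wrestle with.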
    14 replies
  • Kartik Agaram

    10 months ago
    Immortal programs vs. crash-only programs
    Immortal programs: http://steve-yegge.blogspot.com/2007/01/pinocchio-problem.html
    Crash-only programs: https://en.wikipedia.org/wiki/Crash-only_software
    In brief, immortal programs try to never, ever reboot. Crash-only programs are designed to always be able to recover gracefully from a reboot. There's a fundamental tension here, and I'm starting to realize I'm very definitely on one side of it. I like a neat desk, and compulsively close things (terminals, browser tabs, browser sessions) when I'm done with them. I prefer text editors to IDEs, vim to emacs, unix as my IDE rather than SLIME. I'd always thought of these as subjective opinions that were just down to my personality and past experience. But, upon reflection, I want to make a stronger case that "my side" is superior.
    1. Focusing on recovering from reboots makes you better at simulating immortality. Restarts can in principle become instantaneous. Focusing on never rebooting makes you worse at recovering from crashes.
    2. It's easy for immortal programs to end up in situations that are difficult to reproduce. I spent some time recently programming with @Tudor Girba's Glamorous Toolkit. Modern Smalltalk uncomfortably straddles the image and git-repo worlds. The way you work is to make changes to your running image until you have something you like, then go back and package up a slice of your image into a git repository to publish. If you make mistakes, others can have trouble reproducing the behavior you created in your image. Testing whether you did it right necessarily requires rebooting the image.
    Putting these reasons together, immortal systems are more forbidding to newcomers. Crashing becomes a traumatic event, one newcomers are not used to, something beginner tutorials don't cover. When things don't work, it's more challenging to ask for help. Creating and sharing reproducible test cases requires crash-recovery skills.
    Rereading the Pinocchio post now, I notice that it actually states no concrete benefits for long-lived programs; all it offers are (compelling) analogies. A counter-analogy: an immortal program is like a spaceship. Once launched, you're in a little bubble, stuck with whoever you happened to start out with. A crash-only program is like a little stone rolling down a hillside, gathering other stones until it turns into an avalanche. As I said above, I'm biased because of my experiences. I'm curious to hear from others with more experience of immortal programs. Am I understating the benefits, overstating the drawbacks?
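A minimal Python sketch of the crash-only discipline (my illustration, not from either linked post): startup and crash recovery are the same code path, so restarts are cheap and reproducible by construction.

```python
# Hypothetical sketch of crash-only style: every state change is
# persisted immediately, so a "restart" is just re-reading the file.
import json
import os
import tempfile

class CrashOnlyCounter:
    """A counter whose full state survives an abrupt exit."""
    def __init__(self, path):
        self.path = path
        # Recovery IS the normal startup path: no special crash handling.
        if os.path.exists(path):
            with open(path) as f:
                self.value = json.load(f)["value"]
        else:
            self.value = 0

    def increment(self):
        self.value += 1
        # Persist before acknowledging, so a crash loses nothing.
        with open(self.path, "w") as f:
            json.dump({"value": self.value}, f)

path = os.path.join(tempfile.mkdtemp(), "state.json")
a = CrashOnlyCounter(path)
a.increment()
a.increment()
del a                       # simulate an abrupt "crash"
b = CrashOnlyCounter(path)  # restart == recovery
print(b.value)              # -> 2
```

Contrast with the image model: here the on-disk state is the single source of truth, so any copy of the file reproduces the running program exactly.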
    19 replies
  • curious_reader

    10 months ago
    Hello everyone! I'm curious, in the sense of "thinking together", about this happening in the Rust community: https://twitter.com/etorreborre/status/1463422189080915969?s=20 What are your thoughts on it? Why is it happening within the Rust community, and more generally: how much are programming languages, as vehicles of culture, influenced by things like the culture wars? Looking forward to your thoughts.
    2 replies