# thinking-together
l
reflecting on jam oriented programming... i can't believe i spent so many years being dictator of my projects. what a waste. i now do the jamming approach... it means i MUST accept all changes, even if i disagree with them. if i care enough, i can change them or revert them, but that takes effort, so they usually stay. and for ease, i make everyone admin of my own project. if you submit a pull request or an issue, i just instantly merge and make you admin. then i don't need to be a blocker in future: you can commit straight to main. it means the project becomes ten times richer because it's a team effort with everyone pulling it in different directions. nothing has to be perfect, and it gets done FAST. it's more open than open source. it's jam source! each day it becomes more hilarious/tragic to me how most HCI and "future of coding" developers keep things so closed off and secret, now that I've experienced this better way
c
This reminds me of an article on how the default of the web is implicit feudalism. Feels similar to how we make software. https://www.colorado.edu/lab/medlab/2021/01/08/implicit-feudalism-why-online-communities-still-havent-caught-my-mothers-garden-club
l
right! thanks for the link. I've been thinking how github repos default to dictatorship, and how unhealthy that is for the ecosystem
there's some more info about jam oriented programming here: https://www.pastagang.cc/paper/ it's a work in progress paper intended to be submitted in about a month. the paper is getting jam written by tens of people
c
Curious what things you see github doing?
Does the main branch have protection on by default? I don't think so, although GitHub certainly nudges you towards it with the notification to set branch protections
l
it's not my paper
no it's nothing specific to GitHub, rather that the whole open source approach of benevolent dictator is shit
when you make a repo, only you can edit it. pull requests need to be reviewed by the glorious leader
forks are second class
c
Pastagang’s paper* 😄
l
to be clear i personally wrote like less than 5% of the content of that paper
it's not just a word game
c
Ya my bad
m
Rebellion is fairly cheap in the world of open-source though. A dictator can usually be toppled with a single fork.
l
not cheap enough!!!!
k
One problem I see with forks is that they are asymmetric. There's always "the original" and "the fork". GitHub makes it very obvious which is which. Something I'd like to try is a pool of repos that are equals. Everyone can see everyone else's changes, adopt some and reject others, but without any notion of hierarchy or convergence to a consensus version. Git allows this, but today's forges don't. I am not aware either of any tool support for working in a pool of equals.
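Git itself already supports this shape, even if today's forges don't surface it. A minimal sketch of a pool of equals, where local paths stand in for what would really be peers' repo URLs (the names `alice` and `bob` are made up for illustration):

```shell
# A "pool of equals": every participant is just another remote.
# Local paths stand in for peers' repo URLs.
set -e
pool=$(mktemp -d)
cd "$pool"

# Two peers, each holding a full copy -- neither one is "the original".
git init -q --bare alice.git
git init -q --bare bob.git

# My copy tracks both peers symmetrically.
git init -q mine
cd mine
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "my work"
git remote add alice ../alice.git
git remote add bob ../bob.git
git push -q alice HEAD:main
git push -q bob HEAD:main

# See everyone's changes, adopt some, ignore others -- no hierarchy.
git fetch -q --all
git branch -r   # lists alice/main and bob/main as equals
```

Nothing here distinguishes an "upstream" from a "fork"; each person simply keeps as many symmetric remotes as they have peers.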
l
having one big jam repo is great honestly. being forced to accept every change (even the ones you want to reject) is important
k
Story time: back in 2009, some people on Hacker News got together to create a community fork of Paul Graham's Arc Lisp that anyone could modify. We called it the Arc Wiki and I suggested the name anarki. It is still around. Back then Github had a setting to let you make a repo public so anyone could push to it. It disappeared around 10 years ago, in a move to be more enterprise friendly that nobody remembers anymore, and that seems minuscule compared to everything they've done since. When that checkbox disappeared we adapted by saying anyone can get push permissions. Just come ask.

I was obsessed with the Arc Forum to an extent that I still find my fingers typing out the url when I'm out walking. My fingers dream that it is 2014 and they're at a keyboard. The forum never had more than a half dozen people but it was surprisingly active for its size. The forum was the jam session; anarki was just a tool.

On one level my story since has been about trying to find ways to make the jamming more acute, all while finding the jamming gets less acute. I sense that there are many ways to create jamming scenes. I've seen projects like Tiddlywiki and oh-my-zsh start out extremely permissive in letting people add changes to them. Over time they slow down as it gets harder and harder to make changes while maintaining any sort of sense of stability or continuity. So it depends on what different people want.

These days I think it's all an experience. Random walks and sprints to a destination all have their place, and they mix together in the world anyway if you think of it all as one giant repo under the sky. Today I maintain 50 or so forks containing various apps, but none of the codeforges can tell that they're forks of each other or determine which one is the root, because I clone and reupload each fork from scratch. That information is in the git logs. They're true git forks arranged in a network rather than github forks arranged in a tree.
The jamming still doesn't happen, though. The future seldom cooperates. Still, I feel I'm going somewhere interesting, and so is everyone else. Prosperity and computers have radically expanded the space of things humans can do. As we expand into this space the default density of people drops. Each of us houses divergent urges to explore alongside convergent urges to explore together.
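A sketch of that clone-and-reupload move, with local paths standing in for forge URLs; the forge-level fork relationship disappears, but the shared commits remain:

```shell
# Re-homing a fork "from scratch": no forge metadata survives,
# but the shared history is still right there in the git log.
# Local paths stand in for forge URLs.
set -e
work=$(mktemp -d)
cd "$work"

# The project this fork descends from.
git init -q root
git -C root -c user.name=a -c user.email=a@example.com \
    commit -q --allow-empty -m "shared history"

# A full bare copy, reuploaded as an apparently unrelated repo.
git clone -q --bare root new-home.git

# Both repos now hold identical commits: true git forks in a network,
# with no tree-shaped fork relationship for a forge to display.
git -C new-home.git log --oneline
```

The forge sees two unrelated repos, but `git log` in either one tells the whole shared story.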
l
Over time they slow down as it gets harder and harder to make changes while maintaining any sort of sense of stability or continuity.
i guess this is a big part of it. letting go of stability and continuity (and control) is an important part of jam oriented programming
k
Yeah. It's not just stability and continuity for the author/dictator. Every community has a tendency to prioritize the needs of people who show up early on. The minority in the present captures the infinite future.
l
there's certainly that risk for sure. much less risk than not taking the jamming approach i think
m
Reminds me of the do-ocracy we practiced in our shared flat in my 20s: anybody is allowed to change anything (in the communal rooms). Everything is shared (except toothbrushes) BUT no hard feelings when somebody just undoes it. To make this work we had semi-regular meetings where we talked honestly, little playfights in the hallway to resolve tensions, and regular shared meals. Not sure how that can be transferred to the net. I think this physical presence/contact was crucial.
l
this kind of thing has been happening at https://pastagang.cc for quite a while now and very recently https://pondiverse.com
in-person and remote
m
Yes. I discovered the pastagang before. Watched you jam in awe but did not dare to touch the contraption. Too much stuck in old patterns I guess.
k
It's interesting to see that all of you mostly focus on the permissions aspect. For me that's rather secondary. With git, everyone has a local copy, so nobody can do serious damage to anyone else. So let everyone have whatever permissions on a shared repo, that's an administrative detail. What I miss for peer-to-peer collaboration is discovery and exploration tools. I'd like to be able to check easily what everybody has been working on over the last week. Or find the branches across all repos that have a specific version of some piece of code. And I am not thinking only of jamming, but also of long-term, slow, asynchronous collaboration. Code that a hundred people use, but which only sees two or three changes per year.
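For the "what has everybody been working on over the last week" question, plain git can already get part of the way there once peers are ordinary remotes. A sketch, with a local stand-in for another person's repo URL (the peer name `ann` is invented for illustration):

```shell
# Discovery sketch: ask a pool of remotes what changed recently.
# The "peer" here is a local stand-in for someone else's repo URL.
set -e
d=$(mktemp -d)
cd "$d"

git init -q peer
git -C peer -c user.name=ann -c user.email=ann@example.com \
    commit -q --allow-empty -m "peer work"

git init -q mine
cd mine
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "my work"
git remote add peer ../peer

# Pull in everyone's refs, then list the week's activity across the pool.
git fetch -q --all
git log --remotes --since="1 week ago" --pretty="%an: %s"

# Related: which branches (local or remote) contain a given commit?
#   git branch -a --contains <commit>
```

This only covers one repo's remotes; doing it across a whole pool of repos is exactly the tooling gap being described.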
l
perhaps it seems only secondary to you but it isn't at all! the whole point is that you want to subject yourself to the danger of other people's changes
j
I guess this is the wikipedia approach. They have to have moderation tools at that scale, but the model is still allow-first and worry-later.
For live-coding, I wonder if different languages would allow this to scale better. Eg if you have 1000s of people editing an imperative program, at any given point it probably doesn't run at all. But in a dataflow program, if part of the graph is broken then that part of the graph just doesn't put out values any more, but everyone else is still getting feedback on what they are doing.
l
i'm hesitant to suggest that different or new languages are needed because i want people to know that this approach is possible to do already with our existing tools - especially on the live coding front. see http://www.youtube.com/watch?v=HCcSHMu0gzg
and yes, part of the "agreement" of a jam is that: yes you can make any edit. this also means that anyone can reverse your edit. there are other strategies you can take to scale and influence people's behaviour too, other than central control
k
@Lu Wilson So you don't keep a local repo at all? The shared one on a forge is the only one you have? Or is it simply a social convention that the current state of the official repo is the only one from which anyone may move on?
s
I've always done that with my discord servers - everyone is an admin, anyone can change anything, disputes are resolved with pleasant conversation. Moving that policy from a small group of friends I know personally to anyone on the internet seems scary to me but I might just not be letting go enough. Cool to think about
l
@Konrad Hinsen i guess i do keep a local repo but it's often out of date because i often make edits via the forge ui itself. and it's not really the point. anyone can do anything. who knows what will happen when it inevitably all gets deleted
@Spencer Fleming there's certainly a lot you sacrifice by doing it. there's a lot that's off limits, like auth, because it would make you too vulnerable. the community does a lot to try to keep itself "healthy". and people sometimes talk about the eventual inevitability of things getting mass deleted. it was covered on this episode of the pastagang podcast

https://www.youtube.com/watch?v=x7Z6Uo4torg

the recurring "let code die" mantra is part of that.
k
Whether you look at the short history of social networks, the longer history of human societies, or the even much longer history of biological evolution, you always end up concluding that open communities (i.e. those that let anyone join by default) need an immune system to defend against hostile takeovers. So when I see these stories of "anyone can do anything", I wonder if they are about open communities that have been lucky so far, or about open communities that do have immune systems, even though they may not be explicit. Question to @Lu Wilson, @Spencer Fleming and anyone else involved in such communities: Do you have an idea what would happen if, say, nazis move in? Has something like that happened already?
l
making sure we keep bigots out is a fairly regular conversation topic within the live coding world and places like pastagang. there are lots of different strategies you can take, which look very different to the ones you adopt in a more traditional hierarchical organisation
there are too many to list here really, but they include embedding activism deep into the community, making it clear who is (and isn't) welcome. and the "anyone can do anything" thing is actually helpful for this. i don't expect many people to understand, i would instead encourage you to try participating and see for yourself https://pastagang.cc
to answer specifically, the nazis would be kicked out by the overwhelmingly larger number of non-nazis! if that fails, then we would let pastagang die and start again with something different. ("let code die"). there have been some bad actors, and they have never lasted long. sometimes they've been frustrated by how unbothered people are when they clean things up and/or invite the bad actor to join in. someone once tried to make bad/unpleasant sounds happen in the public jam room, and everyone flipped it on them by incorporating their sounds and contributions instead. in terms of content that NEEDS to be removed, the "anyone can do anything" thing means that moderation happens MUCH faster than traditional organisations.
there are certain ways in which we can alter the "physics" of collaboration. eg: deleting is much more powerful than adding within the nudel tool. undo is blocked. pasting is blocked. this means it's much easier to clean up trouble than create trouble
s
This thread is making me reconsider my current project some. The current idea is a collaborative rube goldberg machine where anyone can add on to the end of it and keep making it bigger. Got a cool infinite grid design based on https://cs.wellesley.edu/~pmwh/labyrinfinite/ The plan so far was to divide the grid into squares and anyone can upload a PNG to have it added in, after a quick check that it's not trolling / hate. Once added it's permanently there.
Trying to imagine myself as an uploader: there is something nice and comforting about being able to have a section of the world that you added and can show off to friends later without worrying. And you'd be able to have your name attached to it and give the 'tile' its own title. The inspiration was that of a gallery, something like https://www.collectionrert.org/ where anyone can add a work to the collection but your work itself is not open for modification
But at the same time, it would be cool to have someone add something to what you've made, as a nice feeling that your work was something inspiring for someone else. And the UI for just drawing straight on the live work would be really slick.
Towards jam coding specifically: there are a number of times that I've come across a project with a trivial mistake somewhere, a typo in a readme or out of date documentation or other very small very obviously correct change, and I've wished it was possible to just make that change there and then so that everyone can benefit. But, of course, github does not make that flow easy
Github with a wikipedia style flow would be a very neat project
I wonder if there'd be some way to add a somewhat more trusted group of testers that can flag specific revisions as being particularly workable so that this idea could mesh easier with Library development as compared to App development or Live Performance
& last little thought, but now that I live next to a light rail station, I've often wished that they had brooms/trash cans so I can tidy up some while waiting. I imagine there's a good reason there isn't any, since having no trash cans at all seems too obvious to be an oversight.
k
Thanks for the details @Lu Wilson! Looks like you have a nice immune system. Much like a bacterial colony, or more generally swarms of roughly-equal individuals. An anarchist's dream!
b
https://openopensource.github.io/ was a handy summary to link to for "we're liberal with push access here", although that site's repo got archived. It still recommends PRs for pre-merge review of non-trivial/breaking changes. Pieter Hintjens (RIP) was outspoken on "Optimistic Merging" (great name imho), e.g. in his blog and book excerpt, which is more radical and closer to what Lu describes: merging a bad change first and iterating later is a feature, not a bug.
s
Cool article re: optimistic merging
I don't quite get the explanation why a toxic patch can be forced through under PM but not under OM though
the more I think about it the more I like it though. Instead of keeping the main branch perfectly clean, or worse the entire repo, instead it can be signing off on specific tags, or moments in time only
k
One potential issue I see with universal push access, as opposed to optimistic merging, is bad actors using automation to do harm. If you are really determined to kill a project, you could create a new account on GitHub every day and use it to attack the repo. Has this been observed or discussed?
k
@Konrad Hinsen Reason doesn't help here. Yes, bad things are possible. One can never anticipate everything. What you need is faith; before bad things happen there is more room for good things to happen. If bad things happen we just pick ourselves up and build it all up again somewhere else.
k
One can never anticipate everything, but every living organism needs to anticipate as much as possible of what can threaten its survival. So...
If bad things happen we just pick ourselves up and build it all up again somewhere else.
... this means that the repo is not important for survival (of the team). No obligations attached, no responsibilities. That's a rare privilege.
s
wikipedia pulls it off, though they have more nuance than always merging immediately (some articles are locked etc). it's still their default though
the strategy makes sense to me for a repo that's not immediately getting deployed somewhere, such that whoever is responsible has a chance to look things over
I also think things like CHERI and fine-grained sandboxing can improve this. it's less of a risk for end-user-app type things if the possible fallout is contained
hopefully reverting a malicious change is as easy as denying a malicious merge request, so the only difference is that the repo will temporarily be in a bad state
but in general I really think this is a good strategy if you want people to contribute to your project. I edit Wikipedia for typos etc way more than github docs because it's so much easier
k
@Konrad Hinsen:
this means that the repo is not important for survival (of the team). No obligations attached, no responsibilities.
Yes, my sense was that that's already implied by the fact that incompatibility is not checked for or protected against. Malice as a difference of degree in incompatibility?!
k
@Spencer Fleming Wikipedia is indeed an interesting case because it has evolved, over many years, a very effective immune system. Neither too weak nor too constraining. The motivation for the questions I have been asking in this thread is finding other cases of effective immune systems for large-scale collaboration.
@Kartik Agaram I guess you can define malice as severe incompatibility, though I am not sure it's a very helpful definition. Incompatibility comes at different levels of severity (easy vs. difficult to fix or work around), but also at different levels of social organization (incompatible with the latest version of gcc vs. incompatible with the values of the project founders, for example).
l
Catching up with everything... The rules of "open open source" seem pretty strict and fight against some of the benefits of jam coding. we tend to encourage people to test on the main branch and in production. it brings you closer to your fellow users. you shouldn't go and hide away in your rabbit hole.
s
@Konrad Hinsen I think another effective strategy is obscurity / small scale
trolls don't tend to show up while you still have a small nice community
I think its worth considering using that advantage while you have it
l
still catching up... I very much agree with the observations on OM vs PM. @Spencer Fleming the argument is that OM nips toxicity in the bud, while PM gives it lots of attention. I've certainly seen this myself, with bad actors getting bored very quickly when people just delete or revert (or embrace - their least favourite outcome) their code
k
@Spencer Fleming It works for a while. In the long run, someone will find you. Ask the dodos for their experience with this approach (if you can find one).
l
lmao
i need to catch up with all these messages at some point
don't be so afraid of death!
k
In terms of biological analogies, this is colonies of single-celled organisms (OM) vs. multi-cellular organisms (PM). Both forms of organization have been around for a long time. Both have their ecological niches. Different strengths, different weaknesses.
l
One potential issue I see with universal push access, as opposed to optimistic merging, is bad actors using automation to do harm. If you are really determined to kill a project, you could create a new account on GitHub every day and use it to attack the repo. Has this been observed or discussed?
not observed but discussed. people in these jamming projects prepare themselves for this kind of possibility. the ability to 'let a project die' gracefully is embedded into the culture. "let code die" is thrown around non-stop. it's the thing i see newcomers struggle with most. if the project gets ruined that's okay. there's an acknowledgement that the value created is not in the code we write but in the interactions we have and the things we learn as a collective. there will come a time where a project needs to die. a bad actor might speed up that process! it'll make space, and free up energy for something new! this is a healthy part of the cycle
@Kartik Agaram is right: bad things might happen, but there is a much higher chance of bad things happening in a traditionally hierarchical project
.. this means that the repo is not important for survival (of the team). No obligations attached, no responsibilities. That's a rare privilege.
that's correct. this is the whole point and approach of the "slippy mindset" as a potential solution to the "tadi web" problem. if you can allow yourself to be in a situation where death of code is okay, then you unlock SO MANY possibilities (including jam oriented programming)
the strategy makes sense for me for a repo thats not immediately getting deployed somewhere, such that whoever is responsible has a chance to look things over
what's the point in a piece of jam code that doesn't get deployed? the whole point is that it gets instantly deployed. that's what makes it a jam! there are certain precautions you have to take like: make sure you unimplement any auth systems that get added. it can't be relied on because it would be easy for someone to steal your cookies or logins or whatever. "jamming" is an infectious activity in that sense. it encourages you to create other jamming tools
regarding malice: Sometimes it is very hard to determine what is "malice" and what is "someone getting confused" so it's best to take on board any change even if it seems bad! why not take the jam in a malicious direction for a bit. let the tool die! let it crash! why not??
i do think that obscurity helps. it's not watertight but you're in the game of numbers and chances, so of course it contributes. of course it doesn't last forever. nothing does. not me or you or any of us or any of the code we write. everyone and everything will die. until then, it's good to do things together!!!!!
s
re: not deployed, I was thinking in the context of an app that's running on your computer or with access to stuff that ought to stay private. For something like a public art piece or properly sandboxed tool then absolutely!
this is why I'm so excited abt stuff like cheri, genode, sandstorm, because I'd love to be able to run any software and be worried only about it being buggy and weird, and not about it messing with things outside of its scope
if there's no sort of reward in terms of being able to mine bitcoins, steal juicy data, be part of a botnet, or disrupt specific groups, then i don't think anyone has a reason to be malicious. for my upcoming project it's just a nice website. I'll serve it via an iframe or something and feel very safe that the worst that could happen is easily reverted vandalism and the best that could happen is beautiful and spontaneous ideas from people that would otherwise never have bothered
also, inspired by this thread, I'm looking into how hard it would be to build wiki-style live code updates into the website, such that if anyone changes it, it changes for everyone immediately, without needing to download the code, edit offline, and push
k
Thanks a lot @Lu Wilson! Code as an ephemeral cultural artifact, like street art, is something we (i.e. society) need a lot more of. If only to develop the elusive "computational literacy" that people have been talking about for ages but always tried to achieve by design and top-down deployment, with roughly zero effect. But beyond any considerations of literacy, it's simply a great way to develop creativity. I am looking at this from a very distant point in the code universe. My work is scientific research on biomolecules, and that's what I write code for. This is work that others will build on years from now, and it might end up becoming relevant for developing medical treatments. Which explains why the value system of research is very different from jamming: everything needs to be traceable, inspectable, verifiable. And yet, science needs the unbounded creativity of jamming as well, and I see it disappearing due to the growing complexity of software stacks and the (justified!) bureaucracy generated by traceability. Which is why I am asking all these questions: to figure out how jamming and science can be made to coexist.
@Spencer Fleming
reward in terms of being able to mine bitcoins, steal juicy data, be part of a botnet, or disrupt specific groups ...
The specific group being disrupted could be your jamming group. Science is currently experiencing such attacks (and not just in the US, even if elsewhere the attacks proceed more slowly and less openly), which is why I am quite sensitive to this aspect.
s
Yeah totally, I'm trying to work together my thoughts on this too. For example, for a lot of tools I like leaving updates completely off so that things only change when I want them to. Big frustration re: windows. Jam coding that updates everything I use always would be like living in a big house with no room to myself
I think there's definitely something to the SQLite / Dwarf Fortress model of accepting no contributions so everything you're working on fits in your head. Same for the "make what you need / use what you make" style indieweb projects where its for yourself and given to others as an added bonus and contributions accepted based on usefulness to you
But if the point is to form a community all improving the code I think there's something to the Optimistic Merging style in making it as easy as possible
l
will catch up on this thread in time but for now check out this blog post that someone wrote about the jam style of coding they've experienced in the pondiverse https://cthulahoops.org/collapsing-the-pondiverse they published it today
k
@Lu Wilson This remark from it is 💡
One intriguing idea around the Pondiverse is that things should be centred on types of creations not tools. It shouldn't just be about showing off the tool creators work, it should be about making tools that can be used to create.
One question I've aimlessly had without resolution is how to add native apps to Pondiverse..
k
how to add native apps to Pondiverse
Isn't that just the old question of interoperability between Web-based and native code? Or is there some additional obstacle?
l
there's no reason you can't make a native app. take a look at the endpoints that the pondiverse button sends things to, and send to them. https://github.com/TodePond/Pondiverse/blob/main/pondiverse.js#L8
j
Part of the advantage that wikipedia has is that it's not executable. If someone vandalizes one page, the whole website doesn't crash or get stuck in an infinite loop. Vandalization doesn't scale well. But the code running wikipedia is managed on gerrit with auth and hierarchy. If it just auto-deployed anonymous pull requests, wikipedia would have a lot more crypto miners and porn ads, and someone would delete the entire database.
That's why I was thinking about programming languages/models/architectures/whatever. We can jam with dozens of people on creative projects. But we can't jam with the internet on mediawiki. The more machines and personal data the code has access to, the higher the reward for abuse. We don't have a model for jamming yet that can tolerate even very small percentages of serious abuse (ie malware, not trolling).
Browsers are probably the closest we've gotten - I don't feel much trepidation opening a random website and letting it run javascript on my computer full of credit card numbers. But any website that allows user-contributed javascript to run does end up having an abuse team, because there is serious money to be made by running abusive javascript on enough browsers.
This has very "governing the commons" vibes.
1. Clearly defined boundaries: Individuals or households who have rights to withdraw resource units from the CPR must be clearly defined, as must the boundaries of the CPR itself.
2. Congruence between appropriation and provision rules and local conditions: Appropriation rules restricting time, place, technology, and/or quantity of resource units are related to local labor, material, and/or money.
3. Collective-choice arrangements: Most individuals affected by the operational rules can participate in modifying the operational rules.
4. Monitoring: Monitors, who actively audit CPR conditions and appropriator behavior, are accountable to the appropriators or are the appropriators.
5. Graduated sanctions: Appropriators who violate operational rules are likely to be assessed graduated sanctions (depending on the seriousness and context of the offense) by other appropriators, by officials accountable to these appropriators, or by both.
6. Conflict-resolution mechanisms: Appropriators and their officials have rapid access to low-cost local arenas to resolve conflicts among appropriators or between appropriators and officials.
7. Minimal recognition of rights to organize: The rights of appropriators to devise their own institutions are not challenged by external governmental authorities.
l
catching up on these messages gradually... @Konrad Hinsen you see these two sides as conflicting, or pulling in different directions, but they are complementary. within these jamming communities, "recording" is a huge part of the ethos. things are getting recorded and documented much more than before because (a) it is massively encouraged and (b) there are tons more people. people write and create stuff around the jam in their own personal spaces. the idea of a "snapshot" is especially helpful here. however that's all it is: a dead snapshot, not the work itself. I genuinely believe that science would be much more resilient to such attacks if "jamming" was a greater part of it, because it would have fewer hierarchical weak spots that can be pulled out, and there would be so many more people who are mobilised to defend and regroup around those attacks. this is very hypothetical because science (and academia) seem so unjammy. why can't anyone in the world jump into your current notes/document and leave a comment or make a creation? why can't anyone in the world see what you're doing at any time? i find this plainly ridiculous
@Spencer Fleming I suppose my take is the strong view: That we shouldn't do things privately. I think it's too costly to do that, despite the advantages to some. Not for fun, not for work, not for science, not for art, not for anything
@jamii you've done the "... doesn't scale" meme. The whole point is that the jam'd code executes on our machines. I do it all the time. I am very vulnerable to an attack. The idea is not to avoid attacks, it is to be able to recover well from attacks. I think the scalability of jamming has a drastically higher ceiling than traditional top down control and we are all currently discovering what that looks like. i invite you to join in! I think people may believe me more if we leave the browser and get it working locally instead
k
@Lu Wilson I agree that science would benefit from adopting jamming practices into its culture. The closest I know of is Open Notebook Science, which is pretty exotic. The main obstacle to working in the open is the fear of being scooped, which is related to the tradition of attributing a discovery to whoever talks about it first. That doesn't make sense any more in the Internet age (you could perfectly well be the first to discuss a topic in a jam session, and have ample evidence for that), but habits change slowly.
And I fully agree that jamming can scale better than top-down. Not necessarily better as in "more results", but better as in "more robust results". More variants explored, compared, etc. Better exploration of the problem and solution space.
j
> you've done the "... doesn't scale" meme.

I'm trying to feel out what parameters of a project create physics such that recovery is cheaper than attacking and there isn't enough reward for temporary attacks. It's clear that wikipedia content manages that, but not at all clear that the (much smaller scale) underlying infrastructure (mediawiki, databases, deployment keys) does. I think that's interesting and worth teasing apart, rather than dismissing it as a meme.

> I think the scalability of jamming has a drastically higher ceiling than traditional top down control

Do you think that mediawiki would work better as a jam? Or if not, why not? Does the number of developers matter, or the amount of state, or the size of the audience (making attacks more lucrative), or the fact that the code can spend money? What kinds of problems does jamming work for, and when does recovery from attacks become too painful? These seem to me like useful, obvious questions to ask, not just memes!
This is also why I brought up governing the commons, which teases out under what conditions community governance of common resources has worked in practice. What conditions would be on an analogous list for jamming?
Also interesting is the idea of strong- vs weak-link problems. Creative projects and scientific experiments are strong-link - you only care about the good results and you can throw the rest away. But an infrastructure software project like mediawiki is weak-link - if any of the code is broken then the whole thing might be broken. Maybe jamming works better for strong-link problems?
k
Good questions @jamii! One more criterion I see is formal vs. informal. Wikipedia is an informal collection, made up of mostly informal information in the form of text. Informal knowledge can tolerate ambiguities and contradictions. Formal knowledge, as in code, cannot. Yet another one is pace layers. Both Wikipedia and MediaWiki are in the infrastructure layer. The pondiverse is in the fashion layer. The most-discussed examples of commons are infrastructure, but there can be fast-paced commons as well.
l
that pace layers article seems like nonsense to me, especially how it talks about art.
"... doesn't scale" is maybe more "... doesn't scale in the way i expect things to scale". being able to recover from 'death' means you need to die quickly and gracefully (no boiling frogs) ("let code die"), and it helps to be overly redundant. in the pastagang group, there are many alternative tools that allow you to keep going if another one breaks. the most popular tool, nudel, even intentionally disables itself in various ways on a regular interval to try to encourage this.

"should mediawiki be a jam" is the wrong question. you need to zoom out a bit more. mediawiki can be part of a bigger jam. the difficulty/delicacy of its situation is that too much relies on one platform being relied upon. it's better to form a network so that data is duplicated and deaths can be recovered from. it's hard to scale one thing to massive size! it gets dangerous. if you can somehow get lots of smaller things to work together in a loose way, it can scale much bigger in a healthier way
k
@Lu Wilson The idea behind pace layers seems valid and important to me. The article reflects the strengths and weaknesses of its author. I agree that art is not among his strengths.
> too much relies on one platform being relied upon
Which is an expression of modernism: efficiency, bet on a single optimized resource, etc. The alternative you propose is robustness through redundancy: nothing is indispensable because other parts can take over its roles. Much like in a natural ecosystem, where the death of an organism is a banal event and even the extinction of a species leads to no more than a minor reconfiguration. For software, that means coordination through protocols rather than through common dependencies.
However, it is important to realize that if you start out building a world-wide knowledge network in terms of robust and redundant ecosystems, what you will get is not something like Wikipedia. It will be something a lot messier, with different copies of same-titled pages coexisting and possibly contradicting each other. The very idea of an encyclopedia is an expression of modernism.
m
Randomly throwing this in because you are talking about death, code dying, modernity and messiness: I'm reading "Hospicing Modernity" by Vanessa Machado de Oliveira right now. It is awesome so far. Maybe we should not let the code die. Maybe we need to hospice it. This is, to me, a much more gentle approach. https://decolonialfutures.net/hospicingmodernity/
j
> the death of an organism is a banal event
I imagine the organism feels differently.
"More death" is a directional argument. What's the optimal amount of death? When do we stop letting things die because they are dying too fast?
I'm thinking of this in analogy to "the optimal amount of fraud is not zero", which is an important and unintuitive point. But it's not quite the same as "just let there be fraud", because the level of fraud can also be too high and become a heavy tax on experimentation.
The accurate version of the argument has to talk about the cost of combating fraud vs the cost of allowing fraud. I'm understanding @Lu Wilson as saying that we spend too much on combating code drift/decay and that the optimal amount would be less, but I wonder how much less.
Certainly in some of the domains where I work, I feel like the cost is already too high and I wish there was less churn, not more. It's quite a tax to constantly have to switch dependencies because they got abandoned. But probably the higher up the stack you go, the fewer downstream dependencies you have and the fewer costs you externalize onto other people by churning.
I'm not a big fan yet of coding with llms, but I do think it'll be interesting if they reduce the cost of coding small projects enough to change the balance. Why bother maintaining a project if it's cheaper to just rewrite it from scratch next time you need it?
l
seeing as everyone's getting into the "let code die" thing, you might want to do some reading on it before digging in: https://www.pastagang.cc/blog/let-code-die/
j
Feels a little like https://www.hytradboi.com/2025/03580e19-4646-4fba-91c3-17eaba6935b0-throwing-it-all-away---how-extreme-rewriting-changed-the-way-i-build-databases, although the latter still has the intention of building something permanent from the stream of attempts.
k
@jamii Biologists have thought about questions like the optimal amount of death. I have only vague memories of seeing work on this. But I am pretty sure that the answer is contextual. There is no universally optimal amount of death, but one for each species in a specific ecosystem. Moreover, there is probably not one optimal value, but a wide range of good values, with soft lower and upper bounds. The more interesting question for me is why there needs to be death at all. The answer is adaptability to changing environmental conditions. Which explains why the best amount of death depends on the rate of change in the environment.

My reading of @Lu Wilson's "let code die" is "don't focus on the code, focus on the role of the code in its socio-technical environment". I also implicitly read it as "we need to let code die more than we do today", rather than as an absolutist statement "all code everywhere should die quickly". With these interpretations, I fully agree with the message. We have way too many dependencies today on code that we cannot adapt to changing needs. Our socio-technical environment should better allow for updating computational behavior without breaking everything.
l
just to be clear, I didn't write that "let code die" blog post