Ivan Reese
Eli Mellen
07/03/2023, 4:50 PM
Jimmy Miller
Eli Mellen
07/03/2023, 4:53 PM
Jason Morris
07/03/2023, 9:45 PM
Jimmy Miller
Jason Morris
07/04/2023, 12:56 AM
Eli Mellen
07/04/2023, 2:25 AM
Jason Morris
07/04/2023, 8:08 AM
Lu Wilson
07/04/2023, 4:16 PM
> But you can just stop using them
I don't know, I feel pretty stuck with the languages I use sometimes - for various reasons :)
Jason Morris
07/04/2023, 6:28 PM
David Alan Hjelle
07/05/2023, 1:29 PM
Jimmy Miller
> Show me a situation where someone is trying to use software to collect and hold power over others, and I'll show you someone who is using a combination of software and law. A license, a contract, a patent, or something.
As software engineers, we often make systems that rely on the backdrop of the law to enforce things. But the way in which we enforce our side of those terms can be incredibly legalistic and harmful to users. For example, recently a number of YouTubers, big and small, have had their ability to monetize completely removed because of "suspicious traffic". Basically, they have been accused of ad fraud. From what I've seen, even the most connected have had a difficult time solving this issue. Here I see a classic case of the kinds of confusions we as software engineers make. What we are interested in is ad fraud (in this case). We want to stop users who have created fake bot traffic from benefiting from it. But what we actually have access to in our software systems is not whether or not someone committed ad fraud. We have numbers and correlations. But we use these as if we are getting at truth. We build systems for which there is no recourse.
I did want to pull out a few things you said below, but if I missed something you'd want me to comment on, happy to.
> But what people are actually proposing to do has nothing to do with executing law programmatically.
Yeah, I don't think he is claiming that. But if we were unclear on that point, that's our bad.
> The idea that code is dangerous because it can be used to turn norms into laws is true, but only inside the context of non legalist structures, which means the danger is mitigated. And it is not unique to code.
My personal concern is that code's legalism leaks into the way we think about systems. Code's legalism is seen as a virtue to be emulated. Ambiguity (even intentional) and context-sensitivity are seen as bad. What we are trying to achieve and the measurement of that achievement get conflated. (OKRs are a terrible idea.)
"Speed of execution of code prevents the possibility of reevaluating it's terms." That is it's virtue. It is not without a concomitant risk that we are doing the wrong thing, but faster. But doing the wrong thing faster is an inherent risk that is mitigated by basically all of software development. You cannot take the quality of laws we have now, and assume that they will be automated as-is.I mean, we do that in some ways. Look at the DMCA processes our youtube content. The copywrite strikes are automated, the demonetization is automated, many times even the appeals are automated. Obviously no one thinks we are going to take all our laws and automate them. But it's hard to see how not being able to reevaluate the terms is a virtue. Getting the terms right is the hardest part about software and I don't know any system that gets those terms right from the outset.
> We are also taking the ex post necessity of the legal system and treating it as a virtue. The fact that you have to sue someone and ask a judge to interpret a contract is not a feature.
Yeah, I don't think anyone thinks the fact that you have to sue someone is a virtue. What is a virtue is that you have the freedom to do actions that you believe are or should be lawful, and if you are arrested/fined you have the ability to appeal that decision. Contrast this with "cursing" in Club Penguin, for example. You are immediately booted, and you have no recourse. (I don't actually know if there was/is an appeal process in Club Penguin.)
> We cannot pretend that laws don't need to be automated. They plainly do.
Yeah, I agree. I see that as the point of this paper. How can we automate things in a good way? What changes can we make to make sure our automations don't have legalistic problems?
> And raising this spectre of strong legalism in code, while it has the intent of protecting people from harm, is actually being used by -among other parties- a protectionist legal profession to argue directly against one of the most helpful things we could do right now, which is automated legal harm reduction. An automated system can literally be only better than nothing, and justified on that basis, because nothing is what so many people actually have.
I can definitely see how this would happen. And I can see how it would be frustrating from the position you are in. For what it's worth, I don't see Diver doing this, but instead proposing ways in which we can build these systems well. That's one of the things I like about his work: it isn't an argument against using code, it is a discussion about how to do it well.
> Sure, programming languages constrain their users. But you can just stop using them, so the constraint is voluntary.
> Fair. What I mean is that if you did stop using them, no one with any state-sponsored monopoly over violent persuasion would have anything to say about it. Which is admittedly a very low bar.
Yeah, but other people using software that you didn't explicitly decide to use can still ruin people's lives without a "state-sponsored monopoly over violent persuasion". Imagine the company I talked about that screens applications using machine learning is used by all fast-food restaurants in your area. Imagine these are the jobs you are qualified for, but the ML model has decided you will quit the job too early. Of course, you can go try and find a job elsewhere. Of course, a similar situation could happen due to human bias. But there is something very unsettling about this version of the future. The ML model can't be convinced; it can't provide reasons. It isn't a rational process whatsoever. I think these are real harms we ought to pay attention to. I think saying people can just not use software they don't like is like saying people can move if they don't like their local laws. Both statements are generally true, but not helpful for many people.
> We need tools that are accessible to a much wider variety of people, that have a far smaller semantic gap between the natural language expression of the rule and the computer language expression of the rule, tools that are designed to facilitate human validation of those encodings, languages that are inherently explainable, with sophisticated reasoning, that cite their sources, that name the person whose legal interpretation was modeled, that are accessible, open source, and trustworthy. And those tools need to be possible to use to test and validate anything else we might like to reduce the risk of. So we don't need to change all of programming, but we do need to add to it.
This sounds super interesting. If you have any papers that argue against what this paper argued for and give what you see as the alternative perspective, I'm super interested in that. No promise we will do it on the podcast, but definitely interested. Ideally a paper a bit less in the technical details of how to do these things with code, and more arguing for their applications. In general, I think what you've said here doesn't feel too much at odds with what I think we were trying to explore. I do think software has a role to play. I do think we need to automate things. I do totally get how these sorts of arguments might be used against projects like yours, and that must suck. I don't think that's the aim of the argument here. You are definitely right that there is no discussion of how to use code to improve laws. I'd love to explore that further and am super happy there are people like you working on that. If you can help point us in that direction for some readings, I'd love to take a look :)
Jason Morris
07/05/2023, 9:46 PM
David Alan Hjelle
07/05/2023, 11:01 PM
Jimmy Miller
Personal Dynamic Media
07/10/2023, 5:19 PM
Jimmy Miller
Jason Morris
07/12/2023, 11:25 PM
Jimmy Miller
Jason Morris
07/12/2023, 11:29 PM
Eli Mellen
07/19/2023, 8:14 PM
Many developers are glad to contribute software without restriction. But some of the same developers hesitate or refuse to contribute other kinds of software under such terms. Frequently, they fear that particular software lends itself to unethical uses, either because of what the software does, or because of market, government, or broader societal conditions. Currently, collaboration by developers on such projects remains largely closed. An ethical subcommons can provide more of the benefits of open collaboration, without asking contributors to check their ethics at the door.