# thinking-together
w
Straw poll... As a rule, which is more challenging in the software systems you work on:
• 🤯 Changing an algorithm. (Example: Need to switch to recursion because nested loops aren't going to cut it anymore.)
• 😭 Changing the data model. (Example: A user should be able to register more than one email address.)
• 😮 Some other change is my main source of stress. (Example: It always takes an unreasonable amount of futzing to convince the linker to use the updated version of a library.)
😮 4
☯️ 1
😭 11
i
But also: changing the UI? Because the UI is always the hardest part? And changing user/stakeholder expectations?
🤣 2
p
I said data model, but the best way I know of so far to make it hurt less is the Parnas approach (information hiding). https://dl.acm.org/doi/10.1145/361598.361623
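For the multiple-emails example from the poll, here's a minimal sketch of what the Parnas approach can look like (hypothetical `UserDirectory` module; the names are mine, not from the paper). The storage decision is hidden behind a small interface, so going from one address per user to many only touches this module:

```python
# Hypothetical sketch of Parnas-style information hiding. Callers never
# see how emails are stored, so switching from "one email per user" to
# "many emails per user" is a change local to this module.

class UserDirectory:
    def __init__(self) -> None:
        # The hidden design decision: today, user -> list of emails;
        # yesterday it might have been user -> single string.
        self._emails: dict[str, list[str]] = {}

    def register_email(self, user: str, email: str) -> None:
        self._emails.setdefault(user, []).append(email)

    def primary_email(self, user: str) -> str:
        # Callers ask questions about users; they never touch the
        # underlying representation.
        return self._emails[user][0]

    def all_emails(self, user: str) -> list[str]:
        return list(self._emails[user])
```

Code that only ever asked for `primary_email` keeps working unchanged when the representation grows to support many addresses.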
i
[moved from top level, original post by @David Brooks] changing the data model is always more challenging, imho, since data comes first. for challenge 3, just use NixOS: problem solved 😉
c
I went 😭 . An algorithm is relatively self-contained. The implications of changing a data model can be impossible to even enumerate.
🤔 1
j
😮 Understanding all the code, including code not in my codebase, to know if the change I want to make is okay. Actually making the change to an algorithm or a data model is not too hard in isolation. I think data models in practice are typically harder because other code depends on them in a way that means I have to understand that other code too. That other code may not even be accessible to me (a customer's codebase, for example). Algorithm changes can have that implication, but not as often.

Since I'm guessing this question is asked partially to motivate which problems to solve, I do think there are ways to tackle this. Since other people's code in my scenario is a black box, we can't just make tools for understanding that code; it isn't accessible. But what we could do is make it easy to explore the implications of changes.

Take an algorithm change. Instead of sending a request for every message, I aggregate every 10 messages and send them together. What are the implications of this? What if I never get 10 messages? What if I overflow some bounds? What about retries? Would my accuracy decrease if my failure rate increased? Would this change be effective? Would it reduce my downstream costs?

Consider a data model change. I have data structure A and I want to have data structure B. Is there a bidirectional mapping between them? What will the increase in my storage be? If I need to convert between the two, what do I need to support during the transition? I know that I need X, Y, and Z access patterns. Are those efficient with the new data structure?

All of these are questions we have to ask ourselves as we make changes in systems. But we have no way to watch them play out, no way to tweak the parameters. Really there are two tools trying to help us: 1) whiteboards, 2) formal models. Surely there is a nicer middle ground here. (A sketch of what that middle ground could look like follows below.)
🤔 1
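Riffing on the batching example above: a toy, entirely hypothetical simulation, nothing like a real tool, but it shows the "tweak the parameters and watch it play out" middle ground between a whiteboard and a formal model. All names and numbers here are made up for illustration:

```python
# Toy model of the batching change: aggregate every `batch_size`
# messages before sending, then see how batch size and failure rate
# trade off requests sent against messages actually delivered.
import random

def simulate(n_messages: int, batch_size: int, failure_rate: float,
             seed: int = 0) -> tuple[int, int]:
    """Returns (requests_sent, messages_delivered)."""
    rng = random.Random(seed)
    buffer: list[int] = []
    sent = delivered = 0
    for i in range(n_messages):
        buffer.append(i)
        if len(buffer) == batch_size:
            sent += 1
            if rng.random() > failure_rate:
                # A whole batch succeeds or fails together, so a single
                # failure now costs batch_size messages, not one.
                delivered += len(buffer)
            buffer.clear()
    # One implication the questions above ask about: if we never reach
    # batch_size, the tail is stranded here unless we add a flush-on-timeout.
    return sent, delivered

for batch in (1, 10):
    print(batch, simulate(1000, batch, failure_rate=0.05))
```

Running it makes the trade-off concrete: batching cuts requests roughly tenfold, but each failure wipes out ten messages instead of one, so delivered counts get lumpier as the failure rate climbs.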
t
😮 I spend inordinate amounts of time fighting with libraries and frameworks that I use without completely understanding. E.g., I recently migrated the backend of a web app from .NET Core to .NET Framework, because I needed a specific feature. It took a week to fix all the things that broke.
😮 1