😮 Understanding all the code, including code not in my codebase, to know if the change I want to make is okay. Actually making the changes to an algorithm or a data model is not too hard, in isolation.
I think data models are typically harder in practice, because other code depends on them, which means I have to understand that other code too. That other code may not even be accessible to me (a customer's codebase, for example). Algorithm changes can have the same implication, but less often.
Since I’m guessing this question is asked partially to motivate which problems to solve, I do think there are ways to tackle this.
First, since other people's code is a black box in my scenario, we can't just build tools for understanding that code directly; it isn't accessible. But what we could do is make it easy to explore the implications of changes.
Take an algorithm change. Instead of sending a request for every message, I aggregate every 10 messages and send them together. What are the implications? What if I never get 10 messages? What if I overflow some bound? What about retries? Would my accuracy decrease if my failure rate increased? Would this change be effective? Would it reduce my downstream costs?
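To make the "what if I never get 10 messages?" question concrete, here is a minimal sketch of that batching change. All names (`Batcher`, `max_wait`, `tick`) are hypothetical; the key design choice is flushing on either a count threshold or a timeout, so a partial batch never waits forever.

```python
import time

class Batcher:
    """Sketch: aggregate messages, flush every `batch_size` messages
    or after `max_wait` seconds, whichever comes first."""

    def __init__(self, send, batch_size=10, max_wait=5.0):
        self.send = send          # downstream request function
        self.batch_size = batch_size
        self.max_wait = max_wait
        self.buffer = []
        self.oldest = None        # arrival time of oldest buffered message

    def add(self, msg, now=None):
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.oldest = now
        self.buffer.append(msg)
        self._maybe_flush(now)

    def tick(self, now=None):
        """Call periodically: handles the 'never get 10 messages' case."""
        self._maybe_flush(time.monotonic() if now is None else now)

    def _maybe_flush(self, now):
        if not self.buffer:
            return
        if len(self.buffer) >= self.batch_size or now - self.oldest >= self.max_wait:
            self.send(list(self.buffer))
            self.buffer.clear()
            self.oldest = None

sent = []
b = Batcher(sent.append, batch_size=3, max_wait=5.0)
for m in ["a", "b", "c", "d"]:
    b.add(m, now=0.0)   # "a","b","c" flush on count; "d" waits
b.tick(now=10.0)        # timeout flushes the straggler
```

Even a toy like this surfaces follow-up questions (what happens to the buffer on a failed `send`? who calls `tick`?) that a whiteboard sketch tends to gloss over.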
Consider a data model change. I have data structure A and I want to move to data structure B. Is there a bidirectional mapping between them? How much will my storage grow? If I need to convert between the two, what does that conversion cost me? I know that I need X, Y, and Z access patterns. Are those efficient with the new data structure?
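The "is there a bidirectional mapping?" question can often be answered mechanically. A hypothetical sketch, where A is a flat list of `(user, item)` events and B indexes the same events by user: writing both conversions and a round-trip check tells us exactly what the mapping loses.

```python
def a_to_b(events):
    """A -> B: group (user, item) pairs into {user: [items]}."""
    index = {}
    for user, item in events:
        index.setdefault(user, []).append(item)
    return index

def b_to_a(index):
    """B -> A: flatten back. Global ordering across users is lost."""
    return [(user, item) for user, items in index.items() for item in items]

a = [("alice", 1), ("bob", 2), ("alice", 3)]
b = a_to_b(a)

# Access pattern check: "all items for one user" is a single dict
# lookup in B, but a full scan in A.
assert b["alice"] == [1, 3]

# Round trip preserves the *set* of events, not their interleaving,
# so the mapping is bidirectional only if cross-user order is irrelevant.
assert sorted(b_to_a(b)) == sorted(a)
```

The assertion that fails (or the `sorted` you are forced to add) is precisely the implication you wanted to discover before committing to B.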
All of these are questions we have to ask ourselves as we make changes to systems. But we have no way to watch them play out, no way to tweak the parameters.
Really there are only two tools trying to help us here: 1) whiteboards and 2) formal models. Surely there is a nicer middle ground.
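One shape that middle ground could take: a throwaway simulation where the parameters from the batching example are knobs we can actually turn. Everything here is made up for the sketch: each request fails independently, and a failed batched request loses the whole batch (no retries), so batching trades request count against blast radius.

```python
import random

def simulate(n_messages, batch_size, failure_rate, seed=0):
    """Toy model: send n_messages in batches; a batch succeeds or
    fails atomically with the given failure rate."""
    rng = random.Random(seed)
    requests = 0
    delivered = 0
    for start in range(0, n_messages, batch_size):
        batch = min(batch_size, n_messages - start)
        requests += 1
        if rng.random() >= failure_rate:  # whole batch succeeds or fails
            delivered += batch
    return requests, delivered / n_messages

for batch_size in (1, 10):
    for failure_rate in (0.01, 0.10):
        reqs, delivery = simulate(10_000, batch_size, failure_rate)
        print(f"batch={batch_size:2d} fail={failure_rate:.2f} "
              f"-> requests={reqs:5d} delivered={delivery:.1%}")
```

Twenty lines, yet it already lets you watch the "would it reduce my downstream costs, and at what reliability?" trade-off play out instead of arguing about it at a whiteboard.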