# thinking-together
s
Can we make this a top-level discussion please? https://futureofcoding.slack.com/archives/CCL5VVBAN/p1703859436830029?thread_ts=1703808750.611779&channel=CCL5VVBAN&message_ts=1703859436.830029 I’m not so much interested in what the “correct” way of distinguishing those three is, more in how people distinguish them, in their idiosyncratic ways. So what do you think makes programming different from software engineering different from computer science?
k
Implementing quicksort in Java is programming. Proving that quicksort is O(N log N) on average is computer science. Deciding if quicksort is a good choice as part of a large software architecture is software engineering.
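To make the first leg of that example concrete, here is roughly what "implementing quicksort in Java" looks like. This is an editorial sketch, not code from the thread; the class name and test values are invented for illustration. Proving the average-case O(N log N) bound for this same routine would be the computer-science part.

```java
import java.util.Arrays;

public class Quicksort {
    // In-place quicksort using the Lomuto partition scheme.
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];            // choose the last element as the pivot
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {       // move smaller elements to the front
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;  // put the pivot in its place
        sort(a, lo, i - 1);           // recurse on both partitions
        sort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] xs = {5, 3, 8, 1, 9, 2};
        sort(xs, 0, xs.length - 1);
        System.out.println(Arrays.toString(xs));  // prints [1, 2, 3, 5, 8, 9]
    }
}
```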
j
There are ways to make these things distinct. Programming is writing code. Software engineering is more than writing code. Computer science is some formal discipline. There are other ways to slice things up I’m sure. But regardless of the division, I don’t see why we would want to do that. I don’t think these activities form a natural kind. So it doesn’t seem we are trying to get at the truth by making these divisions. Instead we are trying to rank them. We are trying to say some of these are more sophisticated or nobler than the others. “That’s just programming, not true software engineering”. I especially don’t understand the point of drawing a distinction between programming and software engineering. What do we gain by making this distinction? Is the act of planning how to write a program really not part of the programming activity? If I’m drawing up plans for how I’m going to make a chair, am I really involved in an activity distinctly different from woodworking? By drawing these distinctions we also open the door for the division of labor to occur. This creates the awful concept of “architect”. Perhaps the most harmful concept we’ve ever created as a profession.
l
I'm also interested in what people think coding is. How is it different or similar to the other three? Where does it sit? I don't really have any strong opinions here, just curious
j
I’ll just say the only distinction people have given me is that of status. Originally I told people I code. People told me that was lower status than programming. Then I said I program, people told me that was lower status than software developer. Then they told me that was lower status than software engineer. I wish this status stuff just didn’t exist. All code is code.
j
The fact that many people are trying to assign status doesn’t mean the distinctions can’t be worthwhile, unless assigning status is your goal.
j
I’m saying I don’t think there is a real distinction other than the status. How do you draw a sharp, non-arbitrary distinction between these?
a
@Lu Wilson I liked this definition and history, from Smalltalk: Best Practice Patterns
k
@Jimmy Miller My first approach to technical terms is to figure out how they are used by the majority of people in the field, in order to be able to communicate precisely with them. That doesn't imply agreeing with the utility, or even well-foundedness, of distinctions implicit in these terms. I agree that programming vs. software engineering has an aspect of status, which I dislike as much as you do. But it also contains an aspect of size or complexity of a software project, and that makes sense. Writing 50 lines of code to reformat my blog entries is programming, but not software engineering.
j
I don’t think the distinction has to be sharp to be useful. I’d also say the distinction between computer science and the other two is sharper than the distinction between software engineering and programming. That said, the latter distinction is real. I don’t really think I can give a proper definition, but I think it’s something like: “software engineering is about studying the practice of building software as a repeatable process with codified methods designed to facilitate development and maintenance by groups of people.” Whereas programming is the catchall term for the activity of writing programs of any sort, and can be done with or without any attention to software engineering.
I think the distinction between coding and programming mostly is just about status, however. Coding sounds like you’re just a machine translating things. Programming sounds more like solving problems. But to me, that’s just the connotations, not a semantic difference.
k
Compounding difficulties, computer science is not a science, and software engineering is trying to be a science but still faking it. I tend to be suspicious of both terms. They mean about as much as my PhD in CS, particularly since they were all minted in academia under peculiar incentives. CS and SE are buzzwords, not jargon. Excluding a small region of math, it's all one amorphous beast we haven't found the bones for yet, let alone begun to cleave at the joints. And yes, this includes what Greg Wilson does and whatever it is that Stephen Wolfram does. I tend to call it programming, since that was good enough for Dijkstra. (I stand ready to duck.)
a
I haven't finished the episode yet, but I keep thinking that declarative vs imperative is the most fundamental difference between work that's "officially" programming and work that isn't (a small sketch of the contrast follows after this message). Part of it is that thinking imperatively (i.e. predicting how a computer will do this) is a rare skill we've acquired through huge effort and trauma. It's incredibly powerful, in that it lets us occasionally do low-level crazy shit to get close to optimal solutions... And another huge part is that many, many, many extremely valuable domains have no known effective declarative implementation techniques.
Declarative in the small: pure functional programming is the only way to do this that we broadly believe in, and it really sucks for most people
Declarative in the large: still a pipe dream except for query languages and related DAG composition systems, which are both super limiting
Componentware failed mainly because all useful abstractions eventually need to leak
I remain hopeful for declarative breakthroughs but I've been hoping for 35 years and they don't come around too often
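As a small editorial illustration of the imperative/declarative contrast described above (not code from the original messages; class name and values are invented), here is the same computation written both ways in Java: summing the squares of the even numbers in a list. The loop spells out how the machine should step through the data; the stream version only states what we want.

```java
import java.util.List;

public class DeclarativeVsImperative {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4, 5, 6);

        // Imperative: spell out *how* to step through the data and accumulate.
        int imperative = 0;
        for (int x : xs) {
            if (x % 2 == 0) {
                imperative += x * x;
            }
        }

        // Declarative "in the small": state *what* we want; the library
        // decides how to traverse and combine.
        int declarative = xs.stream()
                            .filter(x -> x % 2 == 0)
                            .map(x -> x * x)
                            .reduce(0, Integer::sum);

        System.out.println(imperative + " " + declarative);  // 56 56
    }
}
```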
j
I think y’all are doing a fine job of exploring this question. I just want to throw in a meta-level reference: I really enjoyed the classic paper “_Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists_”. It foregrounds the non-stop work people in professions have to do to define what lies inside their domains and what lies outside them. It’s got some really fun & paradoxical examples of these miniature battles. I recommend it! https://law.unimelb.edu.au/__data/assets/pdf_file/0009/3623526/2-Gieryn-Boundary-work.pdf
a
Computer Science is like most science in that it's most (only?) effective when the subjects are atomized into parts that are small and simple enough that they allow rigorous study... but essentially all valuable software is vastly larger than that, and composed out of pieces that don't have simple relationships
put another way: if it's simple enough to be scienced, it's already a commodity
j
@Justin Blank
software engineering is about studying the practice of building software as a repeatable process with codified methods designed to facilitate development and maintenance by groups of people
So if a single person does something, does it cease to be software engineering? Is quickjs not an example of software engineering because it was all done by Fabrice and isn't maintained by a group of people? By the same token, does the maintenance of a spreadsheet by a group of people with codified practices count more as software engineering than the development of quickjs because multiple people were involved? I don't mean this to be pointed; I sincerely don't understand what the distinction is supposed to be. Is it about groups? Is it about rigor? Is it subject matter, difficulty? I don't consider anything I've done in my career to meet the criteria you gave. I don't think any of the processes I've done are repeatable, except the act of programming itself, which is the very thing we're trying to distinguish here. @Konrad Hinsen
But it also contains an aspect of size or complexity of a software project, and that makes sense. Writing 50 lines of code to reformat my blog entries is programming, but not software engineering.
Yeah, I just don't buy that though. A lot of what falls under the banner of software engineering is no more complex than writing 50 lines of code to reformat blog entries. The way in which I think about doing these activities is the same. The kinds of decisions you have to make to do them are the same. I am spending hours and hours of my time right now at work resolving merge conflicts from a fork that stopped tracking upstream. It is all to support things at a massive industrial scale. But my activity is writing a little Clojure script that shells out to help me manage git patches. Is this software engineering? Does it change things if the resolutions here are for a compiler?
l
I'm getting an idea for a new

video

a
Software that truly solves valuable problems for non-expert users and has the declarative mojo does exist, but it's 1) always specialized to particular verticals/use cases, and 2) that's how billion-dollar software businesses happen. So, good news/bad news 😉
j
I don’t think the right distinction is “is this person writing code doing programming or software engineering.” Obviously, they are programming. I specifically said software engineering is the study of practices. There’s a parasitic sense in which you can say “someone who applies that study is doing software engineering, not just programming”, but imo, that’s less useful. Hence as a title, software engineer isn’t usefully different from programmer. Basically, the distinction is not really between two activities, but between an academic discipline and an activity. The question you’re asking (or saying doesn’t make sense to ask) is a bit like asking “are you a writer or a historian?” It’s not an either or. But a writer isn’t the same thing as a historian in spite of that.
c
A potentially useful framing that’s a little different from what I have seen mooted above, but maps a bit to the adjacent fields of non-software engineering: a software engineer is someone who may be applying computer science in the act of programming with a particular degree of responsibility taken for the outcomes of the work done. This is more, I think, aspirational than practical, certainly in terms of the titles people have at most jobs! There, the “prestige” dynamic reigns supreme. But this is the model I would like to be the case for software engineers, and when I distinguish in practice between “programming” and “software engineering” as gerunds, this is the kind of thing I have in mind, and is closely related to the point named by @Konrad Hinsen in very pithy fashion at the outset!
In mooting that definition I am not making a value judgment about whether someone who acts as a software engineer while programming is better (or worse!) than someone who does not. It is not an ordering, partial or otherwise. 😂 Rather, I am saying that if we adopted this way of thinking, it would come with a set of expectations and norms that would be profitable for the field (profitable in the “generally good” sense; it might be less profitable in the “literal money-making” sense!).
So while this is absolutely true today modulo prestige considerations—
as a title, software engineer isn’t usefully different from programmer.
—I would very much like for it to be different. I would like it to be the case that we who want to claim the mantle of “engineers” take responsibility for (and are held accountable for!) the work we do in a particular way. And while yes, that would then become aspirational, it would be because we are expecting more from a subset of the field of software practitioners, and thereby perhaps elevating the field as a whole a bit.
j
@Justin Blank Interesting. So you consider software engineering to be studying how people (ought to/should/do/etc) create software, not actually the creation itself? So software engineers aren’t doing software engineering? (Except in that less useful sense you mentioned) That’s an interesting take for sure. I could definitely see a distinction if that is what is meant. Not sure that’s the common understanding though. Also, shouldn’t we just call that meta-programming ;)
k
@Kartik Agaram
computer science is not a science,
Are you referring to the old debate about what Herbert Simon nicely called "sciences of the artificial", which some do and others don't consider science? Or is it about CS not being serious/mature/whatever enough to be called science? Personally, I am happy to include the sciences of the artificial under the "science" label, and I am happy with accepting immature fields as well, as long as they are working towards improvement of their own standards. So CS is a science for me. It's just not about computers, but that's only a problem with the English-language label.
and software engineering is trying to be a science but still faking it.
In the old science-vs.-engineering debate, I have taken the taoist stance and declared them a yin-yang pair. Science is about learning how the world works, and engineering is about changing the world based on scientific understanding. The problem with SE is that it hasn't much of a "science" partner yet. People are so busy making software and then making it obsolete that they have no time to reflect on their practices. So yes, "faking it" is a good description.
k
No, the artificiality isn't the problem. It's that vast swathes of CS academia aren't good at nailing down the context for their conclusions. They're often not reproducible, and even if you can reproduce them you can't reason about whether they apply to a slight change in context (this improves things for these benchmarks; does it help for this other benchmark?) You have to rerun the whole experiment all over again.
k
Ah, I see. Sounds familiar. But that's more an issue with 21st-century academia than with CS specifically. It's no different in other disciplines. The replication crisis is only one symptom of an increasing mismatch between the complexity of the questions being studied and the economic and political pressure for quick and simple answers, in the absence of any accountability.
g
Mary Shaw herself has described software engineering as "applying known solutions to new problems". Using this definition, I think of computer science as the field of identifying or discovering fundamentally new solutions.
j
’car: status stuff is stupid
’cdr: What I want us to mean by these terms…
Programming is a big tent label for anything that involves instructing a machine, including encoding knit patterns, setting the time/duration recording instructions on a VCR (which only some of you will remember), &c. Some programming problems are trickier than others, but it’s all just programming. Like most human activities, practitioners range in seriousness from home cooks to highly trained chefs — and some of the former get better results than some of the latter. (Many of us have been making fun of snobbery in the field since the 80s.)
“Computer science” is a bad name for a set of findings that form a loose branch of applied mathematics. While mostly discovered in a computer-adjacent context, it has turned out to be much deeper and more widely useful. Only rarely do working programmers do any CS, except in a similar sense that structural engineers do physics. (You might infer from this that I feel having people major in CS to become programmers is misguided; you’d be right.)
Software Engineering is building software in a particular way, in the same sense that building a shed out back and building an earthquake-resistant high rise involve building in a different way. Very few programmers — amateur or professional — work this way because the economics usually don’t support it. (I think it’s okay that most software is of the shed variety, and I’m interested in making it easier for more people to build their own.)
k
“Computer science” is a bad name for a set of findings that form a loose branch of applied mathematics.
Like chemistry is a loose branch of applied physics. That's a point of view I have defended for many years, but no longer. There's a reason why academic disciplines are called disciplines: they are about shared attitudes and values, not so much about topics. Chemists have different values and attitudes than physicists, and the same holds for CS vs. mathematics, although in both cases there is no clear demarcation, but a smooth transition. We have chemical physics and physical chemistry as sub-disciplines, and within CS, formal methods people are closer to mathematics than HCI researchers. Maybe a good perspective on programming vs. CS vs. SE is to look at the respective communities of practice. CS and SE both have such communities, with various subcommunities. There is also some overlap between the CS and SE communities. Programming doesn't. It's indeed a big tent label, and even if you restrict it to computer programming, it's still very diverse. Maybe "industrial software development" has a community of practice? I'll let those involved with it decide. From the outside, the large number of conferences suggests there is indeed a community.
Something that worries me is that the label "computer science" is becoming a self-fulfilling prophecy. In academia at least, I see a growing tendency to consider everything that involves the use of computers as a branch of computer science. As an example, a few weeks ago I heard a colleague claim that computational chemistry has to become a branch of computer science, much like bioinformatics, if it wants to make progress. That's putting a tool above the goals one wishes to achieve with it.
j
@Konrad Hinsen We disagree somewhat. While I share your position contra “chemistry is just physics”, in the case of CS we’re looking at a field that’s still extremely new, having grown out of mathematics and electrical engineering departments in the second half of the 20th century. As a consequence, the boundaries — which, like all conceptual boundaries, will always be fuzzy — are currently not even vaguely established, leaving “CS” a label for a grab bag of partially (and sometimes dubiously) related things. For example, we classify HCI under CS when it’s probably more properly considered a branch of industrial design. In this, I agree quite vigorously with your second statement above: just because you do it with a computer does not make it computer science! The portion that I think is the central example/contribution of what we call computer science is the mathematical work around computational complexity, information theory, &c, all of which make an excellent set of tools/lenses with which to approach other fields (in the same way that other branches of maths do).
g
I believe strongly that SWArch, SWEng, SWImpl, SWProdEng, SWMaint, etc. are very distinct disciplines under the same umbrella called "programming". A single person (group) can do all of the things involved in programming, but, if you need to scale upwards from a cottage industry mindset, you need to find ways to cleave the work into separate parts which can be done by other people / groups. ... more thoughts...
just because you do it with a computer does not make it computer science!
++
a
I think a lot of this is symptomatic of a field of endeavour (or practice) that has wildly outgrown its field of study, and so quickly that academia hasn't really had enough time to adapt
Normally the pipeline from materials science to engineering takes multiple decades
(if not occasionally centuries)
g
I think a lot of this is symptomatic of a field of endeavour (or practice) that has wildly outgrown its field of study, and so quickly that academia hasn't really had enough time to adapt
I agree and disagree. I would say that academic study just about always lags behind practice. For example, we still don't really know what "electricity" is, but we use it every day. The academic explanation seems to come in 2 stages: (1) quantify the relationships, (2) explain the relationships. Tesla and Edison worked out how to use electricity in practice. Steinmetz, Wheatstone, Maxwell, et al. created equations that quantified some of the effects. I don't know of any real explanation for what is going on at a deeper level (i.e. (2)). Robert Distinti has found new effects to quantify. And he's working on an explanation. Tesla probably figured out more about the explanation than he cared to share, or could put into words.
Normally the pipeline from materials science to engineering takes multiple decades
That's just historical. We can do better.
k
@Jack Rusher With CS reduced by definition to its mathematical core, I agree it would find its place in mathematics. But I do have the impression that there is a coherent community of practice around what is called CS today, distinct from the community of mathematicians. But yes, all that is in flux and may well come out differently in a few decades. @guitarvydas Academic study and practice take turns in lagging behind each other. That's why I like to view them as a yin-yang pair. Progress in one leads to progress in the other some time later.
s
I had a feeling this question would draw some discussion. Thanks everyone for explaining their categories. :) To me programming feels like the most innocent term. It doesn’t try to carry any professionalism and almost feels like a pragmatic answer to, “Hey, what do we call that thing that we’re doing with these computers?” Both software engineering and computer science are their own “professionalized” versions of that. There must have been some motivation to differentiate, some sense of “this isn't just programming, this is more serious”: software engineering got a little more business-seriousness flavor, while computer science got more academic-seriousness flavor. For that reason I find myself coming back to programming. There was a time in my career when I didn’t want to call myself a programmer. I felt the need to make it sound more professional. But these days I feel a little more turned off by that, because it never felt like these other terms had any more meaning in them than just trying to belong to a group that thought more highly of their skills and practices.
k
An interesting quote I found yesterday: > The creative activity of programming - to be distinguished from coding - is usually taught by examples serving to exhibit certain techniques. From Program Development by Stepwise Refinement, a classic by Niklaus Wirth (1971) Also interesting: the example Wirth considered typical of programming back then: solving the 8-queens problem.
g
Thanks to @Konrad Hinsen's stimulating comments, I expanded my thoughts on the use of the word 'practice' https://guitarvydas.github.io/2024/01/05/Fumbling-Around-and-Applied-Science.html
s
What pairs with “fumbling around”? What inspires new avenues for “fumbling around”? What is a better phrase/word for “fumbling around”?
"Fumbling around” to me sounds like opening up to our human capacity for insight through (serious) play. It has a connotation of unprofessionalism as you would expect in our results-oriented society. If we like it or not, that’s where most (all?) really good ideas come from. The kind you later think are totally obvious, and where it’s hard to see how we ever did it any other way.
k
Fumbling around is a small-scale rapid-feedback cycle of learning (yin) and doing (yang). Science and engineering are the large-scale and slow version. Good ideas do indeed often come from fumbling around, because rapid feedback matters so much. But many of those ideas require refinement and combination with other ideas, both of which happen in the slow cycle. Reminder: we wouldn't be discussing anything here without lots of collective effort over many decades that went into semiconductors, electronic devices, their mass production, setting up network infrastructure, etc. You don't get any of this from just fumbling around.
@guitarvydas
We believe, for no good reason, that all practical programming languages must be based on CPUs and assembler:
Do we (i.e. some unspecified majority) really profoundly believe this? Or is it a necessary working hypothesis made by software people who have to make do with off-the-shelf hardware because, well, life is too short?
a
I think it's important to distinguish computers-as-they-exist (i.e. mostly von Neumann machines, internally incredibly complex but presenting a wildly oversimplified architectural model to the programmer/OS...) from PL semantics, which are promises made in good faith that are kept surprisingly frequently (modulo UB)... Because the abstraction is not only necessary to our productivity, but also, like all abstractions, provides a point of agreement where one party can say "I played by the rules and you didn't" -- this one leaks, like all abstractions, but it's amazing how often it's valid to pretend the machine is simple
s
The scaling we have achieved through technology rests on a deal to trade understanding of how it works for abstractions that somewhat reliably do what we want them to do. Once you understand what the interface does, then you can use it, even if you don’t understand how it achieves that. This is what allows us to have a few specialists who make sure things work reliably enough to an interface spec, and lots of generalists (or less specialized specialists) who can ignore the lower-level workings and use their freed-up mental capacity to build on top of those interfaces instead. There are all kinds of problems with that approach that we are incentivized to ignore:
• It creates dependencies on those specialists that know how, which often translates into commercial relationships of some kind.
• As lower levels in the stack mature and their interfaces work reliably enough, we are gradually concentrating expertise across fewer specialists.
• Simultaneously, we increase dependencies on those interfaces, because more higher-level components rely on them.
• Innovation moves up the stack, because that’s where almost everybody is “fumbling around” now; lower levels stagnate and become harder and more expensive to change.
And then we end up stuck with artifacts from last century and can’t seem to get rid of them, because we can’t just reinvent and replace them. They’re carrying too much weight now. And so we keep using CPUs and assemblers and text editors.
c
Interesting. Of that bullet list, the only one that actually seems like a problem to me is the last part of the last one:
lower levels stagnate and become harder and more expensive to change.
It’s not at all clear to me why you think the others are problems. 🤔
g
lower levels stagnate and become harder and more expensive to change
In my view, this is the problem. It behooves us not to gloss over it and not to accept it as a fact. There are 2 parts to fixing a problem: (1) define/state the problem, then (2) fix it. Method for identifying part 1: ask "why?" over and over again, recursively. 1st iteration: why are the lower levels more expensive to change? [my comments: an alternate way of asking the same question: why is/was there a Moore's Law for hardware but not for software? My current answer: deep asynchrony and a lack of dependencies. ICs are asynchronous; lines of code are synchronous.]
k
@Chris Krycho some guesses on why they're problems:
• Principal-agent problem
• Reduced resilience
• Reduced resilience
• .. paradoxically this is the one I don't have a good answer to. Good abstractions should have more consumers over time and grow less likely to change. The problem we have today is premature ossification -- because consumers don't do due diligence before adopting. Freeze abstractions because they're good. Don't call abstractions good because they're frozen.
k
> consumers don't do due diligence before adopting.
That's perhaps a big reason. Many decisions in computing, especially buying decisions in large institutions, are taken by people who have little competence but are well-connected to sources of rumour.
j
Is it true that fewer people understand the lower levels? In percentage terms, I would assume so, but in absolute terms, I’m not sure it’s true.
k
I don't have data but anecdotally the Linux kernel definitely has a well-known problem recruiting new maintainers. The existing ones are overworked, nobody knows what will happen when they burn out, retire, etc. e.g. https://lwn.net/SubscriberLink/952034/922c90d8097bd209 It makes sense to me. Historically specialization has formal funnels leading into it, and there's some amount of price elasticity in the face of constrained supply. Low level open source projects add impedance on both sides of that. Historically the open source funnel starts with using something and starting to hack on it. But today we have so many layers we are not even aware of using. Meanwhile the major projects continue to grow compoundingly complex over time (so you need more recruiting this year than you did last year). Private enterprise growth rates feel like a bad fit for open source, but the combination is all too common.
a
IMO 100% of large software projects eventually have the same problem... Things get more and more specialized as time passes, local idiom becomes the law, etc.
OK, maybe 98%... The 2% are those that invest early and painfully in modularity 😉
j
90s: “Come be the first one to do X in the kernel!” 2020s: “Slog through the process of contributing. Do it long enough and you can help enforce the process.”
k
This paper I just found from 1992 seems very relevant: https://www.developerdotstar.com/mag/articles/reeves_design.html
..the software industry collectively misses a subtle point about the difference between developing a software design and what a software design really is. ..programming is not about building software; programming is about designing software.
...
The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.
..no other modern industry would tolerate a rework rate of over 100% in its manufacturing process. A construction worker who can not build it right the first time, most of the time, is soon out of a job. In software, even the smallest piece of code is likely to be revised or completely rewritten during testing and debugging. We accept this sort of refinement during a creative process like design, not as part of a manufacturing process. No one expects an engineer to create a perfect design the first time. Even if she does, it must still be put through the refinement process just to prove that it was perfect.
...
The overwhelming problem with software development is that everything is part of the design process. Coding is design, testing and debugging are part of design, and what we typically call software design is still part of design. Software may be cheap to build [using compilers and linkers], but it is incredibly expensive to design.
c
Note that the comparisons with other industries—like this one—are often hilariously wrong. Rework as part of the manufacturing process? No. Massive amounts of rework as part of the design process? Good grief yes. Hillel Wayne has done yeoman’s work documenting this. Also, what?
The final goal of any engineering activity is some type of documentation.
That is, again, just not how actual manufacturing and other “physical” engineering disciplines work. Architects, civil engineers, chemical engineers, nuclear engineers, mechanical engineers, etc. are often deeply involved in the physical aspects of their work. Obviously the specific kinds of involvement vary across those, but… yeah, this quote is just completely wrong in the way that many descriptions of “real” engineering disciplines are.
(That’s not to say there are not real differences between “software engineering” and other kinds of engineering; obviously there are. But there are also comparably large differences between other kinds of engineering.)
So when we get to things like this:
We accept this sort of refinement during a creative process like design, not as part of a manufacturing process. No one expects an engineer to create a perfect design the first time. Even if she does, it must still be put through the refinement process just to prove that it was perfect.
—it is just proffering a false distinction. Lots of physical engineering tasks absolutely treat actual manufacturing as part of the design process, albeit with an expected yield curve shift over time where it smooths out as you get past initial hurdles. But… that’s what we do with software, too!
(This is one of my AGGGGHHHHH YOU MASHED MY BUTTON NOOOOOO subjects. 😂)
k
The good part of that post is emphasizing design as the dominant aspect of software development. Compared to matter-based technologies, manufacturing costs are near-zero for software, and the physical constraints that shape design elsewhere are absent as well. What we get is pure design, cheap iterations, and few external constraints. Then the Jevons paradox kicks in and the demand for redesign increases. Constant redesign is limited only by the capacity of the humans doing the design work, which is exactly the resource that software development ends up overexploiting.
s
Is it telling that “physical constraints” and “manufacturing costs” always come up as the big differences between physical and software engineering? And then we keep scavenging papers from the 1960s-80s, when technical constraints were much more of a concern, for good ideas and designs. Maybe we need bits to cost as much as atoms to be forced into making good design decisions?
a
In my experience--and it pains me grievously to admit this--the productivity of any design process that's uninformed by an implementation-in-progress rapidly diminishes to negative
Sadly, the prototype always goes to production
k
That's another peculiarity of software. In the material world, you can't ship a prototype because there is no efficient mass production infrastructure for it. Wanted: some mechanism that creates a non-negligible cost for software deployment.
a
uh, have you paid any programmers recently? 😉
j
No I’ve only paid software engineers 😉
a
I think one reason ideas like software engineer licensing etc. never get traction is that it’s already wildly expensive and unpredictable, and adding even more friction doesn’t sound good to anyone who’s actually involved in building actual software… except life-safety-critical/industrial-controls embedded stuff, but that’s a tiny part of the industry anyway
Having said that, there’s a shocking amount of tech that ends up being life-safety-critical implicitly, by default, but was never intended to be so, like smartphone OS’s
(and batteries, and baseband hardware/firmware, and RF hardware…)
Dumbphones were certainly simpler and more reliable, and had longer battery life… I don’t think Google or Apple would be keen on adding a smartphone equivalent of the “limp-home mode” that modern cars can all use when the computers shit the bed; they would need to be forced to by legislation