# of-ai

Konrad Hinsen

08/25/2023, 8:09 AM
An interesting take on AI risk: https://metamoderna.org/what-can-stop-the-ai-apocalypse-grammar-yes-only-grammar/ The vision presented in this article is that humans with their institutional superstructures (bureaucracies, markets, corporations, ...) and AIs (plural) will/should form an ecosystem in which all players coevolve, competing and collaborating at the same time. I don't think that any of the players are ready for this, but in the long run, this is where we could be heading.

Jack Rusher

08/25/2023, 6:33 PM
We’ll need some research that could plausibly generate AIs first, I suppose.

Kartik Agaram

08/26/2023, 12:35 AM
What did I just read! Can someone post a summary that doesn't require understanding Lacan/Derrida/et al.?

Konrad Hinsen

08/26/2023, 6:06 PM
@Jack Rusher What's nice about this analysis is that it doesn't really rely on any specific notion of what AI is. "Large-scale information processing system" is all it takes. @Kartik Agaram Well, the author is a sociologist, and that shows in some places. I just skip the references and try to make sense of what's left, which works pretty well for me.