• Vaughan Rouesnel

    4 months ago
    Visualizing data structures (e.g. in PostgreSQL)
    When we think about the data structures our programs use, or the execution flow of our programs, I'm pretty sure we all think visually, and if we had to explain it to someone we would be drawing diagrams on a whiteboard. Yet pretty much all our coding and debugging is text-based. We are forced to visualize things in our minds.
    I've been debugging the PostgreSQL codebase to better understand how things work. I think a lot of people treat it like a scary magical black box, but it's quite easy to debug a query's execution path and start to understand how it works.
    When we learn about databases, say from Andy Pavlo's CMU database course, there are a bunch of core diagrams used to explain things: the parse tree, logical plan tree, physical plan tree, disk storage layout, btree/bitmap/hashmap indexes, etc.
    So what I was thinking was to instrument the PostgreSQL C codebase (via LLVM-IR), and then visualize some of the key data structures and stores. Imagine how easy it would be to teach database internals if you could type in a query and then visually step through its actual execution with diagrams. I would imagine that any PowerPoint slides and diagrams could be replaced by this visual interface, with a slider and some filtering options for what to show.
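    A taste of what's reachable today without instrumentation: PostgreSQL already hands you the physical plan tree as JSON via EXPLAIN, which a few lines of code can render as an indented diagram. A minimal sketch, not the LLVM-IR instrumentation described above; it assumes psycopg2 and a reachable local database, and the connection string and query are placeholders:

    ```python
    import json
    import psycopg2

    def print_plan(node, depth=0):
        # Each plan node has a "Node Type" (Seq Scan, Sort, Hash Join, ...)
        # and an optional "Plans" list of child nodes.
        print("  " * depth + f"{node['Node Type']} (cost={node.get('Total Cost')})")
        for child in node.get("Plans", []):
            print_plan(child, depth + 1)

    conn = psycopg2.connect("dbname=postgres")
    cur = conn.cursor()
    cur.execute("EXPLAIN (FORMAT JSON) SELECT * FROM pg_class ORDER BY relname")
    row = cur.fetchone()[0]
    plan = (row if isinstance(row, list) else json.loads(row))[0]["Plan"]
    print_plan(plan)
    ```

    This only shows the planner's output; stepping through the executor's state over time, as proposed above, is where the instrumentation would come in.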
    5 replies
  • curious_reader

    4 months ago
    Just found this gem while cleaning up my Zettelkasten a bit:

    https://youtu.be/MlK0IQmqmCQ?t=2432

    "40:41 "using objects at the lowest level in the system and combining the language with this idea of co-routines so I realized that if you had co-routines and persistence you didn't need files because you basically just paused every structure that you wanted to stay around and just let it stay active there's a way of getting away from data structures so a lot of interesting ideas in this thing and on each iteration of these languages.."
    28 replies
  • Naveen Michaud-Agrawal

    4 months ago
    Do programming/computing systems have to be self-referential/meta-circular to scale?
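    For anyone unfamiliar with the term, here is a toy illustration of meta-circularity (my example, not from the thread): an evaluator for a tiny Lisp-like language, written using the same constructs it interprets. Real meta-circular systems like Lisp and Smalltalk go far deeper than this sketch.

    ```python
    def evaluate(expr, env):
        if isinstance(expr, str):            # variable reference
            return env[expr]
        if not isinstance(expr, list):       # literal value
            return expr
        op, *args = expr
        if op == "lambda":                   # ["lambda", [params], body]
            params, body = args
            return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
        f = evaluate(op, env)                # function application
        return f(*[evaluate(a, env) for a in args])

    env = {"+": lambda a, b: a + b}
    prog = [["lambda", ["x"], ["+", "x", 1]], 41]
    print(evaluate(prog, env))  # 42
    ```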
    10 replies
  • Mariano Guerra

    4 months ago
    Why don't we "put our code where our mouths are"? I've used really alpha projects in many areas, but I almost never consider using "future of code" projects beyond a short evaluation. My main "excuses":
    • the result must be available as a web page/app
    • if it involves writing, it must have vim keybindings (that's why I never stick to "tools for thought" note-taking apps)
    Which are yours?
    4 replies
  • Jack Rusher

    4 months ago
    This tweet is a distillation of my position in a long thread that's linked below. It would be nice to hear from some of you in that linked thread, which already features a bunch of FoC-adjacent people. https://twitter.com/jackrusher/status/1525357409681776640?s=20&t=u0-jN3LMw24VkZJUduvBaQ
    15 replies
  • hamish todd

    4 months ago
    Been thinking of getting a Remarkable / Onyx Boox / general note-taking/drawing tablet, intended as a serious replacement for my mechanical-pencil-and-paper-notebook setup. I'm using it as an opportunity to think about these devices as thinking/coding tools.
    36 replies
  • hamish todd

    4 months ago
    If anyone has a writeup/previous discussion on here of this kind of thing, I'd love to see it. But essentially I was thinking about something like this:

    https://www.youtube.com/watch?v=nqx2RKYH2VU&t=6s

    Of course Bret demo'd Stop Drawing Dead Fish and, I think, Drawing Dynamic Visualizations on touchscreens. I think a stylus is a good addition to that. It's slightly unpleasant to put one's finger on a touchscreen and try to do very precise manipulations of potentially pixel-sized things below it using one's relatively fat fingers. Multitouch is nice in theory, but genuinely, aside from pinch-to-zoom/rotate, I know of nothing else good that uses it.
    2 replies
  • Kartik Agaram

    4 months ago
    @Ivan Reese at https://futureofcoding.slack.com/archives/CEXED56UR/p1652719357709289?thread_ts=1632872466.023400&cid=CEXED56UR:
    what even is a computer and what will we do with it?
    I've actually been struggling with this question a whole lot outside this community. Perhaps I should bring y'all in:
    • https://lobste.rs/s/i1b6tw/computer_science_was_always_supposed_be#c_2mffpq
    • https://forum.merveilles.town/thread/35/what-are-computers-for%2c-anyway%3f-38
    • https://merveilles.town/@akkartik/107896035093628757
    • https://merveilles.town/@akkartik/108043961340591227 (and thread)
    The conclusion I've currently arrived at is:
    • The kind of computers we have today prioritizes large organizations that pay people with money. The influence of this mindset is deep and infects almost all our tools. This "computer industrial complex" is often useful on short timescales, but it is also blind to a lot of the value people create with it. As a result, future decisions by the computer industrial complex often destroy value at people scales.
    • Right from the start (suspense alert), there's been a shadow computer with a different, more convivial purpose. Confusingly, this shadow computer often looks just like other computers. You have to look closely to see past the camouflage.
    • It's hard to see what people would do with these shadow computers if we weren't immersed in a world created by organizations. After some time thinking about it, I can't find a better answer than the one (drumroll) Vannevar Bush arrived at, right at the start: one thing convivial computers are definitely for is to externalize our brains. Paper expands memory. Computers expand both memory and modeling.
    This is a surprising, even shocking, conclusion for me to arrive at. I've always slightly looked down my nose at all the "tools for thought" conversations in this Slack. It's felt too close to productivity porn, most suitable for avoiding doing anything productive. Suddenly they're super relevant.
    But it's not enough to build more tools for thought. We have to think also about the process by which they're built. We have to ensure that the people-factory generating the convivial iPhone is also convivial. Because if it isn't, the conviviality will be short-lived as organizations kill or coopt it for their needs.
    The most important property to preserve, IMO, is to keep the raw code as naked and free from packaging as possible. It should be literally begging to be opened, inspected, tinkered with. Most software today fails to fit this bill. C programs come without source code by default. Browsers require big honking machines to be built from source. We write lovely naked Ruby code, but then package it up into gems that go hide in some system directory where nobody can go look inside them. This is what my future of software looks like.
    18 replies
  • Alex Cruise

    4 months ago
    Time to dredge up Out of the Tar Pit again? It's the only thing I've ever seen in ~25 years that looks like it might actually simplify real-world apps.
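    For context, the paper's proposal is "functional relational programming": essential state lives in plain relations, and all derived state is recomputed by pure functions over them. A minimal sketch in that spirit, borrowing the paper's real-estate domain; the specific relations and names here are invented for illustration:

    ```python
    # Essential state: bare relations (sets of tuples), nothing else.
    offers = {("house1", 275000), ("house2", 310000)}   # (property, price)
    accepted = {("house1",)}                            # (property,)

    def available(offers, accepted):
        """Derived relation, recomputed by a pure function: offers on
        properties whose sale has not yet been accepted."""
        taken = {p for (p,) in accepted}
        return {(prop, price) for (prop, price) in offers if prop not in taken}

    print(available(offers, accepted))  # {('house2', 310000)}
    ```

    The claimed simplification is that no mutable derived state ever has to be kept in sync by hand; it is always a function of the essential relations.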
    17 replies
  • Chris Knott

    4 months ago
    What would you put in an Information Management/Data Modelling version of 7 GUIs? That is, what difficult-to-model scenarios would you use to "stress test" different data formats/information systems? (I'm struggling for the words here, but I'm talking broadly about stuff like relational databases, JSON, XML, Java classes, etc.)
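    One candidate task in that spirit (my suggestion, just to make the question concrete): a playlist model where order matters, the same track can appear twice, and tracks are shared across playlists. Naive JSON nesting duplicates track data, while a naive relational model drops duplicates or ordering. A sketch with invented names:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Track:
        track_id: str
        title: str

    tracks = {
        "t1": Track("t1", "So What"),
        "t2": Track("t2", "Blue in Green"),
    }
    # Entry rows carry (playlist, position, track), so both duplicates
    # and ordering survive, and track data is stored exactly once.
    entries = [
        ("roadtrip", 0, "t1"),
        ("roadtrip", 1, "t2"),
        ("roadtrip", 2, "t1"),  # same track twice: position is part of identity
        ("focus",    0, "t2"),  # same track shared by another playlist
    ]

    def playlist(name):
        return [tracks[tid].title for (pl, _, tid) in sorted(entries) if pl == name]

    print(playlist("roadtrip"))  # ['So What', 'Blue in Green', 'So What']
    ```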
    18 replies