# thinking-together

Mike Austin

10/19/2023, 4:35 PM
Thoughts on AI code assistants? We don't seem that far off from easily prototyping things like games. An AI could understand what a screen is, a sprite/character, movement, etc. Heck, just tell it to generate a random game and fine-tune it.

Eli Mellen

10/19/2023, 4:38 PM
Not a solid thought, but a link to interesting research that just dropped: Pluralsight shared a study they just completed that seeks to validate a framework for understanding developers' relationship to AI. Quoting from the data highlights on the landing page:
• 43-45% of developers studied showed evidence of worry, anxiety and fear about whether they could succeed in this era of rapid generative-AI adoption with their current technical skill sets.
• Learning culture and belonging on software teams predicted a decrease in AI Skill Threat & an increase in both individual developer productivity and overall team effectiveness.
• 74% of software developers are planning to upskill in AI-assisted coding. However, there are important emerging equity gaps, with female developers and LGBTQ+ developers reporting significantly lower intent to upskill. On the other hand, Racially Minoritized developers reported significantly higher intentions to upskill.
• 56% of Racially Minoritized developers reported a negative perception of AI Quality, compared with 28% of all developers.

Lukas Süss

10/19/2023, 4:44 PM
Seems @andrew blinn gave a talk on AI as a programming assistant recently, with the Hazel system as the context. Not aware of a recording ATM.

Mike Austin

10/19/2023, 4:44 PM
I confuse AI with genetic algorithms sometimes, and I think my example was more GA unintentionally.

Lukas Süss

10/19/2023, 4:57 PM
I think that AI really, really needs some memory beyond the current naive context-window approach, plus some non-AI smart (back)tracing of dependencies that helps fill that memory with what actually matters. ATM I don't have the first clue about vector databases beyond the name, or whether they're a good or bad idea. Wouldn't one want model-weight modifiers? Or are vector databases just that? It's merely obvious to me that storing stuff in a re-read context window is, while the simplest, also the most naive approach possible. Locally running open-source AI would also be nice, to avoid running into exploding fees (for just one thing).
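(For what it's worth, a vector database is roughly this: store text snippets as embedding vectors, then at query time retrieve the nearest neighbors and stuff them back into the context window. A toy sketch, with a made-up bag-of-words `embed()` standing in for a real embedding model:)

```python
# Toy sketch of retrieval-augmented "memory" via a vector store.
# embed() here is a bag-of-words stand-in; a real system would use a
# learned embedding model and an approximate-nearest-neighbor index.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Word-count vector (toy embedding).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.items = []  # list of (embedding, original text)

    def add(self, text: str):
        self.items.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2):
        # Rank stored snippets by cosine similarity to the query.
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add("the player sprite moves with arrow keys")
store.add("enemy spawn rate increases each level")
store.add("save files are stored as JSON")
print(store.retrieve("how does the sprite move", k=1))
# -> ['the player sprite moves with arrow keys']
```

So it's retrieval over stored text, not a modification of the model's weights; the retrieved snippets still get re-read through the context window.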

andrew blinn

10/19/2023, 8:44 PM
no recording unfortunately. paper eventually though. in brief, I think LLMs have huge potential here but also tons of failure cases, hence trying to deeply integrate them with some kind of structured semantic analysis. at the least, retrieving similar code via embeddings feels like both under- and overkill, given that we could use types and variable binding instead to precisely source relevant code. there are certainly interesting hybrid options here, like, if there's no code available that has the relevant types, using a vector database of type embeddings to retrieve 'similar' types, and then proceeding semantically. this all relies on being in a language with nice, expressive types though, and being in a codebase that doesn't (say) just use strings for everything. in the broader sense of assistance, beyond simple code completion, we're looking into combining LaToza's work on programming strategies with type-driven, edit-calculus-based approaches to create scaffolded multi-stage processes for performing non-trivial multi-location codebase edits (the 2nd-to-last slide has a very primitive mockup of what this might look like)
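The hybrid retrieval idea above can be sketched in miniature: look up candidate code precisely by type signature first, and only fall back to fuzzy "similar type" matching when there is no exact hit. This is a toy illustration, not Hazel's actual mechanism; the index, signatures, and function names are all made up, and string similarity stands in for a vector search over type embeddings.

```python
# Toy sketch: type-directed retrieval first, fuzzy fallback second.
from difflib import SequenceMatcher

# A tiny hypothetical "codebase index": type signature -> function names.
index = {
    "List[Int] -> Int": ["sum_ints", "max_int"],
    "List[Str] -> Str": ["concat_all"],
    "Int -> Str": ["int_to_str"],
}

def retrieve(sig: str):
    # Precise, semantics-aware lookup first.
    if sig in index:
        return index[sig]
    # Fallback: fuzzy similarity over the signature text, standing in
    # for nearest-neighbor search over learned type embeddings.
    best = max(index, key=lambda s: SequenceMatcher(None, sig, s).ratio())
    return index[best]

print(retrieve("List[Int] -> Int"))  # exact hit
print(retrieve("List[Int] -> Bool"))  # no exact hit: fuzzy fallback
```

The point of the ordering is that the exact path is sound (it can't retrieve code of the wrong type), while the fuzzy path is only a heuristic for when the codebase has nothing at the right type.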

Ivan Reese

10/20/2023, 1:06 AM
Mary Rose Cook (friend of the show) is working on something exactly related to this, in the context of games. I don't think it's public yet, so I can't share specifics, but keep an eye on her twitter. In short, yes, a lot of the "I think AI needs _____" or "AI could do __, but ____" ideas in this thread are things she's engaging with in this work.