# linking-together
I shared this as a response to a message earlier today, but I think it may be worth sharing here, too! Pluralsight shared some interesting research they just completed. The research seeks to validate a framework that can be used to understand developers’ relationship to AI. Quoting from the data highlights of the landing page:
• 43-45% of developers studied showed evidence of worry, anxiety and fear about whether they could succeed in this era of rapid generative-AI adoption with their current technical skill sets.
• Learning culture and belonging on software teams predicted a decrease in AI Skill Threat & an increase in both individual developer productivity and overall team effectiveness.
• 74% of software developers are planning to upskill in AI-assisted coding. However, there are important emerging equity gaps, with female developers and LGBTQ+ developers reporting significantly lower intent to upskill. On the other hand, Racially Minoritized developers reported significantly higher intentions to upskill.
• 56% of Racially Minoritized developers reported a negative perception of AI Quality, compared with 28% of all developers.
Emphasis mine 😄
Those are interesting numbers, but I'm always curious about them. Sample size? Demographics? Why would anyone not have intent to upskill, other than switching careers or only working on personal projects? I'm lazy about doing my own research here, but I know I need to do it.
It’s all in the linked white paper! But my tl;dr is that there are groups (myself among them) that are skeptical of AI for various reasons, be they ethical, environmental, just thinking it’s the wrong tool for the job, etc., and that influences folks' willingness to upskill in AI.
I find it hard not to get provoked by the term "upskilling", but I'm also in the category that isn't taking part.
Our final sample consisted of 3,267 participants: 2,472 ICs and 795 managers
The breakdown by race and gender seemed roughly representative of western industry; not so by sexual orientation (IMO). Analysis of individual questions had much lower N, e.g. AI threat by race was barely half at 1647. Selection bias probably plays multiple complected roles:
We recruited participants...on social media (e.g. X (formerly Twitter), Facebook, Mastodon, LinkedIn, and Reddit), professional listservs of interest to developers, and on the Pluralsight Skills and Flow platforms as banner advertisements
Most obviously, this seems to pull from the minority of programmers tuned in to the pop culture of industry. Pluralsight's business model makes me take this with a pinch of salt, though it seems carefully done. For instance, they pre-registered 5 hypotheses. On the other hand, the 5th has 5 subgroups, only one of which registered what seems to me a small difference (see Fig 8 for a visualization of the 4th bullet above), which is nevertheless reported as a headline. This gives me mild "data blip" feelings.

I don't trust anything relying on self-reported productivity; this taints a bunch of the findings. (To the extent that it (or "team effectiveness") does measure something, I don't think it's what they intend or say they're measuring.)

I'm curious what the exact wording was for the 2 "brilliance"-measuring questions, since it read like it would be difficult to disentangle several related beliefs.
Why would anyone not have intent to upskill other than switching careers, or only working on personal projects?
The question is whether to ~~upskill~~ switch to LLM-assisted programming. Personally I would prefer the question be answered in the inverse first.
The question is whether to ~~upskill~~ switch to LLM-assisted programming
I don't agree with "switch" there, either, though. That seems to introduce a dichotomy that isn't real and isn't of any real benefit.
@abeyer the linked page (not the PDF) very clearly puts the phrase "*the transition to generative AI-assisted software work*" front and center. This term is used repeatedly, along with "era of rapid generative-AI adoption" and "adapt to <this new workflow>". To the extent that some consider AI code generation a now-indispensable part of their vision of programming work, I think "switch" is justified.
but "switch" to me connotes leaving one thing for another -- and it's not clear that the paper (nor any realistic approach I know of right now) actually even implies that, rather they clearly are talking about adding a set of tools to existing ones