# share-your-work
l
I've been (trying) to build autocomplete for canvas. keen to hear thoughts/ideas from anyone else who's worked on anything similar-ish!

https://youtu.be/r6ls8Gw9MmY?si=JT8VjUvxgcRk9MxP

😎 1
💡 1
🔥 15
❤️ 1
🫧 2
l
Soo cool! I've been wanting something that continuously shows me inspiration of similar things (that possibly others are working on, for connection; or not, just as suggestions); sometimes in a place like this, but mostly as a range of "alternate futures". I.e. with the grid example, from just two datapoints, you could extrapolate in multiple ways! It can get frustrating when the assistant goes with something different than you intended, but visual things have the benefit of being easily distinguished at a glance! Have like five "agents" that are more stable across time, but continuously change with your changes. Each estimates its relevance and fades accordingly, getting replaced by a fresh new agent that starts off from your latest view. All five on a row, then scrolling down to see the future, i.e. letting them progress even further, but really, causing a fractal tree as the future branches exponentially for every choice, but again, with less relevant branches fading into the z-distance. Tap any one to see it bigger, either just highlight elements to bring into your canvas, or fully switch into their world.
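Roughly how I picture that agent row working, as a minimal sketch (all the names, types and thresholds here are made up for illustration, not from any real API):

```ts
// Sketch of the "five agents" idea: each agent holds one candidate future
// of the canvas, re-estimates its own relevance after every user change,
// fades as that score drops, and gets replaced when it fades out.

type CanvasSnapshot = unknown; // whatever your canvas state looks like

interface FutureAgent {
  id: string;
  prediction: CanvasSnapshot; // the "alternate future" this agent proposes
  relevance: number;          // 0..1, re-estimated after every user change
}

const MAX_AGENTS = 5;
const FADE_OUT_BELOW = 0.2;

function updateAgents(
  agents: FutureAgent[],
  latestCanvas: CanvasSnapshot,
  estimateRelevance: (a: FutureAgent, canvas: CanvasSnapshot) => number,
  spawnAgent: (canvas: CanvasSnapshot) => FutureAgent,
): FutureAgent[] {
  // Re-score every agent against the user's latest view.
  const rescored = agents.map((a) => ({
    ...a,
    relevance: estimateRelevance(a, latestCanvas),
  }));

  // Agents that faded below the threshold are dropped...
  const surviving = rescored.filter((a) => a.relevance >= FADE_OUT_BELOW);

  // ...and replaced by fresh agents that start from the latest view,
  // keeping the row of five stable over time.
  while (surviving.length < MAX_AGENTS) {
    surviving.push(spawnAgent(latestCanvas));
  }

  return surviving;
}
```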
👍 1
m
Been thinking about this for a while! Ultimately, I want a brain reader that gets the context of what I am trying to do and passes it on to my AI buddy so that I can just think and see things happen on the screen. Until then, it’d be nice to have an AI buddy with whom to chat (as in actually talk) on things like:
• What do you think of this flow, any other ideas?
• Prepping this flow for ‘x’ client and you know how prickly they are on ‘y’, anything jumping out at you in this sense?
• There’s something off I can’t quite pin down here, any thoughts?
• This looks a bit messy, could you clean it up for me?
and get a mix of pixels and audio waves back.

My current common uses for AI buddies – mostly patching GPT-4 output into Figma – are:
• Content and data generation
◦ here’s an example metric for the dashboard UI/UX I am designing, give me 10 more
◦ give me 200 random numbers between 1 and 100 (off-topic: would be really curious to study the output and see how random it actually is)
◦ here’s a dataset heading for a dataset containing ‘x’, give me a blurb for it, give me another 10 datasets similar to ‘x’ with blurbs
• Sense checking
◦ Which icon would you use to identify the concept ‘festival’, what about ‘infrastructure project’, what about ‘the sense of awe one gets while staring at the light shining through the trees after it stopped raining at dusk’?

Lastly, explorations I’d like to do in a shorter timeline are in the direction of:
• ‘Design-critique’ AI buddy
• ‘Pretend-to-be-a-user-with-xyz-accessibility-issue-and-record-yourself-running-my-ux-research-script-while-using-my-prototype’ AI buddy

Curious to see how this evolves!
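For the content-generation bit, the loop is roughly this kind of thing – a minimal sketch against the OpenAI chat completions endpoint, where the model name, prompt and parsing are just placeholders, not a real plugin:

```ts
// Sketch of the "here's an example metric, give me 10 more" workflow.
async function moreExampleMetrics(example: string, apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4", // placeholder model name
      messages: [
        {
          role: "user",
          content: `Here's an example metric for a dashboard I'm designing: "${example}". Give me 10 more, one per line, no numbering.`,
        },
      ],
    }),
  });

  const data = await res.json();
  // One metric per line, as requested in the prompt.
  return data.choices[0].message.content
    .split("\n")
    .map((line: string) => line.trim())
    .filter(Boolean);
}
```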
l
@Mattia Fregola 'mind reading' is something that I really wanted to evoke with this prototype. but I always try to steer really clear of the analogy of the model feeling like a 'person' with what i make. i feel like that whole 'artificial intelligence as a person' idea is something that will die as we get more used to artificial intelligence, and it'll feel dated at some point (tell me if I'm wrong in 5 years time). the reason for that is... i don't think it's what people actually want. 'people' are actually kinda hard and slow to work with. you're forced to communicate through slow clumsy words, and they're unpredictable. i feel like this reflects the currently available artificial intelligence products we have right now. but models can actually work with us in completely different ways, that we haven't even begun to explore yet. as a user, i want to:
• see the branching futures of what I'm creating in realtime.
• get specific types of help from a model without needing to engage in a back and forth conversation
🔥 1
m
Cool cool @Lu Wilson. Prolly getting away from the original goal here, but feeling like adding more clumsy words to the mix. 👀 I disagree on ‘people don’t want people’. I see what you’re saying about there being ‘inefficiencies’ in human-to-human communication, though I think the same elements you bring as an example are so built into our being human that I struggle to picture them going away as a way of interacting. If I think about it, any thought I have, any plan, design, project, even this short bit of text, is a loop of sloppy conversations (be it inner or outer, with myself, other people or models) going from the brain to the World and through many interfaces back to other brains, and so on and so forth. I am not saying that there won’t be different and unexpected ways to interact with models, here I agree, and think it’s a great and fun challenge to discover them and invent them; I am saying that I can’t see the ‘AI as a person’ analogy ever going away. Sort of like flying on a plane can’t ever make walking redundant (not quite the perfect analogy but good enough to convey the idea).
l
That's fair! and it'll be fun to see how right or wrong I am over time :) perhaps I could share two other perspectives I've picked up to try to explain where I'm coming from - i go to a lot of artificial intelligence meetups and events, and see a lot of demos. and 98% of them are: "we added a textbox that lets you talk to a 'person' artificial intelligence, and we've given them a cute name like 'Frenda'", and this thing gets very very boring to me and other people, and it's always very slow and cumbersome, and not great. it's hard to explain what you want. clicking through menus and buttons is ten times easier. so part of my job is to make demos and prototypes that have an impact, and get people to start thinking outside the box. so of course I'm gonna avoid the usual bare minimum approach. and i stick to this branding like "Make Real" or "Draw Fast" or "Lens" or "autocomplete for canvas" which I think is a really important part of a product. it changes your relationship to it. internally, I have really resisted personifying it, or adding textboxes. i sometimes sound like a broken record. it's not to say that textboxes are always bad, but i think it should be a last resort - an escape hatch - not your primary feature. by adopting this approach, I think the tldraw AI demos have had a bigger impact. it has forced me to explore more natural interactions, and it has cut through all the other prototypes out there.
🍰 1
that's perspective 1!
perspective 2 - in the history of personal computing, user interactions typically start out with a text input, and then gradually change towards 'pointing'. we started out having to type instructions into a command line and it was really cumbersome. but then some very clever people invented this thing called a 'mouse' and we never looked back. we could 'point' at what we wanted to do instead. sure, power users still use a terminal, or command palette. but that's not the default. with phones, we started out typing on tiny number keys, and eventually clunky plastic keyboards. it was the only way to interact, and it was very cumbersome. then some very clever people invented this thing called the finger and we never looked back. we can point with our finger on all our phones now. so now with AI, we're still in the very early days, and the primary way to interact is by typing text into a little text box. i think it's really cumbersome. i can't see this going differently from other technologies. i think we're gonna be 'pointing' eventually, by drawing, clicking, dragging, and so on.
🍰 2
m
Awesome perspectives!
Relevant Figma bit here

https://youtu.be/n5gJgkO2Dg0?t=2641