# of-ai

Marcelle Rusu (they/them)

11/15/2023, 1:46 PM
I don't know much about LLMs, so I'm purely asking out of ignorance. AFAIK OpenAI has a powerful LLM system in large part due to large compute power. Is there a future where small businesses / people can compete with mega-corps in the LLM space, or will we be renting GPT (or similar) until the end of time?

Greg Bylenok

11/15/2023, 3:29 PM
In the immediate future, likely no. As LLMs get bigger and bigger, the computing power required grows non-linearly. We may reach a point where general intelligence is "good enough" for most tasks, and then model expansion is deemed uneconomical. We are nowhere near that point, though. Startups and individuals are more likely to compete in domain-specific areas that don't require general intelligence and don't benefit from expanded context windows.

Marcelle Rusu (they/them)

11/15/2023, 3:37 PM
> We may reach a point where general intelligence is "good enough" for most tasks, and then model expansion is deemed uneconomical
Just so I'm clear: in this future, when a model is good enough, can I go ahead & run it myself on my computer? And how does that address my concern?

Kartik Agaram

11/15/2023, 4:14 PM
It feels like an open question. Like in any negotiation, companies will reasonably try to get what they can get away with. So consumers will have to vote with their dollars and adoption for models they can run locally. So far there are good signs that open source models might keep up. On compute requirements, two points: 1. The bulk of compute is needed for training. 2. There is definite economic pressure for training to yield efficient models that can run on phones. So I don't think we'll be constrained by laws of physics here, only the laws of geopolitics.

Greg Bylenok

11/15/2023, 4:16 PM
Re: "can I run locally": Yes. Individuals are currently running FB's open-source model (Llama 2) on high-end personal hardware (say $10K) and just waiting for output. If Llama 2 is "good enough" for your use case, then give it a few years for the hardware to catch up and allow for near-realtime inference. Look at the advancements in GPUs since the 1990s. Look at how mobile phones have evolved since the first iPhone. I wouldn't bet against something similar happening here. However, in the meantime, companies like OpenAI and Cohere are expanding not only their model sizes but also the "context windows" available to their models (early 2023: ~32K tokens; today: 100K tokens). They not only have the investment dollars to build out this infrastructure, but they can also amortize that cost over their entire customer base. An individual or small startup is at a huge disadvantage there.

Konrad Hinsen

11/16/2023, 6:28 AM
To complement @Kartik Agaram's observations, there's an interesting middle ground between training and using, and that is specializing pre-trained models for a specific domain. I expect this to become more important as it is so far the best path we have to obtaining more reliable output from LLMs. Open Source models could well become highly competitive if specialization can be done at low computational cost.

Christian Gill

11/16/2023, 2:44 PM
What is the point of running something locally if it was still trained by a big corporation? Yes, it might guarantee that they aren't using your interactions with the model for further training, but they already used everybody's data without permission to train the version you are running now.

Marcelle Rusu (they/them)

11/16/2023, 2:46 PM
We're talking about AI as a shift in humanity, comparing it to the internet. But at least with the internet, I could connect my personal cheap computer to the internet & be a contributor. Where is that with LLMs? If this tech is purely dependent on the resources of big tech, I'm having a lot of trouble getting behind it.

Christian Gill

11/16/2023, 2:46 PM
I have an idea that is probably stupid, since I don't know how the "training" part works: could the training be distributed across end-user devices, similarly to crypto mining? I don't know what the incentive would be, but say you get enough people to run their laptops overnight to contribute to training a model, that could add up to considerable compute power.

Kartik Agaram

11/16/2023, 4:31 PM
Depends on what problem you're trying to solve. Where the training happens doesn't seem to affect the "violating copyright" problem... 🤔

Christian Gill

11/16/2023, 5:47 PM
I meant it more as addressing the "open source models can have the same compute power as OpenAI" problem. If it's OSS, there's also more incentive to not infringe copyright.

Marcelle Rusu (they/them)

11/16/2023, 10:30 PM
Nightmare scenario if we can't compete:
- Companies use OpenAI; it's cheap & gives 10-100x productivity. A company pays for a per-user license so all its developers & designers can use OpenAI to design & create their product.
- Prices go up as adoption grows and enough designers & devs lose their jobs.
- Small companies can make their product in days or weeks; a company gets VC funding, tons of users, hires & hires. OpenAI jacks up its charges; the company has no path to profitability. No company can scale to pay X employees, because using OpenAI will cost more than what companies can generate.
- The pool of devs & designers has shrunk, and you're running against best practices if you run a non-AI-based company: hiring and public knowledge sharing become harder, a lost art.

This is inspired by what's happening with cloud costs currently.
As a contrast, the utopia is that product developers become true generalists. We learn to develop a nuanced taste & respect for all aspects of development, and are able to understand how our systems work by utilizing AI for debugging, documentation, etc. The goal is no longer to understand how C-like languages work, but to understand the core of design patterns and abstraction, and additionally to deeply understand design, typography, and hopefully even what makes a product useful to people.
Do other people see this, or am I just blind to what we're getting here? AI initially scared me because I'd lose my job. I'm less scared of that now, but it's still a concern. Now I'm scared it'll absolutely destroy & control a large portion of the economy.
I apologize for all the messages, but these are distinct thoughts. I think instead of thinking of AI as the new internet, we need to think of it as the new car. The car destroyed not just the environment but cities: our cities are significantly less safe, less beautiful, much slower to maneuver, and importantly(!) more expensive. (Too much to unpack in a comment; see "Not Just Bikes" if this isn't clear to you.) I'm not saying cars shouldn't exist, but we made the wrong call by making them the cornerstone of society. The streetcars, and more importantly the train tracks, were ripped out for the car: affordable (& now clearly more efficient) modes of transportation traded for a private, expensive, polluting one.

Christian Gill

11/16/2023, 11:00 PM
And it also gave people the freedom to go wherever they want, whenever they want. To a level that bikes don't give you (range is much more limited) and public transport can't either (a set schedule vs. taking your car and driving away).
Sorry I don't want to start a philosophical discussion on cars either haha

Marcelle Rusu (they/them)

11/16/2023, 11:01 PM
I can't afford that privilege because I don't have a license, and I'm not ready to drop what it would cost to own a car. More importantly, many others less privileged can't. Trains actually do this for people.

Christian Gill

11/16/2023, 11:01 PM
I don't have a license either 😂

Marcelle Rusu (they/them)

11/16/2023, 11:02 PM
Please check out "Not Just Bikes", it's a real serious problem, and I think there's a strong parallel here.
1, 2, 3 <- some intro videos

Christian Gill

11/16/2023, 11:02 PM
I don't think it's either/or. I find public transport great; that's how I move around. But I do see the car as a freedom icon.
Anyway, regarding AI: I used to be afraid it would cost me my job as well, then came to realize it'll take longer, and if/when it does take my job, that'll be the least of the problems.
> AI is dangerous, yes, but not for most of the reasons that tend to be discussed. I’m less worried about Terminator or economic upsets as I am about the further breakdown in sense- and meaning-making. [...] Specifically, I’m worried about the cultural effects: people with AI romantic partners, people getting arrested for mistreating their computers, people committing their life to the illusion that they’ll live on in a computer realm, people further reduced to despair and nihilism on the reductionistic assumption that consciousness is just computation, people electing computers to positions of power (and being castigated as backwards oppressors if they don’t acknowledge personhood in the algorithms), people bowing down and worshipping code (while paying its priests money), etc. In short, I’m worried much more about the human folly than the dangers of the technological advance itself.
Quoting https://www.brendangrahamdempsey.com/ but I don't know where this specific passage was posted.
Ohh, I just realized I had watched some videos of "Not Just Bikes". I definitely like the Dutch style of city planning more than the American one, especially in big cities. My point was that cars, private ones, still have their place. But I do agree with you that they shouldn't be the center of the economy.
I don't see the connection with AI. How would it be the new car? 🤔

Kartik Agaram

11/17/2023, 2:10 AM
I have the opposite question. Why is AI not like the internet?
> I think instead of thinking of AI as the new internet, we need to think of it as the new car.. which destroyed not just the environment but cities

Personally I think the internet has destroyed a lot, just like cars. Thinking in terms of any single technology seems misleading. Any new tech requires care in deployment, and humans don't have a great track record of thoughtful deployment. (Why? Here I am on more speculative ground, but my preferred part of the elephant to grab on to is that most people are really bad at pursuing their own long-term self-interest, and so get easily suborned by minority interests. In this I'm following the ideas of systems thinking (Donella Meadows and others) and computational thinking (Seymour Papert and others).)

Marcelle Rusu (they/them)

11/17/2023, 3:24 AM
I want to give a thoughtful, more in-depth reply. But quickly: why isn't AI like the internet? The internet is (at least in part) built on open standards, not dependent on an entity but on a (well-specified) idea.

Kartik Agaram

11/17/2023, 4:41 AM
Ah, the Internet as opposed to the www. Fair, but to me that just shows the limitations of open and standards. We started out with all these nice things, but things still went to heck. Nice things don't remain nice if people don't appreciate them.
If the past is a guide (I think so) we'll have plenty of open AI, but things will still suck. Less than they would without the open bits, but somehow it won't be much consolation to me.

Konrad Hinsen

11/17/2023, 12:41 PM
The car discussion is interesting because that's one of the main examples that Ivan Illich (in "Tools for conviviality") gives for what he calls a "radical monopoly": a technology that restricts the freedom of those who want to or have to use a competing technology (bikes, for example). Which is why discussing "freedom" associated with cars is a complicated story. Cars increase some people's freedom and restrict other people's freedom. Illich argues that such radical monopolies should always be subject to political deliberation, i.e. a society should consciously decide whether it accepts or rejects them. There's a good chance that AI will become a radical monopoly as well, so all that is quite relevant here. But the existence of an "open" version of the technology probably makes a difference - I'll have to think about that a bit more!

Marcelle Rusu (they/them)

11/17/2023, 2:37 PM
The open part is less important to me than the protocol bit. I think this is where the comparison of AI to the internet really falls apart.

The promise which falls apart with both AI & cars is what they mean by "freedom". What they mean is "freedom from other people", more than anything else. The promise that I, the individual, should be able to create, be, build anything I want without downsides. With cars came the suburbs: a mini mansion, a selfish paradise, but I've lost my community, I've grown to be scared of the people in my city. <- Importantly, we've known for some time that this way of life is horribly unsustainable. With AI, I can create entire products without interacting with anyone. Unfortunately, to me this is the least interesting thing about both cars and AI.

Cars are great for workers and artists, truly liberating for those who often have a lot of things they need to carry. Similarly, AI will be good for creatives who have big dreams; maybe one day I could actually make my musical. BUT AI will not make people creative, and this is where I see it falling apart. By the time it's too late, some (not all) will realize not everyone is good at art, music, products, but we'll have no way out.

To make it extra clear: like the car, AI has its place, but it should not become the default way we do product / creative work. We should be careful not to repeat what happened with cars.

Christian Gill

11/17/2023, 2:52 PM
Thanks for elaborating, I do get your point now

Kartik Agaram

11/17/2023, 2:54 PM
> I think this is where the comparison of AI to internet really falls apart.
Sorry I keep harping on this, but I really don't think so. I think the failure of the internet has deep lessons to offer if you look. Elegant open protocols are not enough, they're just breeding grounds for gnarly proprietary protocols. The internet too offered freedom, freedom from other people. And it has led to its own suburbs and selfish paradises (lovely phrasing, btw). Don't discard one cautionary tale to focus on another. All cautionary tales are precious because they form so slowly over time. In the end they're just mirrors reflecting our own selves. We're going to need all the help/mirrors we can get to responsibly husband AI.

Marcelle Rusu (they/them)

11/17/2023, 2:59 PM
You're right that the internet has failed, and I guess I'm trying to say it could have been worse, and it could be better by "finishing" it, as in protocols for distributed data & compute. I shouldn't be so kind to the internet, you're right, but the point I'm trying to make is that the internet still feels salvageable as a foundation for humanity. Cars are not. Whether or not my hopes for the internet will manifest is rightly very much up for debate, but I think it's unique in that it could be a foundation, vs. having to largely undo the damage of cars on our cities.

Konrad Hinsen

11/18/2023, 6:07 AM
For me the main lesson from radical monopolies that we already live with (cars, the Internet, ...) is that they develop so slowly that we (society at large) don't realize it before it is too late. We are happy to shape our tools, with the best intentions, but don't notice how our tools are shaping us. There are always individuals who do see it early, and say so loudly, but they are ignored by the enormous majority, who remain infatuated with the shiny new tools and don't even want to consider that they could do us harm later. Unfortunately, I have no idea how to break out of such dynamics.