There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn’t it insane how far we’ve come since then?
Image generation, video generation, self-driving cars (Level 4, where the driver doesn’t need to pay attention at all times), and capable text comprehension and generation, whether it’s used for translation, help with writing reports, or coding. And to top it all off, we have open source models that are at least in a similar ballpark to the closed ones, and those models can be run on consumer hardware.
Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.
Lol. It doesn’t do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff that is being hyped as “AI” is just spam generators.
Confidently incorrect.
I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between ChatGPT and open source models means that people like us will probably continue to find ways of using AI as it improves, but what I keep seeing is botched applications, where neither the consumers nor the investors who are pushing AI really understand what it is or what it’s useful for. It’s like trying to dig a grave with a fork: people are going to throw away the fork and say it’s useless, not realising that that’s not how it’s meant to be used.
I’m concerned about the way the hype behaves because I wouldn’t be surprised if people got so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.
I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it’s AI.
Businessmen are just the public but with money.
I hope it collapses in a fire and we can just keep our FOSS local models with incremental improvements, that way both techbros and artbros eat shit
Unfortunately for that outcome, brute forcing with more compute is pretty helpful for now
And even if local small-scale models turn out to be optimal, that wouldn’t stop big business from using them. I’m not sure what “it” is being referred to with “I hope it collapses.”
I was referring to the hype bubble, and therefore the money, surrounding it all
Those recent failures only come across as cracks for people who see AI as magic in the first place. What they’re really cracks in is people’s misperceptions about what AI can do.
Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it’s not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don’t need to jump straight to that level to still get dramatic changes to society and the economy out of it.
I get strong “everything is amazing and nobody is happy” vibes from this sort of thing.
Also interesting is that most people don’t understand the advances it makes possible, so when they hear people saying it’s amazing and then try it, of course they’re going to think it hasn’t lived up to the hype.
The big changes are going to be in how we use computers, especially being able to describe how you want the UI laid out and to create custom tools on the fly.
I found this graph very clear.
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022…