I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.
Some people are gonna lose a lot of other people’s money over it.
Definitely. Many companies have implemented AI without thinking with 3 brain cells.
Great and useful implementation of AI exists, but it’s like 1/100 right now in products.
If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.
At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our product, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters with no guarantee that advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.
Before they laid me off, my old company laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.
“We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.
A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha
That’s an even worse ‘use case’ than I could imagine.
HR should be one of the most protected fields against AI, because you actually need a human resource.
And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand what you want to do well enough. The only productive guy you could hire would be a programmer or something, that could actually tinker with the AI.
I’m sorry. Hope you find a better job on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in a purposely engineered hold-music hell, to have a real person listen to them.
God that sounds like hell.
Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in in the hope they’re the new Amazon. In the end, most will end up like pets.com, but it’s a risk they’re willing to take.
“You might lose all your money, but that is a risk I’m willing to take”
- visionary AI techbro talking to investors
Investors pump money into a bunch of companies so that the chance of at least one of them making it big and paying them back for all the failed investments is almost guaranteed. That’s what taking risks is all about.
Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, ignoring the fundamentals of investing: besides the ever-evolving claims of the CEO, is the company well managed? What is their cash flow, and where is it going a year from now? Do the upper-level managers have coke habits?
You’re right, but these fundamentals don’t really matter anymore, investors are buying hype and hoping to sell a bigger hype for more money later.
Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.
Certainly it neatly explains a lot.
Also called a Ponzi scheme, where every participant knows it’s a scam, but hopes to find some more fools before it crashes and leave with positive balance.
If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.
OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.
I tried to find the advert but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner’s new product (cookies or ice cream or something), showing the owner putting no effort into their personal product and a customer happily consuming because they were attracted by the thoughtless promo.
How are producers/consumers okay with everything being so mediocre??
How are producers/consumers okay with everything being so mediocre??
“You’re always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It’s so human!” - The Good Place
How are producers/consumers okay with everything being so mediocre??
I’m not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. Systemic problem that I can do almost nothing about apart from make things myself out of raw materials.
A lot of it is follow the leader type bullshit. For companies in areas where AI is actually beneficial they have already been implementing it for years, quietly because it isn’t something new or exceptional. It is just the tool you use for solving certain problems.
Investors going to bubble though.
Yeah, can make some products better but most of the products these days that use AI, it doesn’t actually need them. It’s annoying to use products that actively shovel AI when it doesn’t even need it.
Ya know what product MIGHT be better with AI?
Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.
How about a toaster, that accurately, and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???
IS THAT TOO MUCH TO ASK???
Nah. We already have AI toasters, and they’re ambitious, but rubbish.
Adding AI is just serious overkill for a toaster, especially when it wouldn’t add anything meaningful, not compared to just designing the toaster better.
It only needs one simple rule that it can understand: don’t catch on fire. Turn yourself off IF smoke.
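For what it’s worth, that rule really is one conditional; no model needed. A toy sketch (the function name and the 260 °C cutoff are made up for illustration, not from any real toaster firmware):

```python
def toaster_should_shut_off(smoke_detected: bool, temperature_c: float) -> bool:
    """The whole 'safety AI' a toaster needs: shut off if smoke is detected
    or the heating element runs away past a calibrated threshold."""
    MAX_SAFE_TEMP_C = 260.0  # hypothetical; real firmware would calibrate this
    return smoke_detected or temperature_c > MAX_SAFE_TEMP_C
```

One boolean, one comparison. Everything else is just making the heating element consistent, which is a hardware design problem, not an AI one.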
Sweet, I’m the one who gets to link the obligatory Technology Connections toaster video!
Aw man, now I want this toaster.
I said the exact same thing months ago when I saw that video. I don’t even use a toaster.
AI toasters are a Bad Idea
Did you want some toast?
This is the visionary we need. Take my venture capital millions on a magic carpet ride, time traveler!
My doorbell camera manufacturer now advertises their products as using “Local AI,” meaning they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.
I’ve learned to hate companies that replaced their support staff with AI. I don’t mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.
I love it when I have to trick those stupid ai chatbots to let me talk to a human customer service rep
It has been getting so bad that even boring regular phone trees will hang up on you if you insist on talking to a human. If it’s ISP / cellular, nowadays I will typically just say I want to cancel my account, and then have cancellations route me to the correct department.
There really should be a right to adequate human support that’s not hidden behind multiple barriers. As you said, it can be a timesaver for the simple stuff, but there’s nothing worse than the dread when you know that your case is going to need some explanation and an actual human that is able to do more than just following a flowchart.
<greentext>
Be me
Early adopter of LLMs ever since a random tryout of Replika blew my mind and I set out to figure out what the hell was generating its responses
Learn to fine-tune GPT-2 models and have a blast running 30+ subreddit parody bots on r/SubSimGPT2Interactive, including some that generate weird surreal imagery from post titles using VQGAN+CLIP
Have nagging concerns about the industry that produced these toys, start following Timnit Gebru
Begin to sense that something is going wrong when DALLE-2 comes out, clearly targeted at eliminating creative jobs in the bland corporate illustration market. Later, become more disturbed by Stable Diffusion making this, and many much worse things, possible, at massive scale
Try to do something about it by developing one of the first “AI Art” detection tools, intended for use by moderators of subreddits where such content is unwelcome. Get all of my accounts banned from Reddit immediately thereafter
Am dismayed by the viral release of ChatGPT, essentially the same thing as DALLE-2 but text
Grudgingly attempt to see what the fuss is about and install GitHub Copilot in VSCode. Waste hours of my time debugging code suggestions that turn out to be wrong in subtle, hard-to-spot ways. Switch to using Bing Copilot for “how-to” questions because at least it cites sources and lets me click through to the StackExchange post where a human provided the explanation I need. Admit the thing can be moderately useful and not just a fun dadaist shitposting machine. Have major FOMO about never capitalizing on my early-adopter status in any money-making way
Get pissed off by Microsoft’s plans to shove Copilot into every nook and cranny of Windows and Office; casually turn on the Olympics and get bombarded by ads for Gemini and whatever the fuck it is Meta is selling
Start looking for an alternative to Edge despite it being the best-performing web browser by many metrics, as well as despite my history with “AI” and OK-ish experience with Copilot. Horrified to find that Mozilla and Brave are doing the exact same thing
Install Vivaldi, then realize that the Internet it provides access to is dead and enshittified anyway
Daydream about never touching a computer again despite my livelihood depending on it
</greentext>
I liked the article I read where WW2 German soldiers were being generated by AI as Asians, Black women, etc. Glad it doesn’t take context into consideration. lol
I haven’t seen any ai in firefox
deleted by creator
Give me a bunch of open AI models and a big GPU to play with and I’ll generate twenty gigabytes of weird anime fetish content.
This is the only true use of AI
You forgot to add “and post it to Lemmy”.
In your own words, tell me why you’re calling today.
My medication is in the wrong dosage.
You need to refill your medication is that right?
No, my medication is in the wrong dosage, it’s supposed to be tens and it came as 20s.
You need to change the pharmacy where you’re picking up your medication?
I need to speak to a human please.
I understand that you want to speak to an agent, is that right?
Yes.
Chorus, 5x. (Please give me your group number, or dial it in at the keypad. For this letter press that number for that letter press this number. No I’m driving, just connect me with an agent so I can verify over the phone)
I’m sorry, I can’t verify your identity please collect all your paperwork and try calling again. Click
Why ever would we be mad?
I went through a McDonald’s drive-thru the other day and had the most insane experience. For the context of this anecdote, I don’t do that often, so, what I experienced was just weird.
While not quite “AI,” the first thing that happened was an automated voice yells at me, “are you ordering using your mobile app today?”
There’s like three menu-speaker boxes, and due to where the car in front of me stopped, I’m like in between the last two. The other speaker begins to yell, “Are you ordering using your mobile app today?”
The person running drive-thru mumbles something about pull around. I do. Pass by the other menu “Are you ordering using your mobile app today?”
Dude walks out with a headset and starts taking orders from each car using a tablet.
I have no idea what is happening. I can’t even see a menu when the guy gets around to me. Turns the tablet around at me.
I realized that I was indeed ordering using the mobile app today.
To be fair, this is not new, unless you’re counting all answering machines as AI
Hardly. It used to be natural language dictation and decision trees. Now they’re trying to use LLM training to automatically pick up more edge cases, and it’s pretty much b*******.
LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.
Often the answers are pretty good. But you never know if you got a good answer or a bad answer.
And the system doesn’t know either.
For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.
Accurate.
No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.
The worst for me was a fairly simple programming question. The class it used didn’t exist.
“You are correct, that class was removed in OLD version. Try this updated code instead.”
Gave another made up class name.
Repeated with a newer version number.
It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.
So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?
From what I’ve seen you’ll need an iron stomach.
They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.
That is about the experience I have. I asked it for factual information in the field I work in. It didn’t give correct answers. Or it gave working protocols that were strange and would not be successful.
With proper framework, decent assertions are possible.
- It must cite the source and provide the quote, not just a summary.
- An adversarial review must be conducted.
If that is done, the work on the human is very low.
That said, it’s STILL imperfect, but this is leagues better than one shot question and answer
Except LLMs don’t store sources.
They don’t even store sentences.
It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.
And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.
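That loop is simple enough to caricature in a few lines. This is a toy with a hand-written lookup table standing in for the model’s probability space, purely to illustrate the append-and-feed-back loop described above, not how a real transformer works (a real model conditions on the whole context with billions of parameters):

```python
import random

# Toy "LLM": the last token maps to a probability distribution over next tokens.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)  # the output is appended to the input and fed back in
    return tokens
```

Notice there is nowhere in that loop for a citation, a fact, or an “I don’t know” to live; it only ever asks “what token is likely next?”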
Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re in luck and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.
Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs alone, because their structure is entirely unsuited for it.
The proper framework does, with data store, indexing and access functions.
The cutting-edge work is absolutely using LLMs in post-RAG pipelines.
Consumer grade chat interfaces def do not do this.
Edit: if you worry about topics like context windows, sentence splitting or source extraction, you aren’t using a best-in-class framework anymore.
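For anyone curious what such a framework even means, here’s a deliberately tiny sketch of the retrieve-then-cite idea. Retrieval here is just keyword overlap and the “generation” step just quotes the document; the corpus, function names, and scoring are all invented for illustration. The point is that the pipeline can attach a real source because retrieval happens outside the model:

```python
# Toy corpus standing in for a real document store / vector index.
CORPUS = {
    "doc1": "The mitochondria is the powerhouse of the cell.",
    "doc2": "Python was first released in 1991 by Guido van Rossum.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return (doc_id, text) of the document sharing the most words with the query.
    A real pipeline would use embedding similarity instead of word overlap."""
    q = set(query.lower().split())
    best = max(CORPUS, key=lambda d: len(q & set(CORPUS[d].lower().split())))
    return best, CORPUS[best]

def answer_with_citation(query: str) -> str:
    doc_id, text = retrieve(query)
    # A real pipeline would prompt an LLM with `text` and ask it to answer;
    # either way, the citation comes from the retrieval step, not the model.
    return f"{text} [source: {doc_id}]"
```

The adversarial-review step from the list above would then check the quote against the cited document, which is only possible because the source ID survives the whole pipeline.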
Sounds familiar. Citation please
“AI” is certainly a turn-off for me, I would ask a salesman “do you have one that doesn’t have that?” and I will now enumerate why:
-
LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it’s nothing but an extremely elaborate and expensive party trick. I don’t want actual services like web searches replaced with elaborate party tricks.
-
In a lot of cases it’s being used as a buzzword to mean basically anything computer-controlled or networked. Last time I looked, they were using the word “smart” to mean that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn’t any more “AI” than my ’90s microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I’d like to see fail so spectacularly that people in the advertising business go missing over it.
-
I already avoided “smart” appliances and will avoid “AI” appliances for the same reasons: The “smart” functionality doesn’t actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they’ll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there’s no reason the “smart” functionality couldn’t run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than “then we don’t get your data.”
-
AI is apparently consuming more electricity than air conditioning. In fact, I’m not convinced that power consumption isn’t the selling point they’re pushing at board meetings. “It’ll keep our friends in the pollution industry in business.”
Can you help me with problems this complex? Idk maybe we could use it to help make things better. Just most people prompt like things I can’t say because they aren’t nice. Oh by the way. Can you do it right now for $0 please? Thanks!
Edit. Also need it done now. If you’re reading this you were too slow.
Your response doesn’t apply to ANY of his 4 points…
And might be subject to 1. LLMS are wrongness machines.
Yeah, lmfao. I like the tech and all but ignoring criticism and jumping straight to gatekeeping is just soooo bad. Those people are one of the reasons why people dislike ai.
-
As I mentioned in another post, about the same topic:
Slapping the words “artificial intelligence” onto your product makes you look like those shady used cars salesmen: in the best hypothesis it’s misleading, in the worst it’s actually true but poorly done.
I find the tech interesting, but the rush to commercialize it was a bad idea. It’s not ready yet, total uncanny valley.
Literally the only exciting use for it I’ve seen so far is that Skyrim companion. And even that doesn’t work right yet.
I have rolled back, uninstalled, opted-out, or ripped apart every AI that every company is trying to shove down our throats. I wish I could do the same for search engines, but who uses the internet broadly anymore anyway.
I am impressed by the tech, I think it’s amazing, but it’s still utterly useless.
I have never, ever needed to interrupt my day’s schedule to generate a convincing picture of Luke Skywalker fighting Batman while riding dinosaurs. I have never needed to have a text conversation with someone who seems “almost human”; christ, that already describes half the people I know and wish were more normal. I have never needed an article summarized badly. I enjoy reading things, I enjoy writing emails, so I can’t figure out why they would make tools to take away the small pleasures we have. What exactly are they thinking?
Yesterday I gave it one more chance. I asked one of the apps, I forget which, what tomorrow’s weather will be like, and the thing forecast a hurricane coming right for me, a news event from last year. I’m so over AI. Please someone notify me when it’s really useful and can take over the menial, tedious tasks like managing my online accounts and offering financial advice, or can actually help me find a job opening in my field.
All these things have been promised, and seem more out of reach than ever.
The MOST impressive thing I’ve seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
deleted by creator
but it’s still utterly useless.
this is purely false. There are so many applications that bring value and if you can’t admit that then you are biased in some way/shape/form.
As a sw dev, I use AI to speed up menial tasks or help me find different perspectives on certain things; shit, it’s even helpful for debugging tricky things. You don’t need to be a coder to find value in AI though; things like auto-generated transcripts have been so fucking amazing, especially for podcasting in my case.
I could go on and on. To say it is UTTERLY USELESS is disingenuous at best.
The MOST impressive thing I’ve seen AI do is make really, really convincing furry porn babes. The things are good at mixing features in images. Sometimes.
You are quite literally telling on yourself here, you seem to have a limited view of AI application and are judging the entire technology/concept based on that narrow set of use-cases (which appear to be, from your comment, chat bots, porn generators, future weather predictors, not exactly the pinnacle of AI application).
I’m so over AI, please someone notify me when it’s really useful and can take over the menial, tedious tasks
Here you go again! You seem to be equating value to the ability for the tech to function without supervision or assistance. Does AI only provide value to you if it can do those things completely autonomously? What if working with the AI is faster than not using it at all? Is it still useless to you?
When someone is disappointed in something, the very worst way you can make progress in changing that person’s mind is flatly telling them they’re wrong.
You didn’t change anything with your reaction here, I still live in the world with useless, annoying AI. Like most people. I won’t now look at it and think “Hmn I should reconsider how I feel” As I try to refine my search results so it doesn’t feed me complete garbage.
I’m absolutely sure it’s helping some people in specific instances, but we’re not at the point yet where it’s helping people broadly, so I fucking DARE you to say any of this in a larger community where average people with non-coding jobs have to sift through AI bullshit all day.
I know it’s going to help in the future with a lot of things, but it’s also going to get worse before it gets better, and I’m not some lone voice, so you have to get over yourself here. You’re not anywhere close to the majority opinion here; even on the tech/singularity/cult forums there are plenty of people fed up with the current state of marketing and AI being shoved into everything. I stand by every last thing I said here. People in your position are deliberately not reading the negative things people feel about this tech in its current state, but there are a LOT of people who share this feeling.
And you know what? You should embrace it.
Because if people didn’t voice their discontent, it won’t get better. The whinging pushback against criticism just boggles me, like people are so caught up in the cult that they can’t see it as another product that has to be refined and shaped before people can use it and enjoy it in any capacity.
Several top links from “How do people feel about AI”
https://www.forbes.com/advisor/business/artificial-intelligence-consumer-sentiment/
https://hbr.org/2024/05/ais-trust-problem
And of course, THIS VERY ARTICLE: https://futurism.com/the-byte/study-consumers-turned-off-products-ai
They keep using it for really stupid things. I agree all the image generators are bloody pointless, the quality isn’t good enough and you don’t have the control you need to make them useful.
Maybe I’d be more interested in AI if there was any I with the A. At the moment, there’s no more intelligence to these things than there is in a parrot with brain damage, or a human child. Language Models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren’t AI and I won’t be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).
Just so you know, I totally agree with you, but if you go far back enough in my comment history I had a really interesting (imo) discussion/argument with someone about this very topic, and the topic of how to determine if an AI ‘thinks’ or ‘reasons’ more broadly.
It can be helpful to approach this from the other direction: ask which part of the brain works like an LLM.
This is because AI is usually used to reduce the human cost to the company, and rarely to reduce the human labour for the customer.
That, or mass surveillance.
AI has some pretty good uses.
But in the majority of junk on the market it is nothing but marketing bloatware.
It does and AI is being tarnished by the hype/marketing.
Not long ago Firefox announced it would deliver client-side “AI” to describe web pages to differently-abled users. This is awesome.
Some people on Lemmy conflated AI and Large Language Models and complained about the addition. I don’t blame them; not everyone is an IT pro and is equipped to understand the difference between machine learning models, LLMs and such. I mentioned Firefox has “AI” for client-side translation and that’s a great thing. They wondered since when “AI” was used for translation. Machine learning/deep learning translation has been a thing for over a decade and it’s amazing. It’s not LLM (even if LLMs are really good at translation).
The market has pushed “AI” too hard, making people cautious about it. They are turning it into the new “blockchain”, where most people didn’t find any benefit from the hype; on the contrary, they saw the vast majority of it being scams.
even if LLMs are really good at translation
As someone who actually played Japanese RPG games translated with AI on dlsite: bullshit.
I can’t really agree as a video producer. Luma, Krea, Runway, Ideogram, Udio, 11Labs, Perplexity, Claude, Firefly -> All worth more than they’re charging, most with daily free options. They save me a ton of time. Honestly, the one I’m considering dropping at the moment is ChatGPT.
The irony is companies are being forced to implement it. Like our board has told us we must have “AI in our product.”. It’s literally a solution looking for a problem that doesn’t exist.
It’s because automated trading bots bid up any company whose name appears in headlines next to the word AI.
The stock market is an economic shitpost.
This just screams “The CEO read about it on linkedin while taking a dump and now feels it is vital to the company.”
My boss’s boss’s boss asked for a summary of our roadmap. He read it, and provided his takeaways… 3 of the 4 bullet points were AI-related, and we never once mentioned anything about AI in what we gave him 😑 so I guess we’re pivoting?
This is basically forcing AI based spying from the government
Developer: Am I out of touch?
No, it’s the consumers who are wrong.
Stakeholder: Am I pushing the wrong ideas onto the managers?
No, it’s the developers who don’t know how to implement the features I want.
I have no qualms about AI being used in products. But when you have to tell me that something is “powered by AI” as if that’s your main selling point, then you do not have a good product. Tell me what it does, not how it does it.