Yes, it does that because it was designed to sound convincing, and that is a good method for accomplishing that. That is the primary goal behind the design of all chatbots, and what the Turing Test was intended to gauge. Anyone who makes a chatbot wants it to sound good first and foremost.
Or it’s ChatGPT
Honestly, I think ChatGPT wouldn’t make that particular mistake. Sounding proper is its primary purpose. Maybe a cheap knockoff.
ChatGPT just guesses the next word. stop anthropomorphizing it.
Humans are just electrified meat. Stop anthropomorphizing it.
🙄
Another example of why I hate techies
Found Andrew Ure’s account
it guesses the next word… based on examples created by humans. It’s not just making shit up out of thin air.
Lol making a mistake isn’t unique to humans. Machines make mistakes.
Congratulations for knowing that a LLM isn’t the same as a human though, I guess!
I knew someone would say that.
TalkFOS
Fuck you’re probably right