Anecdotally, I’ve suspected this was already happening with code-related AI, because I’ve noticed a pretty steep decline in the quality of the code suggestions various AI tools provide.
Some of these tools, like GitHub’s AI product, are trained on their own code repositories. As more developers use AI to help generate code, and especially as more novice developers rely on AI to learn new technologies, more of that AI-generated code (in theory) gets added to the repositories used to train the AI. Not all AI-generated code is garbage, but in my experience enough of it is that, without human correction and oversight, I suspect we’re headed for a garbage-in, garbage-out situation. And as far as I can tell, these tools aren’t currently applying good metrics to judge whether the code they train on is high quality, or even whether it works at all; a rough sketch of the kind of check I have in mind follows.
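To be clear about what I mean by “metrics,” here’s a minimal, purely hypothetical sketch of a quality gate that could sit in front of a training corpus: reject code that doesn’t even parse, and optionally reject repos whose own test suite fails. The function names and the pytest assumption are my illustration, not how GitHub or any other vendor actually curates training data.

```python
# Hypothetical sketch of a pre-training quality filter, not any vendor's real pipeline.
import ast
import subprocess
from pathlib import Path


def parses_cleanly(source: str) -> bool:
    """Cheapest possible filter: reject code that isn't even valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


def passes_its_tests(repo_dir: Path) -> bool:
    """Run the repo's own test suite (assumes pytest is set up) and treat
    failure as a signal to exclude the repo from the training set."""
    result = subprocess.run(
        ["pytest", "-q", "--maxfail=1"],
        cwd=repo_dir,
        capture_output=True,
        timeout=600,
    )
    return result.returncode == 0


def keep_for_training(source: str, repo_dir: Path | None = None) -> bool:
    """Combine the checks. Real curation would add linting, duplicate
    detection, license filtering, and probably a human in the loop."""
    if not parses_cleanly(source):
        return False
    if repo_dir is not None and not passes_its_tests(repo_dir):
        return False
    return True
```

Even something this crude would at least keep syntactically broken or test-failing code out of the pool; whether anything like it is actually in use, I don’t know.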
More and more often I’m getting ungrounded output (the newer term for hallucinations) when it comes to code, rather than the genuinely helpful and relevant suggestions that had me so excited when I first started using these products. And I worry it’s going to get worse. I hope not, of course, but it’s a little concerning when the AI tools are more consistently providing useless or broken suggestions.