Non-AI options can also have “hallucinations”, i.e., false positives and false negatives, so if the AI one has a lower false positive/false negative rate, I’m all for it.
Non-AI options are comprehensible, so we can understand why they’re failing. When it comes to AI systems, we usually can’t reason about why or when they’ll fail.