Study highlights how AI models take potentially dangerous 'shortcuts' in solving complex recognition tasks
Deep convolutional neural networks (DCNNs) don't see objects the way humans do, through configural shape perception, and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today.