This goes along the line of changing the term to “artificial instinct” instead of intelligence: because it relies on observing others and experiencing, rather than just rote repetition, it’s closer to true learning.
I think this article overstates the case. We can’t explain how people work, and they fuck up all the time. Should we refuse to use air traffic controller AIs that crash planes a tenth as often as their human counterparts, just because we don’t know what they’re thinking?
I suspect the deeper reason is that we’ll have nobody to blame/sue. See: litigious American society.
Yeah, the title is a little sensational, although the article unpacks it and makes some good points. We can certainly understand how deep learning works. However, once we let a large network loose on mountains of data, we can no longer trace back a causal chain. At least, not easily. In machine learning lingo this gets into the difference between “prediction” and “inference”. The latter being the trickier part – and what the article focuses on.
Probably an aside, but I remember seeing those images of cats and whatnot generated by running neural networks backwards to find the strongest stimulus for that particular recognition category. Spooky.
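The trick described above, running a network “backwards” to find what excites one output, is usually called activation maximization: start from noise and follow the gradient that increases one class score. A minimal sketch of the loop, using only NumPy and a toy single-layer “network” (so the gradient is exact; real versions use a deep conv net and autograd, but the idea is the same):

```python
import numpy as np

# Toy "network": one linear layer mapping a 28x28 input to 10 class scores.
# Weights and dimensions here are made up for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))

target = 3  # the recognition category whose strongest stimulus we want

# Start from near-zero noise and do gradient ascent on the INPUT,
# not the weights: each step nudges x toward whatever most increases
# the target class's score.
x = rng.normal(scale=0.01, size=784)
for _ in range(100):
    scores = W @ x          # forward pass
    grad = W[target]        # d(scores[target]) / dx for a linear layer
    x += 0.1 * grad         # gradient ascent step on the input

# x is now an input the "network" classifies most strongly as `target` --
# the one-layer analogue of those spooky synthesized cat images.
```

With a real image classifier you would also add regularization (blurring, jitter) to keep the synthesized input image-like; without it you tend to get high-frequency noise that the network nonetheless classifies confidently.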
That’s just damn terrifying.