Morlock Elloi on Fri, 29 Dec 2017 20:31:44 +0100 (CET)



Re: <nettime> Deep Fool


I have only a casual understanding of ML, but it was always counter-intuitive to me that simple polynomial units can somehow produce a magical macro effect that no one understands but that "just works". If it turns out that it's just a roundabout way of conditioning a linear system, the magic goes away.

incorrect answer with high confidence. Early attempts at explaining this
phenomenon focused on nonlinearity and overfitting. We argue instead
that the primary cause of neural networks' vulnerability to adversarial
perturbation is their linear nature. This explanation is supported by
new quantitative results while giving the first explanation of the most
intriguing fact about them: their generalization across architectures
and training sets. Moreover, this view yields a simple and fast method
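The linearity argument in the quoted passage can be illustrated with a small numeric sketch (my own, not from the quoted paper): for a single linear unit, a perturbation of at most epsilon per component, pointed along sign(w), shifts the activation by epsilon times the L1 norm of w, which grows with the input dimension. All names and values below are hypothetical.

```python
import numpy as np

# Sketch of the linearity argument: for a linear unit y = w.x, the
# perturbation eta = eps * sign(w) is bounded by eps per component
# (invisible at pixel scale) yet shifts the activation by exactly
# eps * ||w||_1, which grows linearly with the input dimension n.
rng = np.random.default_rng(0)
n = 10_000                      # input dimension (hypothetical)
eps = 0.01                      # per-component perturbation bound
w = rng.normal(size=n)          # weights of one linear unit
x = rng.normal(size=n)          # a clean input

eta = eps * np.sign(w)          # fast-gradient-sign-style direction
shift = w @ (x + eta) - w @ x   # change in the unit's activation

# shift == eps * ||w||_1: a per-component change of 0.01 moves the
# activation by tens of units once n is large enough.
print(shift, eps * np.abs(w).sum())
```

In high dimensions, many tiny coordinated changes add up to a large change in a linear response, which is the intuition behind the "simple and fast method" the abstract alludes to.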

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: