Comments on The Unofficial Google Data Science Blog: Attributing a deep network's prediction to its input features

Liwen Ouyang (2018-08-03):

It seems that if one has a more traditional supervised learning problem, with a set of features and either a numeric or categorical response, this method does not work?

mukund (2017-11-21):

Dear dp1080,

I'd love to have a more detailed chat about what you mean by the Stokes' theorem reference and also the p-value point. It sounds interesting, but I am not able to unpack all of it. Could we chat over mail (mukunds@google.com)?

I am happy to update this blog post with a summary of our chat.

(1. We have played with absolute values in the past.
2. ReLU-based networks provably satisfy the degree of smoothness we need to run our method.)

Thanks!

dp1080 (2017-11-17):

Hello! I came across this while studying for an interview at Google, and this concept seems very interesting to me. I know highly regulated fields such as risk management and medicine ("old-fashioned statistics") are averse to using neural network models because they are currently unexplainable in terms of the feature space. Defining metrics like these could go a long way toward changing hearts and minds!

A few pieces of constructive feedback, if it helps:

1) Although noise might be considered meaningless here if the data set is large enough, it would be useful to see how your metric infers confidence bounds on neural networks. In particular, I would love to see whether this constructs a GLM-style "p-value" that can gauge the probability that the function has a critical point at x_0.

2) By assuming the function has a gradient, it seems you aren't entirely averse to believing the function is smooth. If I were designing this, I think I would have come up with something similar, except I would have used the absolute value of the partial derivative. The reason is that the neural network is an analytic function (in its classical construction it is a finite composition of sigmoid functions), and showing that an analytic function is zero uniformly on any closed curve implies it is zero on the interior. Have you considered this approach?
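For readers following the exchange about gradients and smoothness: the attribution method the post describes (integrated gradients) integrates the gradient along a straight path from a baseline to the input. Below is a minimal sketch on a toy one-layer ReLU model; the model, its weights, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

W = np.array([1.0, -2.0, 3.0])  # toy weights, chosen for illustration
B = 0.5                         # toy bias

def model(x):
    """Toy scalar model: ReLU(W.x + B)."""
    return max(W @ x + B, 0.0)

def grad(x):
    """Gradient of the toy model: W where the pre-activation is positive, else 0."""
    return W if (W @ x + B) > 0 else np.zeros_like(W)

def integrated_gradients(x, baseline, steps=100):
    """Riemann-sum (midpoint rule) approximation of the path integral
    of the gradient from `baseline` to `x`, scaled by (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 0.5, 2.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
# Completeness: the attributions sum to model(x) - model(baseline).
assert abs(attr.sum() - (model(x) - model(baseline))) < 1e-6
```

Note that the ReLU is active along the whole path here, so the gradient is constant and the approximation is exact; in general, more `steps` give a better approximation. This also illustrates mukund's point 2 above: the piecewise-linear ReLU network is smooth enough for the path integral to be well defined almost everywhere.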