Node perturbation learning without noiseless baseline

Tatsuya Cho, Kentaro Katahira, Kazuo Okanoya, Masato Okada

Research output: Contribution to journal › Article › peer-review

Abstract

Node perturbation learning is a stochastic gradient descent method for neural networks. It estimates the gradient by comparing an evaluation of the perturbed output with that of the unperturbed output, which we call the baseline. Node perturbation learning has primarily been investigated without taking noise on the baseline into consideration. In real biological systems, however, neural activity is intrinsically noisy, and hence the baseline is likely to be contaminated with noise. In this paper, we propose an alternative learning method that does not require such a noiseless baseline. Our method uses a "second perturbation", which is generated with noise different from that of the first perturbation. The network weights are updated by comparing the evaluations of the outputs under the first and second perturbations. We show that the learning speed decreases only linearly with the variance of the second perturbation. Moreover, using the second perturbation can reduce the residual error compared with using the noiseless baseline.
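
To make the scheme in the abstract concrete, here is a minimal sketch in Python for a linear perceptron with a squared-error evaluation. The teacher/student setup, the learning rate eta, and the perturbation scales sigma1 and sigma2 are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 10, 5                            # network size (illustrative)
W_star = rng.standard_normal((n_out, n_in))    # teacher weights (assumed setup)
W = np.zeros((n_out, n_in))                    # student weights, learned online

eta = 0.01     # learning rate (assumption, not from the paper)
sigma1 = 0.1   # std of the first perturbation
sigma2 = 0.1   # std of the second perturbation, replacing the noiseless baseline

def error(y, y_star):
    """Squared-error evaluation of an output (assumed objective)."""
    return float(np.sum((y - y_star) ** 2))

for step in range(5000):
    x = rng.standard_normal(n_in)
    y_star = W_star @ x                        # desired output for this input

    xi1 = sigma1 * rng.standard_normal(n_out)  # first node perturbation
    xi2 = sigma2 * rng.standard_normal(n_out)  # independent second perturbation

    e1 = error(W @ x + xi1, y_star)            # evaluation under first perturbation
    e2 = error(W @ x + xi2, y_star)            # evaluation under second perturbation
    # Classic node perturbation would instead use the noiseless
    # baseline error(W @ x, y_star) in place of e2.

    # Correlate the evaluation difference with the first perturbation
    # to form a stochastic estimate of the gradient, then descend.
    W -= eta * (e1 - e2) / sigma1**2 * np.outer(xi1, x)
```

Because the second perturbation is independent of the first, its evaluation acts as an unbiased baseline on average; its variance only adds noise to the gradient estimate, consistent with the linear slowdown reported in the abstract.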

Original language: English
Pages (from-to): 267-272
Number of pages: 6
Journal: Neural Networks
Volume: 24
Issue number: 3
DOIs
State: Published - Apr 2011
Externally published: Yes

Keywords

  • Learning curve
  • Linear perceptron
  • Node perturbation
  • Noiseless baseline
  • Stochastic gradient method
