Abstract
A typical biological neuron, such as a pyramidal neuron of the neocortex, receives thousands of afferent synaptic inputs on its dendritic tree and sends the efferent axonal output downstream. In typical artificial neural networks, dendritic trees are modeled as linear structures that funnel weighted synaptic inputs to the cell bodies. However, numerous experimental and theoretical studies have shown that dendritic arbors are far more than simple linear accumulators: synaptic inputs can actively modulate neighboring synaptic activities, making the dendritic structures highly nonlinear. In this study, we model such local nonlinearity of dendritic trees with our dendritic neural network (DENN) structure and apply this structure to typical machine learning tasks. Equipped with localized nonlinearities, DENNs can attain greater model expressivity than regular neural networks while maintaining efficient network inference. This strength is evidenced by the increased fitting power when we train DENNs on supervised machine learning tasks. We also show empirically that the locality structure of DENNs can improve generalization performance, as exemplified by DENNs outranking naive deep neural network architectures when tested on classification tasks from the UCI machine learning repository.
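The abstract does not give the DENN equations, but the general idea of localized dendritic nonlinearities can be illustrated with a rough sketch. The code below contrasts a standard point neuron (a linear accumulator followed by one nonlinearity) with a hypothetical dendritic neuron whose inputs are partitioned into branches, each applying its own local nonlinearity before the soma combines them. All names and the specific branch structure here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Illustrative sketch only: the DENN formulation is not specified in the
# abstract, so this is a generic per-branch-nonlinearity neuron.

rng = np.random.default_rng(0)

def point_neuron(x, w, b):
    """Standard artificial neuron: the dendrite acts as a linear
    accumulator, and a single nonlinearity is applied at the soma."""
    return np.tanh(w @ x + b)

def dendritic_neuron(x, branch_weights, branch_biases, soma_weights, soma_bias):
    """Hypothetical dendritic neuron: synaptic inputs are partitioned into
    branches, each branch applies its own local nonlinearity, and the soma
    combines the branch outputs through a final nonlinearity."""
    branch_inputs = np.split(x, len(branch_weights))
    branch_outputs = [
        np.tanh(w @ xi + b)  # local nonlinearity within each branch
        for w, b, xi in zip(branch_weights, branch_biases, branch_inputs)
    ]
    return np.tanh(soma_weights @ np.array(branch_outputs) + soma_bias)

x = rng.standard_normal(8)            # 8 synaptic inputs
w = rng.standard_normal(8)
y_point = point_neuron(x, w, 0.0)

n_branches = 4                        # 4 branches of 2 synapses each
bw = [rng.standard_normal(2) for _ in range(n_branches)]
bb = [0.0] * n_branches
sw = rng.standard_normal(n_branches)
y_dendritic = dendritic_neuron(x, bw, bb, sw, 0.0)
```

The point neuron can only compute a function of one weighted sum of its inputs, while the branched neuron composes several local nonlinear subunits, which is one way a dendritic tree can be more expressive than a linear accumulator.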