l1-regularized Neural Networks are Improperly Learnable in Polynomial Time


Abstract: We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has k hidden layers and that the l1-norm of the incoming weights of any neuron is bounded by L. We present a kernel-based method, such that with probability at least 1 - δ, it learns a predictor whose generalization error is at most ε worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in (1/ε, log(1/δ), F(k, L)), where F(k, L) is a function depending on (k, L) and on the activation function, independent of the number of neurons. The algorithm applies to both sigmoid-like activation functions and ReLU-like activation functions. It implies that any sufficiently sparse neural network is learnable in polynomial time.
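The abstract gives no code, so the following is only a minimal sketch of what a kernel-based improper learner of this flavor can look like: kernel ridge regression with the inverse-polynomial kernel k(x, y) = 1/(2 - ⟨x, y⟩), which is positive definite on the unit ball because its power series in ⟨x, y⟩ has nonnegative coefficients. The kernel choice, the ridge regularization, and the toy tanh target network are all assumptions for illustration, not the paper's actual construction.

```python
# Minimal sketch of a kernel-based improper learner (illustrative only).
# The inverse-polynomial kernel and ridge penalty below are assumptions
# chosen for illustration; the paper's kernel construction may differ.
import numpy as np

def inverse_poly_kernel(X, Y):
    """k(x, y) = 1 / (2 - <x, y>), well-defined when ||x||, ||y|| <= 1."""
    return 1.0 / (2.0 - X @ Y.T)

def fit_kernel_ridge(X, y, lam=1e-2):
    """Solve (K + lam * n * I) alpha = y for the dual coefficients."""
    n = X.shape[0]
    K = inverse_poly_kernel(X, X)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_test):
    """Predictor f(x) = sum_i alpha_i k(x, x_i)."""
    return inverse_poly_kernel(X_test, X_train) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: inputs scaled into the unit ball, labels produced by a
    # hypothetical one-hidden-layer tanh network standing in for the
    # l1-bounded target class.
    X = rng.normal(size=(200, 10))
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    W = rng.normal(size=(10, 5))
    y = np.tanh(X @ W).sum(axis=1)

    alpha = fit_kernel_ridge(X, y)
    y_hat = predict(X, alpha, X)
    print("train MSE:", np.mean((y_hat - y) ** 2))
```

The improperness is visible in the sketch: the learned predictor is a kernel expansion, not a neural network, yet it is allowed to compete with the best network in the l1-bounded class.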
