Abstract In this paper, we propose an Enhanced Bayesian Compression (EBC) method that flexibly compresses deep networks via reinforcement learning. Unlike existing Bayesian compression methods, which cannot explicitly enforce quantized weights during training, our method learns a flexible codebook in each layer for optimal network quantization. To dynamically adjust the state of the codebooks, we employ an Actor-Critic network that collaborates with the original deep network. Unlike most existing network quantization methods, our EBC does not require a re-training procedure after quantization. Experimental results show that our method achieves low-bit precision with an acceptable accuracy drop on MNIST, CIFAR, and ImageNet.
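To illustrate the core idea of codebook-based quantization mentioned above, the sketch below maps each weight in a layer to its nearest codebook entry. This is a minimal, hypothetical example; in the paper's EBC the codebook values themselves are learned and adjusted by the Actor-Critic network during training, which is not shown here.

```python
import numpy as np

def quantize_to_codebook(weights, codebook):
    """Map each weight to its nearest codebook entry.

    Illustrative sketch only: a fixed codebook is assumed, whereas
    EBC learns per-layer codebooks during training.
    """
    w = np.asarray(weights, dtype=np.float64)
    cb = np.asarray(codebook, dtype=np.float64)
    # Index of the nearest codebook value for every weight
    idx = np.abs(w[..., None] - cb).argmin(axis=-1)
    return cb[idx]

# Example: a 2-bit codebook (four values) applied to a small weight matrix
codebook = [-0.5, 0.0, 0.25, 0.5]
layer_weights = [[0.47, -0.12], [0.03, -0.61]]
print(quantize_to_codebook(layer_weights, codebook))
```

With a codebook of 2^b entries, every weight can be stored as a b-bit index plus the shared codebook, which is the source of the compression.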