Bandit convex optimization is one of the fundamental problems in the field of online learning. The best algorithm for the general bandit convex optimization problem guarantees a regret of $\widetilde{O}(T^{5/6})$, while the best known lower bound is $\Omega(\sqrt{T})$. Many attempts have been made to bridge the huge gap between these bounds. A particularly interesting special case of this problem assumes that the loss functions are smooth. In this case, the best known algorithm guarantees a regret of $\widetilde{O}(T^{2/3})$. We present an efficient algorithm for the bandit smooth convex optimization problem that guarantees a regret of $\widetilde{O}(T^{5/8})$. Our result rules out an $\Omega(T^{2/3})$ lower bound and takes a significant step towards the resolution of this open problem.