What properties of an MDP allow for generalization in reinforcement learning? How does representation learning help with this? Even though we understand the analogous questions in supervised learning, developing a theory that answers these fundamental questions in reinforcement learning remains challenging. In this talk, we will discuss a number of recent works on the statistical and computational views of these questions. We will start from the statistical point of view, where we will see algorithmic ideas for sample-efficient reinforcement learning. We will then turn to the computational side and give evidence that the computational and statistical views of RL are fundamentally different, by exhibiting a surprising computational-statistical gap in reinforcement learning. Along the way, we will make progress on one of the most fundamental questions in reinforcement learning with linear function approximation: if the optimal value function (Q* or V*) is linear in a given d-dimensional feature mapping, is efficient reinforcement learning possible?
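
To make the final question concrete, here is a minimal LaTeX sketch of the linear realizability assumption it refers to; the symbols phi (the feature map), theta*, S, and A are illustrative notation, not fixed by the abstract.

% Linear realizability of the optimal action-value function (illustrative notation).
% A feature map \phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d is assumed to be given.
\[
  Q^*(s, a) \;=\; \langle \phi(s, a),\, \theta^* \rangle
  \qquad \text{for all } (s, a) \in \mathcal{S} \times \mathcal{A},
\]
% for some unknown parameter \theta^* \in \mathbb{R}^d (an analogous statement can be
% made for V^* with state features). The question asked in the talk is whether, under
% this assumption alone, an agent can find a near-optimal policy with sample complexity
% and run time polynomial in d and the horizon, rather than in |S| or |A|.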