One of the hardest challenges in reinforcement learning is exploration that scales to vast domains, where purely novelty- or coverage-seeking behavior falls short. Goal-directed, purposeful behavior can overcome this, but it relies on a good goal space. The core challenge in goal discovery is striking the right balance between generality (not hand-crafted) and tractability (useful, not too numerous). Our approach explicitly seeks this middle ground: the human designer specifies a vast but meaningful proto-goal space, and an autonomous discovery process narrows it to a compact space of controllable, reachable, novel, and relevant goals. We then demonstrate the effectiveness of goal-conditioned exploration with this discovered goal space in three challenging environments.
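To make the narrowing step concrete, the sketch below filters a proto-goal space by the four criteria named above. It is a minimal illustration under stated assumptions, not the paper's algorithm: the per-goal statistics (attempt/success counts, visit counts, reward correlation), the thresholds, and all names are hypothetical, introduced here purely for exposition.

```python
# Minimal, hypothetical sketch of narrowing a proto-goal space.
# All names and statistics are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ProtoGoalStats:
    """Illustrative per-goal statistics gathered during the agent's experience."""
    attempts: int              # times the goal was pursued
    successes: int             # times it was achieved when pursued
    visits: int                # times it was achieved overall (by any behavior)
    reward_correlation: float  # correlation between achieving the goal and task reward

def is_active_goal(stats: ProtoGoalStats,
                   min_control: float = 0.1,
                   max_visits: int = 100,
                   min_relevance: float = 0.0) -> bool:
    """Keep a proto-goal only if it is controllable, reachable, novel, and relevant."""
    controllable = stats.attempts > 0 and stats.successes / stats.attempts >= min_control
    reachable = stats.visits > 0          # achieved at least once
    novel = stats.visits <= max_visits    # not yet over-practiced
    relevant = stats.reward_correlation >= min_relevance
    return controllable and reachable and novel and relevant

# Usage: narrow a vast proto-goal space to a compact set of active goals.
proto_goals = {"open_door": ProtoGoalStats(20, 5, 8, 0.3),
               "touch_wall": ProtoGoalStats(50, 49, 500, 0.0)}
active = {g for g, s in proto_goals.items() if is_active_goal(s)}
print(active)  # {'open_door'}: 'touch_wall' is filtered out as no longer novel
```

The design choice this sketch highlights is that each criterion prunes a different failure mode: controllability removes goals the agent cannot influence, reachability removes goals never yet achieved, novelty retires mastered goals, and relevance ties the remainder to the task.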