Benchmarks have been integral to driving progress in artificial intelligence across areas such as computer vision and natural language processing. However, evaluating autonomous agents for real-world applications poses challenges including system constraints, generalization, and reliability. In this talk, I introduce A2Perf, a benchmarking suite currently under development for evaluating agent performance on tasks derived from real-world domains. A2Perf is designed around metrics that reflect challenges observed in practice, such as inference latency, memory usage, and generalizability. I provide an overview of A2Perf's proposed domains, including computer chip floorplanning, web navigation, and quadruped locomotion, and I discuss the current status of this ongoing effort, our design goals, and future directions. We believe that tailoring benchmark tasks and metrics to real-world needs in this way will ultimately help guide and accelerate research on deployable autonomous agents.