Author: Choi, David
Dates: 2020-12-15; 2020-12-11
URI: http://hdl.handle.net/10012/16552

Abstract: Games have historically been a fruitful area for artificial intelligence (AI) research, and StarCraft in particular has been an important grand challenge because of its strategic complexity, multi-agent dynamics, partial observability, large action spaces, delayed rewards, and robust human competitive scene. These complexities mean that approaches common in other game AI, such as Monte Carlo Tree Search in Go or searching over the action space in Atari, cannot be easily applied to StarCraft. Thus, despite significant research, many approaches rely on handcrafted systems, and no prior approach has been competitive with even strong casual players. In this thesis, we examine in detail AlphaStar, the first AI system to reach the highest tier of human performance in a widely played professional esport. AlphaStar combines new and existing approaches in imitation learning, reinforcement learning, and multi-agent learning at scale into a general agent with minimal handcrafting, and it reached a rating above that of 99.8% of active ranked human players. Designing an effective interface is an essential component of AI research in games, yet one that has historically been under-explored. This thesis lists principles for designing effective interfaces and human-like constraints for deep learning research in games, and explores those principles with AlphaStar as a case study. Though the agent itself has minimal handcrafting, it must interact with the game through an interface that is human-like, expressive enough to capture the game's complexities, and amenable to deep learning in order to produce transferable research insights.

Language: en
Keywords: Artificial Intelligence; Machine Learning; Reinforcement Learning
Title: AlphaStar: Considerations and Human-like Constraints for Deep Learning Game Interfaces
Type: Master Thesis