Learning-Free Methods for Goal Conditioned Reinforcement Learning from Images

Author: Van de Kleut, Alexander
Date: 2021-04-27 (submitted 2021-04-16)
URI: http://hdl.handle.net/10012/16908
Type: Master Thesis
Language: en
Keywords: reinforcement learning; deep reinforcement learning; machine learning; ai; artificial intelligence; machine vision; computer vision; self-supervised; goal-conditioned; multi-goal; rl

Abstract: We are interested in training goal-conditioned reinforcement learning agents to reach arbitrary goals specified as images. To make our agent fully general, we provide it with only images of the environment and the goal image. Prior methods for goal-conditioned reinforcement learning from images rely on a learned lower-dimensional representation of images. We show that these learned latent representations are not necessary to solve a variety of goal-conditioned tasks from images: a goal-conditioned reinforcement learning policy can be trained end-to-end from pixels using simple reward functions. In contrast to prior work, we demonstrate that using the negative raw pixel distance as a reward function is a strong baseline. We also show that using the negative Euclidean distance between feature vectors produced by a random convolutional neural network outperforms learned latent representations such as convolutional variational autoencoders.
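
The two learning-free reward functions named in the abstract can be written down in a few lines. The following is a minimal sketch, not code from the thesis: it assumes PyTorch, and the names RandomCNN, pixel_distance_reward, and random_cnn_reward, as well as the specific encoder architecture, are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

class RandomCNN(nn.Module):
    """Hypothetical frozen, randomly initialized convolutional encoder."""
    def __init__(self, in_channels=3, feature_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, feature_dim, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # fixed-size output for any image size
            nn.Flatten(),
        )
        # "Learning-free": the encoder is never trained, so its weights
        # stay at their random initialization.
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.net(x)

def pixel_distance_reward(obs, goal):
    # Negative Euclidean distance in raw pixel space (the strong baseline).
    return -torch.linalg.vector_norm(obs - goal)

def random_cnn_reward(encoder, obs, goal):
    # Negative Euclidean distance between random-CNN feature vectors.
    with torch.no_grad():
        z_obs = encoder(obs.unsqueeze(0))
        z_goal = encoder(goal.unsqueeze(0))
    return -torch.linalg.vector_norm(z_obs - z_goal)

if __name__ == "__main__":
    obs = torch.rand(3, 64, 64)   # current observation image (C, H, W)
    goal = torch.rand(3, 64, 64)  # goal image
    encoder = RandomCNN()
    print(pixel_distance_reward(obs, goal))
    print(random_cnn_reward(encoder, obs, goal))
```

Both rewards are largest (zero) when the observation matches the goal, so they can drive a goal-conditioned policy toward the goal image without training any representation model such as a convolutional variational autoencoder.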