I am a PhD student at WhiRL, advised by Shimon Whiteson. My research interests are in machine learning, robotics, and vision. Most recently I've focused on model-free deep reinforcement learning for real-world robotic control. I also spend some of my time working on open-source projects like Ray [https://github.com/ray-project/ray] and Softlearning [https://github.com/rail-berkeley/softlearning].
Prior to joining WhiRL, I did research at the Robotic AI & Learning Lab (RAIL) at the University of California, Berkeley, where I worked with Sergey Levine and Tuomas Haarnoja on model-free reinforcement learning and robotics. I also spent a couple of years as a software engineer in industry, building statistical analysis and machine learning products at Statwing and Qualtrics. I completed my Bachelor's and Master's degrees in Computer Science at Aalto University in Finland.
Publications and Preprints
 Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja*, and Sergey Levine*. Dynamical Distance Learning for Unsupervised and Semi-Supervised Skill Discovery. arXiv preprint arXiv:1907.08225. 2019.
 Tuomas Haarnoja*, Kristian Hartikainen*, Pieter Abbeel, and Sergey Levine. Latent Space Policies for Hierarchical Reinforcement Learning. International Conference on Machine Learning (ICML). 2018.
 Tuomas Haarnoja*, Aurick Zhou*, Kristian Hartikainen*, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft Actor-Critic Algorithms and Applications. arXiv preprint arXiv:1812.05905. 2018.
Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, and Vikash Kumar. ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots. 3rd Conference on Robot Learning (CoRL). 2019.
 Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, and Sergey Levine. End-to-End Robotic Reinforcement Learning without Reward Engineering. Robotics: Science and Systems (RSS). 2019.