Sergey Levine

Occupation: Computer science professor
Employer: University of California, Berkeley

Academic background
Education: Stanford University (BS, MS, PhD)
Doctoral advisor: Vladlen Koltun

Academic work
Discipline: Computer science
Sub-discipline: Machine learning, robotics, reinforcement learning
Institutions: University of California, Berkeley
Doctoral students: Chelsea Finn

Sergey Levine is a computer scientist and professor at UC Berkeley specializing in robotics and machine learning. He is a co-founder of the company Physical Intelligence.[1][2][3]

Education

Levine received his Bachelor of Science and Master of Science degrees in Computer Science from Stanford University. He completed his Ph.D. in Computer Science at Stanford, where his doctoral research focused on robot learning, optimal control, and data-driven methods for acquiring control policies for complex robotic systems. He was subsequently a postdoctoral researcher in the Robot Learning Lab at UC Berkeley, working with Pieter Abbeel.[4][5]

Academic career

In 2015, Levine joined Google as a part-time research scientist to work on machine intelligence.[6]

In 2016, Levine joined the faculty of the University of California, Berkeley, where he is a professor in the Department of Electrical Engineering and Computer Sciences. At Berkeley, he leads a research group working at the intersection of robotics, machine learning, and control.[7]

His research has centered on reinforcement learning, imitation learning, and scalable robot learning systems. Levine’s work has explored both model-based and model-free approaches, with an emphasis on enabling robots and other autonomous agents to acquire skills from large-scale data. His group has contributed to methods that directly map high-dimensional sensory inputs, such as images, to motor actions using deep neural networks.[8]
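The following is a minimal illustrative sketch of the end-to-end idea described above: a convolutional neural network that maps raw camera images directly to continuous motor commands. The layer sizes, image resolution, and 7-dimensional action space are assumptions chosen for brevity, not the architectures used in Levine's publications.

```python
# Minimal sketch of an end-to-end visuomotor policy: a CNN maps a raw
# camera image directly to continuous motor commands. Layer sizes and
# the 7-DoF action space are illustrative assumptions only.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):
        super().__init__()
        # Convolutional encoder: image -> feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 64, 64)).shape[1]
        # Fully connected head: features -> motor commands (e.g. joint torques)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# Usage: a batch of RGB images produces a batch of action vectors.
policy = VisuomotorPolicy()
actions = policy(torch.randn(8, 3, 64, 64))  # shape: (8, 7)
```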

Research contributions

Levine worked on deep reinforcement learning for robotic control, including the development of guided policy search, which trains deep neural networks to execute complex robotic tasks.[9] He contributed to end-to-end visuomotor policy learning, model-based reinforcement learning for sample-efficient control, and offline reinforcement learning from large robot datasets. His work enables robots to learn complex behaviors directly from high-dimensional sensory inputs and generalize across tasks and environments.[10]
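As a simple illustration of learning from previously collected robot data, the sketch below trains a policy by behavior cloning on a fixed dataset of logged observation-action pairs; this is the most basic form of offline learning, not any specific algorithm from Levine's work, and the random stand-in dataset, network sizes, and hyperparameters are placeholders.

```python
# Minimal sketch of training a policy from a fixed (offline) dataset of
# logged observation-action pairs via behavior cloning. The random
# "dataset", network sizes, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_dim, action_dim = 16, 7  # assumed dimensions, for illustration only

# Stand-in for a logged robot dataset (observations and the actions taken).
observations = torch.randn(1024, obs_dim)
actions = torch.randn(1024, action_dim)
loader = DataLoader(TensorDataset(observations, actions),
                    batch_size=64, shuffle=True)

policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for obs, act in loader:
        # Regress the policy's output onto the logged actions.
        loss = nn.functional.mse_loss(policy(obs), act)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```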

Awards

Levine received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2025, and in 2016 he was named to MIT Technology Review's Innovators Under 35.[2][1]

References

  1. ^ a b "Sergey Levine". MIT Technology Review. Retrieved 2026-02-27.
  2. ^ a b "Sergey Levine | NSF - U.S. National Science Foundation". U.S. National Science Foundation. Retrieved 2026-02-27.
  3. ^ "Sergey Levine". scholar.google.com. Retrieved 2026-02-27.
  4. ^ Sarazen, Michael (2021-12-07). "UC Berkeley's Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions | Synced". Synced Review. Retrieved 2026-02-27.
  5. ^ "UC Berkeley Top Expert's Warning: Humans Have Only Five Years Left for Doable Jobs!". European Central Station. Retrieved 2026-02-27.
  6. ^ Brokaw, Alex (2016-03-09). "Google hooked 14 robot arms together so they can help each other learn". The Verge. Retrieved 2026-03-03.
  7. ^ "Sergey Levine | EECS at UC Berkeley". www2.eecs.berkeley.edu. Retrieved 2026-02-27.
  8. ^ Levine, Sergey; Pastor, Peter; Krizhevsky, Alex; Ibarz, Julian; Quillen, Deirdre (2018-04-01). "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection". The International Journal of Robotics Research. 37 (4–5): 421–436. arXiv:1603.02199. doi:10.1177/0278364917710318. ISSN 0278-3649.
  9. ^ Reynolds, Emily. "This dextrous robot can teach itself to spin a tube of coffee beans". Wired. ISSN 1059-1028. Retrieved 2026-03-03.
  10. ^ Luo, Jianlan; Xu, Charles; Wu, Jeffrey; Levine, Sergey (2025-08-20). "Precise and dexterous robotic manipulation via human-in-the-loop reinforcement learning". Science Robotics. 10 (105). doi:10.1126/scirobotics.ads5033. ISSN 2470-9476.