Searching for cart-pole: 2 results found

6.832 Underactuated Robotics (MIT)

Description

Includes audio/video content: AV lectures. Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/approximate optimal control, and the influen…
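One of the techniques the description names, energy-shaping control, can be illustrated on the simple pendulum. The sketch below is not from the course materials; the parameters, gain, and integrator are illustrative assumptions. The controller pumps or removes energy with u = -k·θ̇·(E - E_desired), driving the pendulum's total energy toward that of the upright equilibrium.

```python
import math

# Illustrative parameters for a point-mass pendulum (assumed, not course values).
M, L, G, DT = 1.0, 1.0, 9.81, 1e-3

def energy(theta, thetadot):
    """Total mechanical energy; theta = 0 is the downward rest position."""
    return 0.5 * M * L**2 * thetadot**2 - M * G * L * math.cos(theta)

def energy_shaping_torque(theta, thetadot, k=1.0):
    """Regulate the energy error: u = -k * thetadot * (E - E_desired)."""
    e_desired = M * G * L  # energy of the upright equilibrium
    return -k * thetadot * (energy(theta, thetadot) - e_desired)

def simulate(steps=20000):
    """Semi-implicit Euler integration of the controlled pendulum."""
    theta, thetadot = 0.1, 0.0  # start hanging nearly straight down
    for _ in range(steps):
        u = energy_shaping_torque(theta, thetadot)
        thetadot += DT * (u - M * G * L * math.sin(theta)) / (M * L**2)
        theta += DT * thetadot
    return energy(theta, thetadot)

final_e = simulate()
```

Along trajectories, dE/dt = θ̇·u = -k·θ̇²·(E - E_desired), so the energy error can only shrink whenever the pendulum is moving; the swing-up rides the resulting homoclinic orbit toward the upright configuration, where a separate balancing controller would normally take over.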

Subjects

underactuated robotics | actuated systems | nonlinear dynamics | simple pendulum | optimal control | double integrator | quadratic regulator | Hamilton-Jacobi-Bellman sufficiency | minimum time control | acrobot | cart-pole | partial feedback linearization | energy shaping | policy search | open-loop optimal control | trajectory stabilization | iterative linear quadratic regulator | differential dynamic programming | walking models | rimless wheel | compass gait | kneed compass gait | feedback control | running models | spring-loaded inverted pendulum | Raibert hoppers | motion planning | randomized motion planning | rapidly-exploring randomized trees | probabilistic road maps | feedback motion planning | planning with funnels | linear quadratic regulator | function approximation | state distribution dynamics | state estimation | stochastic optimal control | aircraft | swimming | flapping flight | randomized policy gradient | model-free value methods | temporal difference learning | Q-learning | actor-critic methods
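Two of the listed topics, the double integrator and the linear quadratic regulator, combine into the standard introductory example. The sketch below is an illustrative assumption rather than course code: it computes the infinite-horizon discrete-time LQR gain for the double integrator by iterating the Riccati recursion to a fixed point, using only pure-Python 2x2 arithmetic. The weights Q, R and the time step are arbitrary choices.

```python
# Discrete double integrator: x = [position, velocity], input u = acceleration.
DT = 0.1
A = [[1.0, DT], [0.0, 1.0]]
B = [0.5 * DT**2, DT]
Q = [[1.0, 0.0], [0.0, 1.0]]  # state cost weight (illustrative)
R = 0.1                       # input cost weight (scalar, single input)

def lqr_gain(iters=500):
    """Iterate P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA until it settles."""
    P = [row[:] for row in Q]
    for _ in range(iters):
        PA = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
        PB = [sum(P[i][k] * B[k] for k in range(2)) for i in range(2)]
        s = R + sum(B[i] * PB[i] for i in range(2))                 # R + B'PB
        K = [sum(B[i] * PA[i][j] for i in range(2)) / s for j in range(2)]
        AtPA = [[sum(A[k][i] * PA[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
        AtPB = [sum(A[k][i] * PB[k] for k in range(2)) for i in range(2)]
        P = [[Q[i][j] + AtPA[i][j] - AtPB[i] * K[j] for j in range(2)]
             for i in range(2)]
    return K

K = lqr_gain()
x = [1.0, 0.0]           # start 1 m from the origin, at rest
for _ in range(300):     # closed loop: u = -Kx
    u = -(K[0] * x[0] + K[1] * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
```

The resulting gain K stabilizes the origin; the same recursion, linearized about the upright fixed point, is how LQR balancing is typically set up for the cart-pole and acrobot.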

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml



6.832 Underactuated Robotics (MIT)

Description

Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/approximate optimal control, and the influen…

Subjects

underactuated robotics | actuated systems | nonlinear dynamics | simple pendulum | optimal control | double integrator | quadratic regulator | Hamilton-Jacobi-Bellman sufficiency | minimum time control | acrobot | cart-pole | partial feedback linearization | energy shaping | policy search | open-loop optimal control | trajectory stabilization | iterative linear quadratic regulator | differential dynamic programming | walking models | rimless wheel | compass gait | kneed compass gait | feedback control | running models | spring-loaded inverted pendulum | Raibert hoppers | motion planning | randomized motion planning | rapidly-exploring randomized trees | probabilistic road maps | feedback motion planning | planning with funnels | linear quadratic regulator | function approximation | state distribution dynamics | state estimation | stochastic optimal control | aircraft | swimming | flapping flight | randomized policy gradient | model-free value methods | temporal difference learning | Q-learning | actor-critic methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml

