Searching for optimal control : 26 results found

6.832 Underactuated Robotics (MIT)

Description

Includes audio/video content: AV lectures. Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, and reinforcement learning/approximate optimal control.

Subjects

underactuated robotics | actuated systems | nonlinear dynamics | simple pendulum | optimal control | double integrator | quadratic regulator | Hamilton-Jacobi-Bellman sufficiency | minimum time control | acrobot | cart-pole | partial feedback linearization | energy shaping | policy search | open-loop optimal control | trajectory stabilization | iterative linear quadratic regulator | differential dynamic programming | walking models | rimless wheel | compass gait | kneed compass gait | feedback control | running models | spring-loaded inverted pendulum | Raibert hoppers | motion planning | randomized motion planning | rapidly-exploring randomized trees | probabilistic road maps | feedback motion planning | planning with funnels | linear quadratic regulator | function approximation | state distribution dynamics | state estimation | stochastic optimal control | aircraft | swimming | flapping flight | randomized policy gradient | model-free value methods | temporal difference learning | Q-learning | actor-critic methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

15.093 Optimization Methods (SMA 5213) (MIT)

Description

This course introduces the principal algorithms for linear, network, discrete, nonlinear, and dynamic optimization and optimal control. Emphasis is on methodology and the underlying mathematical structures. Topics include the simplex method, network flow methods, branch and bound and cutting plane methods for discrete optimization, optimality conditions for nonlinear optimization, interior point methods for convex optimization, Newton's method, heuristic methods, and dynamic programming and optimal control methods. This course was also taught as part of the Singapore-MIT Alliance (SMA) programme as course number SMA 5213 (Optimisation Methods).

Subjects

principal algorithms | linear | network | discrete | nonlinear | dynamic optimization | optimal control | methodology and the underlying mathematical structures | simplex method | network flow methods | branch and bound and cutting plane methods for discrete optimization | optimality conditions for nonlinear optimization | interior point methods for convex optimization | Newton's method | heuristic methods | dynamic programming | optimal control methods | SMA 5213

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

16.323 Principles of Optimal Control (MIT)

Description

This course studies basic optimization and the principles of optimal control. It considers deterministic and stochastic problems for both discrete and continuous systems. The course covers solution methods including numerical search algorithms, model predictive control, dynamic programming, variational calculus, and approaches based on Pontryagin's maximum principle, and it includes many examples and applications of the theory.

Subjects

nonlinear optimization | dynamic programming | HJB Equation | calculus of variations | constrained optimal control | singular arcs | stochastic optimal control | LQG robustness | feedback control systems | model predictive control | line search methods | Lagrange multipliers | discrete LQR

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

6.832 Underactuated Robotics (MIT)

Description

Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/approximate optimal control, and the influen…

Subjects

underactuated robotics | actuated systems | nonlinear dynamics | simple pendulum | optimal control | double integrator | quadratic regulator | Hamilton-Jacobi-Bellman sufficiency | minimum time control | acrobot | cart-pole | partial feedback linearization | energy shaping | policy search | open-loop optimal control | trajectory stabilization | iterative linear quadratic regulator | differential dynamic programming | walking models | rimless wheel | compass gait | kneed compass gait | feedback control | running models | spring-loaded inverted pendulum | Raibert hoppers | motion planning | randomized motion planning | rapidly-exploring randomized trees | probabilistic road maps | feedback motion planning | planning with funnels | linear quadratic regulator | function approximation | state distribution dynamics | state estimation | stochastic optimal control | aircraft | swimming | flapping flight | randomized policy gradient | model-free value methods | temporal difference learning | Q-learning | actor-critic methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). Approximation methods for problems involving large state spaces are also presented and discussed.

Subjects

dynamic programming | stochastic control | optimal control | mathematics | optimization | algorithms | probability | Markov chains

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

16.31 Feedback Control Systems (MIT)

Description

The goal of this subject is to teach the fundamentals of control design and analysis using state-space methods. This includes both the practical and theoretical aspects of the topic. By the end of the course, students should be able to design controllers using state-space methods and evaluate whether these controllers are "robust," that is, if they are likely to work well in practice.

Subjects

feedback control | feedback control system | state-space | controllability | observability | transfer functions | canonical forms | controllers | pole-placement | optimal control | Kalman filter

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses-6.xml

6.241J Dynamic Systems and Control (MIT)

Description

The course addresses dynamic systems, i.e., systems that evolve with time. Typically these systems have inputs and outputs; it is of interest to understand how the input affects the output (or, vice versa, what inputs should be given to generate a desired output). In particular, we will concentrate on systems that can be modeled by Ordinary Differential Equations (ODEs), and that satisfy certain linearity and time-invariance conditions. We will analyze the response of these systems to inputs and initial conditions. It is of particular interest to analyze systems obtained as interconnections (e.g., feedback) of two or more other systems. We will learn how to design (control) systems that ensure desirable properties (e.g., stability, performance) of the interconnection with a given dynamic system.

Subjects

dynamic systems | multiple inputs | multiple outputs | MIMO | feedback | control systems | linear time-invariant | optimal control | robust control | linear algebra | least squares

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses-6.xml

6.838 Algorithms for Computer Animation (MIT)

Description

Animation is a compelling and effective form of expression; it engages viewers and makes difficult concepts easier to grasp. Today's animation industry creates films, special effects, and games with stunning visual detail and quality. This graduate class will investigate the algorithms that make these animations possible: keyframing, inverse kinematics, physical simulation, optimization, optimal control, motion capture, and data-driven methods. Our study will also reveal the shortcomings of these sophisticated tools. The students will propose improvements and explore new methods for computer animation in semester-long research projects. The course should appeal to both students with general interest in computer graphics and students interested in new applications of machine learning and robotics.

Subjects

algorithms | computer animation | keyframing | inverse kinematics | physical simulation | optimization | optimal control | motion capture | data-driven methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses-6.xml

14.451 Dynamic Optimization Methods with Applications (MIT)

Description

This course focuses on dynamic optimization methods, both in discrete and in continuous time. We approach these problems from a dynamic programming and optimal control perspective. We also study the dynamic systems that come from the solutions to these problems. The course will illustrate how these techniques are useful in various applications, drawing on many economic examples. However, the focus will remain on gaining a general command of the tools so that they can be applied later in other classes.

Subjects

vector spaces | principle of optimality | concavity of the value function | differentiability of the value function | Euler equations | deterministic dynamics | models with constant returns to scale | nonstationary models | stochastic dynamic programming | stochastic Euler equations | stochastic dynamics | calculus of variations | the maximum principle | discounted infinite-horizon optimal control | saddle-path stability

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

14.451 Macroeconomic Theory I (MIT)

Description

Introduction to the theories of economic growth. Topics will include basic facts of economic growth and long-run economic development; brief overview of optimal control theory and dynamic programming; basic neoclassical growth model under a variety of market structures; human capital and economic growth; endogenous growth models; models with endogenous technology; models of directed technical change; competition, market structure and growth; financial and economic development; international trade and economic growth; institutions and economic development. This is a half-term subject. The class size is limited.

Subjects

macroeconomic theory | macroeconomics | solow growth model | neoclassical growth model | endogenous growth | human capital | Bellman equation | theory of optimal control | dynamic programming | GDP | per capita income | asset pricing | public finance | overlapping generations | AK | spillovers | expanding variety models | Sala-i-Martin | Daron Acemoglu | Barro

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

6.838 Algorithms for Computer Animation (MIT)

Description

Animation is a compelling and effective form of expression; it engages viewers and makes difficult concepts easier to grasp. Today's animation industry creates films, special effects, and games with stunning visual detail and quality. This graduate class will investigate the algorithms that make these animations possible: keyframing, inverse kinematics, physical simulation, optimization, optimal control, motion capture, and data-driven methods. Our study will also reveal the shortcomings of these sophisticated tools. The students will propose improvements and explore new methods for computer animation in semester-long research projects. The course should appeal to both students with general interest in computer graphics and students interested in new applications of machine learning and robotics.

Subjects

algorithms | computer animation | keyframing | inverse kinematics | physical simulation | optimization | optimal control | motion capture | data-driven methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-alltraditionalchinesecourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT) 6.231 Dynamic Programming and Stochastic Control (MIT)

Description

Includes audio/video content: AV special element video. The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


15.093 Optimization Methods (SMA 5213) (MIT)

Description

This course introduces the principal algorithms for linear, network, discrete, nonlinear, dynamic optimization and optimal control. Emphasis is on methodology and the underlying mathematical structures. Topics include the simplex method, network flow methods, branch and bound and cutting plane methods for discrete optimization, optimality conditions for nonlinear optimization, interior point methods for convex optimization, Newton's method, heuristic methods, and dynamic programming and optimal control methods. This course was also taught as part of the Singapore-MIT Alliance (SMA) programme as course number SMA 5213 (Optimisation Methods).

Subjects

principal algorithms | linear | network | discrete | nonlinear | dynamic optimization | optimal control | methodology and the underlying mathematical structures | simplex method | network flow methods | branch and bound and cutting plane methods for discrete optimization | optimality conditions for nonlinear optimization | interior point methods for convex optimization | Newton's method | heuristic methods | dynamic programming | optimal control methods | SMA 5213

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


16.323 Principles of Optimal Control (MIT)

Description

This course studies basic optimization and the principles of optimal control. It considers deterministic and stochastic problems for both discrete and continuous systems. The course covers solution methods including numerical search algorithms, model predictive control, dynamic programming, variational calculus, and approaches based on Pontryagin's maximum principle, and it includes many examples and applications of the theory.
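The dynamic programming approach the description mentions has a particularly clean instance in the discrete LQR problem listed among the course topics. As a rough sketch (an illustration of my own, not course material; the double-integrator matrices and weights below are arbitrary assumptions), the finite-horizon gains come from a backward Riccati recursion:

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, N):
    """Backward Riccati recursion: return gains K_0..K_{N-1} with u_k = -K_k x_k."""
    P = Qf
    gains = []
    for _ in range(N):
        # K_k = (R + B' P_{k+1} B)^{-1} B' P_{k+1} A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_k = Q + A' P_{k+1} (A - B K_k)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()  # gains[k] is the gain applied at stage k
    return gains

# Toy plant: a discretized double integrator (unit mass, time step h) -- assumed values
h = 0.1
A = np.array([[1.0, h], [0.0, 1.0]])
B = np.array([[0.5 * h**2], [h]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)

gains = lqr_finite_horizon(A, B, Q, R, Qf, N=50)

# Closed-loop rollout from x0 = (position 1, velocity 0)
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
```

The recursion is exactly the Bellman backup specialized to quadratic cost-to-go, which is why the value function stays quadratic (the matrix `P`) at every stage.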

Subjects

nonlinear optimization | dynamic programming | HJB Equation | calculus of variations | constrained optimal control | singular arcs | stochastic optimal control | LQG robustness | feedback control systems | model predictive control | line search methods | Lagrange multipliers | discrete LQR

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). Approximation methods for problems involving large state spaces are also presented and discussed.

Subjects

dynamic programming | stochastic control | mathematics | optimization | algorithms | probability | Markov chains | optimal control

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


16.31 Feedback Control Systems (MIT)

Description

The goal of this subject is to teach the fundamentals of control design and analysis using state-space methods. This includes both the practical and theoretical aspects of the topic. By the end of the course, students should be able to design controllers using state-space methods and evaluate whether these controllers are "robust," that is, if they are likely to work well in practice.

Subjects

feedback control | feedback control system | state-space | controllability | observability | transfer functions | canonical forms | controllers | pole-placement | optimal control | Kalman filter

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


14.451 Dynamic Optimization Methods with Applications (MIT)

Description

This course focuses on dynamic optimization methods, both in discrete and in continuous time. We approach these problems from a dynamic programming and optimal control perspective. We also study the dynamic systems that come from the solutions to these problems. The course will illustrate how these techniques are useful in various applications, drawing on many economic examples. However, the focus will remain on gaining a general command of the tools so that they can be applied later in other classes.

Subjects

vector spaces | principle of optimality | concavity of the value function | differentiability of the value function | Euler equations | deterministic dynamics | models with constant returns to scale | nonstationary models | stochastic dynamic programming | stochastic Euler equations | stochastic dynamics | calculus of variations | the maximum principle | discounted infinite-horizon optimal control | saddle-path stability

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


14.451 Macroeconomic Theory I (MIT)

Description

Introduction to the theories of economic growth. Topics will include basic facts of economic growth and long-run economic development; brief overview of optimal control theory and dynamic programming; basic neoclassical growth model under a variety of market structures; human capital and economic growth; endogenous growth models; models with endogenous technology; models of directed technical change; competition, market structure and growth; financial and economic development; international trade and economic growth; institutions and economic development. This is a half-term subject. The class size is limited.

Subjects

macroeconomic theory | macroeconomics | Solow growth model | neoclassical growth model | endogenous growth | human capital | Bellman equation | theory of optimal control | dynamic programming | GDP | per capita income | asset pricing | public finance | overlapping generations | AK | spillovers | expanding variety models | Sala-i-Martin | Daron Acemoglu | Barro

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
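The basic solution technique the description refers to, backward induction on a finite-horizon, finite-state problem, can be sketched in a few lines (a toy illustration with randomly generated data, not course material; the problem sizes and costs are assumptions):

```python
import numpy as np

# Finite-horizon stochastic control by backward induction:
#   J_N(x) = g_N(x)
#   J_k(x) = min_u [ g(x,u) + sum_x' P(x'|x,u) J_{k+1}(x') ]
n_states, n_actions, N = 3, 2, 4
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, u, x']
g = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # stage cost g(x, u)
gN = np.zeros(n_states)                                           # terminal cost

J = gN
policy = []
for _ in range(N):
    Qk = g + P @ J              # Qk[x, u] = g(x, u) + E[J_{k+1}(x') | x, u]
    policy.append(Qk.argmin(axis=1))
    J = Qk.min(axis=1)
policy.reverse()                # policy[k][x] is an optimal action at stage k
```

The same backup also underlies the imperfect-information and infinite-horizon variants mentioned above; those replace the state with a belief, or iterate the backup to a fixed point, rather than changing its form.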

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


6.241J Dynamic Systems and Control (MIT)

Description

The course addresses dynamic systems, i.e., systems that evolve with time. Typically these systems have inputs and outputs; it is of interest to understand how the input affects the output (or, vice-versa, what inputs should be given to generate a desired output). In particular, we will concentrate on systems that can be modeled by Ordinary Differential Equations (ODEs), and that satisfy certain linearity and time-invariance conditions. We will analyze the response of these systems to inputs and initial conditions. It is of particular interest to analyze systems obtained as interconnections (e.g., feedback) of two or more other systems. We will learn how to design (control) systems that ensure desirable properties (e.g., stability, performance) of the interconnection with a given dynamic system.

Subjects

dynamic systems | multiple inputs | multiple outputs | MIMO | feedback | control systems | linear time-invariant | optimal control | robust control | linear algebra | least squares

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


6.838 Algorithms for Computer Animation (MIT)

Description

Animation is a compelling and effective form of expression; it engages viewers and makes difficult concepts easier to grasp. Today's animation industry creates films, special effects, and games with stunning visual detail and quality. This graduate class will investigate the algorithms that make these animations possible: keyframing, inverse kinematics, physical simulation, optimization, optimal control, motion capture, and data-driven methods. Our study will also reveal the shortcomings of these sophisticated tools. The students will propose improvements and explore new methods for computer animation in semester-long research projects. The course should appeal to both students with general interest in computer graphics and students interested in new applications of machine learning and robotics.

Subjects

algorithms | computer animation | keyframing | inverse kinematics | physical simulation | optimization | optimal control | motion capture | data-driven methods

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml
