Searching for state information: 6 results found

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses-6.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

Includes audio/video content: AV special element video. The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
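
As a rough illustration of the finite-horizon dynamic programming this description refers to, the sketch below runs backward induction on a small, made-up stochastic control problem. The state and action counts, transition probabilities, and stage costs are assumptions for demonstration only, not material from the course.

```python
import numpy as np

n_states, n_actions, horizon = 4, 2, 5

rng = np.random.default_rng(0)
# P[a, i, j]: probability of moving from state i to state j under action a (assumed toy model)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# g[a, i]: expected stage cost of taking action a in state i (assumed toy model)
g = rng.random((n_actions, n_states))

# J[k, i]: optimal cost-to-go from state i at stage k (terminal cost taken to be zero)
J = np.zeros((horizon + 1, n_states))
policy = np.zeros((horizon, n_states), dtype=int)

# Backward induction: solve the tail subproblem at each stage, last stage first
for k in range(horizon - 1, -1, -1):
    Q = g + P @ J[k + 1]          # Q[a, i] = stage cost + expected cost-to-go
    J[k] = Q.min(axis=0)
    policy[k] = Q.argmin(axis=0)

print("optimal cost-to-go at stage 0:", J[0])
print("optimal first-stage actions:", policy[0])
```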

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.
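
For the infinite-horizon, discounted case listed among this course's subjects ("discounted problems"), here is a minimal value-iteration sketch on an assumed finite-state model; the discount factor, transition probabilities, and costs are illustrative assumptions, not course material.

```python
import numpy as np

n_states, n_actions = 5, 3
alpha = 0.9                                 # assumed discount factor

rng = np.random.default_rng(1)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)           # P[a, i, j]: assumed transition probabilities
g = rng.random((n_actions, n_states))       # g[a, i]: assumed expected stage costs

# Value iteration: repeatedly apply the Bellman operator
# (TJ)(i) = min_a [ g(i, a) + alpha * sum_j p_ij(a) * J(j) ]
J = np.zeros(n_states)
for _ in range(10_000):
    TJ = (g + alpha * (P @ J)).min(axis=0)
    if np.max(np.abs(TJ - J)) < 1e-8:       # close enough to the fixed point
        J = TJ
        break
    J = TJ

policy = (g + alpha * (P @ J)).argmin(axis=0)   # greedy policy with respect to J
print("approximate optimal costs:", J)
print("greedy policy:", policy)
```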

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml
