Searching for stochastic: 176 results found

Description

This course focuses on dynamic optimization methods, both in discrete and in continuous time. We approach these problems from a dynamic programming and optimal control perspective. We also study the dynamic systems that come from the solutions to these problems. The course will illustrate how these techniques are useful in various applications, drawing on many economic examples. However, the focus will remain on gaining a general command of the tools so that they can be applied later in other classes.

Subjects

vector spaces | principle of optimality | concavity of the value function | differentiability of the value function | Euler equations | deterministic dynamics | models with constant returns to scale | nonstationary models | stochastic dynamic programming | stochastic Euler equations | stochastic dynamics | calculus of variations | the maximum principle | discounted infinite-horizon optimal control | saddle-path stability

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

15.070J Advanced Stochastic Processes (MIT)

Description

This class covers the analysis and modeling of stochastic processes. Topics include measure theoretic probability; martingales, filtration, and stopping theorems; elements of large deviations theory; Brownian motion and reflected Brownian motion; stochastic integration and Ito calculus; and functional limit theorems. In addition, the class will go over some applications to finance theory, insurance, queueing and inventory models.

Subjects

analysis | modeling | stochastic processes | theoretic probability | martingales | filtration | stopping theorems | large deviations theory | Brownian motion | reflected Brownian motion | stochastic integration | Ito calculus | functional limit theorems | applications | finance theory | insurance | queueing | inventory models

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses-6.xml

Description

This course surveys a variety of reasoning, optimization, and decision-making methodologies for creating highly autonomous systems and decision support aids. The focus is on principles, algorithms, and their applications, taken from the disciplines of artificial intelligence and operations research. Reasoning paradigms include logic and deduction, heuristic and constraint-based search, model-based reasoning, planning and execution, reasoning under uncertainty, and machine learning. Optimization paradigms include linear, integer and dynamic programming. Decision-making paradigms include decision theoretic planning and Markov decision processes. This course is offered both to undergraduate (16.410) students as a professional area undergraduate subject, in the field of aerospace information

Subjects

autonomy | decision | decision-making | reasoning | optimization | autonomous | autonomous systems | decision support | algorithms | artificial intelligence | a.i. | operations | operations research | logic | deduction | heuristic search | constraint-based search | model-based reasoning | planning | execution | uncertainty | machine learning | linear programming | dynamic programming | integer programming | network optimization | decision analysis | decision theoretic planning | Markov decision process | scheme | propositional logic | constraints | Markov processes | computational performance | satisfaction | learning algorithms | system state | state | search trees | plan spaces | model theory | decision trees | function approximators | optimization algorithms | limitations | tradeoffs | search and reasoning | game tree search | local stochastic search | stochastic | genetic algorithms | constraint satisfaction | propositional inference | rule-based systems | rule-based | model-based diagnosis | neural nets | reinforcement learning | web-based

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

15.099 Readings in Optimization (MIT)

Description

In keeping with the tradition of the last twenty-some years, the Readings in Optimization seminar will focus on an advanced topic of interest to a portion of the MIT optimization community: randomized methods for deterministic optimization. In contrast to conventional optimization algorithms whose iterates are computed and analyzed deterministically, randomized methods rely on stochastic processes and random number/vector generation as part of the algorithm and/or its analysis. In the seminar, we will study some very recent papers on this topic, many by MIT faculty, as well as some older papers from the existing literature that are only now receiving attention.

Subjects

deterministic optimization | algorithms | stochastic processes | random number generation | simplex method | nonlinear | convex | complexity analysis | semidefinite programming | heuristic | global optimization | Las Vegas algorithm | randomized algorithm | linear programming | search techniques | hit and run | NP-hard | approximation

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml
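The contrast the 15.099 description draws between a deterministic problem and random iterates can be illustrated with pure random search, the simplest randomized method. The quadratic objective, sampling box, and iteration count below are illustrative assumptions, not material from the course.

```python
import random

# Pure random search on a deterministic objective: the problem is fixed,
# but the algorithm's iterates are random draws (illustrative sketch).
random.seed(42)

def f(x, y):
    # deterministic quadratic with minimum value 0 at (1, -2)
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

best = float("inf")
for _ in range(20000):
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    best = min(best, f(x, y))

print(best)  # with high probability, close to the true minimum 0
```

More sophisticated randomized methods such as hit and run or Las Vegas algorithms, both named in the subject list, replace this blind sampling with smarter randomized steps.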

15.070 Advanced Stochastic Processes (MIT)

Description

The class covers the analysis and modeling of stochastic processes. Topics include measure theoretic probability; martingales, filtration, and stopping theorems; elements of large deviations theory; Brownian motion and reflected Brownian motion; stochastic integration and Ito calculus; and functional limit theorems. In addition, the class will go over some applications to finance theory, insurance, queueing and inventory models.

Subjects

analysis | modeling | stochastic processes | theoretic probability | martingales | filtration | stopping theorems | large deviations theory | Brownian motion | reflected Brownian motion | stochastic integration | Ito calculus | functional limit theorems | applications | finance theory | insurance | queueing | inventory models

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). Approximation methods for problems involving large state spaces are also presented and discussed.

Subjects

dynamic programming | stochastic control | mathematics | optimization | algorithms | probability | Markov chains | optimal control

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

Description

This course surveys a variety of reasoning, optimization, and decision-making methodologies for creating highly autonomous systems and decision support aids. The focus is on principles, algorithms, and their applications, taken from the disciplines of artificial intelligence and operations research. Reasoning paradigms include logic and deduction, heuristic and constraint-based search, model-based reasoning, planning and execution, reasoning under uncertainty, and machine learning. Optimization paradigms include linear, integer and dynamic programming. Decision-making paradigms include decision theoretic planning and Markov decision processes. This course is offered both to undergraduate (16.410) students as a professional area undergraduate subject, in the field of aerospace information

Subjects

autonomy | decision | decision-making | reasoning | optimization | autonomous | autonomous systems | decision support | algorithms | artificial intelligence | a.i. | operations | operations research | logic | deduction | heuristic search | constraint-based search | model-based reasoning | planning | execution | uncertainty | machine learning | linear programming | dynamic programming | integer programming | network optimization | decision analysis | decision theoretic planning | Markov decision process | scheme | propositional logic | constraints | Markov processes | computational performance | satisfaction | learning algorithms | system state | state | search trees | plan spaces | model theory | decision trees | function approximators | optimization algorithms | limitations | tradeoffs | search and reasoning | game tree search | local stochastic search | stochastic | genetic algorithms | constraint satisfaction | propositional inference | rule-based systems | rule-based | model-based diagnosis | neural nets | reinforcement learning | web-based

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

Description

Includes audio/video content: AV special element video. The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml

14.451 Dynamic Optimization Methods with Applications (MIT)

Description

This course focuses on dynamic optimization methods, both in discrete and in continuous time. We approach these problems from a dynamic programming and optimal control perspective. We also study the dynamic systems that come from the solutions to these problems. The course will illustrate how these techniques are useful in various applications, drawing on many economic examples. However, the focus will remain on gaining a general command of the tools so that they can be applied later in other classes.

Subjects

vector spaces | principle of optimality | concavity of the value function | differentiability of the value function | Euler equations | deterministic dynamics | models with constant returns to scale | nonstationary models | stochastic dynamic programming | stochastic Euler equations | stochastic dynamics | calculus of variations | the maximum principle | discounted infinite-horizon optimal control | saddle-path stability

Site sourced from

https://ocw.mit.edu/rss/all/mit-allcourses.xml

16.322 Stochastic Estimation and Control (MIT)

Description

The major themes of this course are estimation and control of dynamic systems. Preliminary topics begin with reviews of probability and random variables. Next, classical and state-space descriptions of random processes and their propagation through linear systems are introduced, followed by frequency domain design of filters and compensators. From there, the Kalman filter is employed to estimate the states of dynamic systems. Concluding topics include conditions for stability of the filter equations.

Subjects

probability | stochastic estimation | estimation | random variables | random processes | state space | Wiener filter | control system design | Kalman filter

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml
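The Kalman filter the 16.322 description centers on can be sketched in one dimension. The scalar dynamics coefficient, noise variances, and horizon below are illustrative assumptions; the course treats the general state-space form.

```python
import numpy as np

# One-dimensional Kalman filter sketch (illustrative parameters):
# state:       x_{k+1} = a * x_k + w_k,  w_k ~ N(0, q)
# measurement: z_k     = x_k + v_k,      v_k ~ N(0, r)
rng = np.random.default_rng(1)
a, q, r = 0.95, 0.1, 0.5
x_true, x_hat, p = 1.0, 0.0, 1.0   # true state, estimate, error variance

for _ in range(50):
    # simulate the true system and a noisy measurement
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
    z = x_true + rng.normal(0.0, np.sqrt(r))
    # predict step
    x_hat = a * x_hat
    p = a * a * p + q
    # update step
    k = p / (p + r)                # Kalman gain
    x_hat = x_hat + k * (z - x_hat)
    p = (1.0 - k) * p

print(x_hat, p)
```

Because the variance recursion does not depend on the measurements, p converges to a steady-state value (about 0.17 for these parameters), which connects to the course's concluding topic of conditions for stability of the filter equations.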

6.231 Dynamic Programming and Stochastic Control (MIT)

Description

This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | decision making | uncertainty | sequential decision making | finite horizon | infinite horizon | approximation methods | state space | large state space | optimal control | dynamical system | dynamic programming and optimal control | deterministic systems | shortest path | state information | rollout | stochastic shortest path | approximate dynamic programming

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml
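The finite-horizon case in the 6.231 description reduces to backward induction on the dynamic-programming recursion. The tiny two-state, two-action problem below is an illustrative assumption; only the recursion itself reflects the course material.

```python
import numpy as np

# Backward induction for a finite-horizon stochastic control problem:
# J_t(s) = min_a [ g(s, a) + sum_{s'} P(s' | s, a) * J_{t+1}(s') ].
# Transition probabilities and costs are made up for illustration.
T = 5                                   # horizon (number of stages)
P = np.array([[[0.9, 0.1],              # P[a][s, s'] under action a = 0
               [0.2, 0.8]],
              [[0.5, 0.5],              # ... under action a = 1
               [0.6, 0.4]]])
g = np.array([[1.0, 2.0],               # g[s, a]: per-stage cost
              [0.5, 3.0]])

J = np.zeros(2)                         # terminal cost J_T = 0
policy = []
for t in reversed(range(T)):
    Q = g + np.einsum("ast,t->sa", P, J)  # Q[s, a] = g + expected cost-to-go
    policy.append(Q.argmin(axis=1))       # greedy action per state at stage t
    J = Q.min(axis=1)                     # optimal cost-to-go J_t
policy.reverse()

print(J)  # optimal expected cost from each initial state over T stages
```

After the loop, policy[t] gives the optimal action for each state at stage t, which is exactly the object the course's solution techniques compute.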

2.717J Optical Engineering (MIT)

Description

This course concerns the theory and practice of optical methods in engineering and system design, with an emphasis on diffraction, statistical optics, holography, and imaging. It provides the engineering methodology skills necessary to incorporate optical components in systems serving diverse areas such as precision engineering and metrology, bio-imaging, and computing (sensors, data storage, communication in multi-processor systems). Experimental demonstrations and a design project are included.

Subjects

optical methods in engineering and system design | diffraction | statistical optics | holography | and imaging | Statistical Optics | Inverse Problems (i.e. theory of imaging) | applications in precision engineering and metrology | bio-imaging | and computing (sensors | data storage | communication in multi-processor systems) | Fourier optics | probability | stochastic processes | light statistics | theory of light coherence | van Cittert-Zernike Theorem | statistical optics applications | inverse problems | information-theoretic views | information theory | 2.717 | MAS.857

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-alllifesciencescourses.xml


Description

The aim of this course is to introduce the principles of the Global Positioning System and to demonstrate its application to various aspects of Earth Sciences. The specific content of the course depends each year on the interests of the students in the class. In some cases, the class interests are towards the geophysical applications of GPS, and we concentrate on high-precision (millimeter-level) positioning on regional and global scales. In other cases, the interests have been more toward engineering applications of kinematic positioning with GPS, in which case the concentration is on positioning with slightly less accuracy but being able to do so for a moving object. In all cases, we concentrate on the fundamentals.

Subjects

Global Positioning System | Earth Sciences | geophysical applications | GPS | engineering applications | kinematic positioning | precision | accuracy | moving objects | coordinate | time | systems | satellite | geodetic | orbital | motions | pseudo ranges | carrier phases | stochastic | mathematics | models | data | analysis | estimation

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


Description

This course is an introduction to the theory and application of large-scale dynamic programming. Topics include Markov decision processes, dynamic programming algorithms, simulation-based algorithms, theory and algorithms for value function approximation, and policy search methods. The course examines games and applications in areas such as dynamic resource allocation, finance, and queueing networks.

Subjects

algorithm | markov decision process | dynamic programming | stochastic models | policy iteration | Q-Learning | reinforcement learning | Lyapunov function | ODE | TD-Learning | value function approximation | linear programming | policy search | policy gradient | actor-critic | experts algorithm | regret minimization and calibration | games

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

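The Markov decision process machinery behind this course can be sketched in a few lines of value iteration. The example below is purely illustrative: the two states, transition probabilities, rewards, and discount factor are invented for this sketch and are not taken from the course materials.

```python
# Toy value iteration for a 2-state, 2-action Markov decision process.
# All numbers are invented for illustration.

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)], 1: [(1, 0.5), (0, 0.5)]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality operator until (numerical) convergence.
V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s])
         for s in V}

# Greedy policy with respect to the converged value function
policy = {s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a]))
          for s in V}
print(V, policy)
```

With a small state space the values can be stored exactly in a table; the value function approximation methods the course description mentions replace this table when the state space is too large to enumerate.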

6.832 Underactuated Robotics (MIT)

Description

Includes audio/video content: AV lectures. Robots today move far too conservatively, using control systems that attempt to maintain full control authority at all times. Humans and animals move much more aggressively by routinely executing motions which involve a loss of instantaneous control authority. Controlling nonlinear systems without complete control authority requires methods that can reason about and exploit the natural dynamics of our machines. This course discusses nonlinear dynamics and control of underactuated mechanical systems, with an emphasis on machine learning methods. Topics include nonlinear dynamics of passive robots (walkers, swimmers, flyers), motion planning, partial feedback linearization, energy-shaping control, analytical optimal control, reinforcement learning/a

Subjects

underactuated robotics | actuated systems | nonlinear dynamics | simple pendulum | optimal control | double integrator | quadratic regulator | Hamilton-Jacobi-Bellman sufficiency | minimum time control | acrobot | cart-pole | partial feedback linearization | energy shaping | policy search | open-loop optimal control | trajectory stabilization | iterative linear quadratic regulator | differential dynamic programming | walking models | rimless wheel | compass gait | kneed compass gait | feedback control | running models | spring-loaded inverted pendulum | Raibert hoppers | motion planning | randomized motion planning | rapidly-exploring randomized trees | probabilistic road maps | feedback motion planning | planning with funnels | linear quadratic regulator | function approximation | state distribution dynamics | state estimation | stochastic optimal control | aircraft | swimming | flapping flight | randomized policy gradient | model-free value methods | temporal difference learning | Q-learning | actor-critic methods

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allavcourses.xml


2.008 Design and Manufacturing II (MIT)

Description

Integration of design, engineering, and management disciplines and practices for analysis and design of manufacturing enterprises. Emphasis is on the physics and stochastic nature of manufacturing processes and systems, and their effects on quality, rate, cost, and flexibility. Topics include process physics and control, design for manufacturing, and manufacturing systems. Group project requires design and fabrication of parts using mass-production and assembly methods to produce a product in quantity.

Subjects

manufacturing enterprises | physics | stochastic nature of manufacturing processes | quality | rate | cost | flexibility | process physics | process control

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml


14.471 Public Economics I (MIT)

Description

This course covers theory and evidence on government taxation policy. Topics include tax incidence, optimal tax theory, the effect of taxation on labor supply and savings, taxation and corporate behavior, and tax expenditure policy.

Subjects

economic analysis | taxation | wealth | financial policy | income | investment | asset | political economy | labor | capital | public policy | corporate finance | tax reform | optimal commodity taxes | optimal corrective taxation | optimal stochastic taxes | dynamic consistency issues | debt | equity

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml


14.123 Microeconomic Theory III (MIT)

Description

This half-semester course discusses decision theory and topics in game theory. We present models of individual decision-making under certainty and uncertainty. Topics include preference orderings, expected utility, risk, stochastic dominance, supermodularity, monotone comparative statics, background risk, game theory, rationalizability, iterated strict dominance, multi-stage games, sequential equilibrium, trembling-hand perfection, stability, signaling games, theory of auctions, global games, repeated games, and correlation.

Subjects

microeconomics | microeconomic theory | preference | utility representation | expected utility | positive interpretation | normative interpretation | risk | stochastic dominance | insurance | finance | supermodularity | comparative statics | decision theory | game theory | rationalizability | iterated strict dominance | iterated conditional dominance | bargaining | equilibrium | sequential equilibrium | trembling-hand perfection | signaling games | auctions | global games | repeated games | correlation

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


Description

Mathematical introduction to neural coding and dynamics. Convolution, correlation, linear systems, Fourier analysis, signal detection theory, probability theory, and information theory. Applications to neural coding, focusing on the visual system. Hodgkin-Huxley and related models of neural excitability, stochastic models of ion channels, cable theory, and models of synaptic transmission.

Subjects

neural coding | dynamics | convolution | correlation | linear systems | Fourier analysis | signal detection theory | probability theory | information theory | neural excitability | stochastic models | ion channels | cable theory | 9.29 | 8.261

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


14.471 Public Economics I (MIT)

Description

Theory and evidence on government taxation policy. Topics include tax incidence, optimal tax theory, the effect of taxation on labor supply and savings, taxation and corporate behavior, and tax expenditure policy.

Subjects

economic analysis | taxation | wealth | financial policy | income | investment | asset | political economy | labor | capital | public policy | corporate finance | tax reform | optimal commodity taxes | optimal corrective taxation | optimal stochastic taxes | dynamic consistency issues | debt | equity

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml


14.147 Topics in Game Theory (MIT)

Description

This is an advanced topics course on market and mechanism design. We will study existing or new market institutions, understand their properties, and think about whether they can be re-engineered or improved. Topics discussed include mechanism design, auction theory, one-sided matching in house allocation, two-sided matching, stochastic matching mechanisms, student assignment, and school choice.

Subjects

game theory | mechanism design | auction theory | one-sided matching | house allocation | market problems | two-sided matching | stability | many-to-one | one-to-one | small cores | large markets | stochastic matching mechanisms | student assignment | school choice | resale markets | dynamics | simplicity | robustness | limited rationality | message spaces | sharing risk | decentralized exchanges | over-the-counter exchanges

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml


6.231 Dynamic Programming and Stochastic Control (MIT)

Description

The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages. This includes systems with finite or infinite state spaces, as well as perfectly or imperfectly observed systems. We will also discuss approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations.

Subjects

dynamic programming | stochastic control | algorithms | finite-state | continuous-time | imperfect state information | suboptimal control | finite horizon | infinite horizon | discounted problems | stochastic shortest path | approximate dynamic programming

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see https://ocw.mit.edu/terms/index.htm

Site sourced from

https://ocw.mit.edu/rss/all/mit-allarchivedcourses.xml

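The finite-horizon dynamic programming recursion this course builds on can be sketched as backward induction over the stages. Everything below (the two states, stage costs, transition probabilities, and horizon) is an invented toy example, not material from the course.

```python
# Backward induction for a finite-horizon stochastic control problem.
# Invented example: 2 states, 2 controls, horizon N = 3; minimize expected cost.

N = 3
states = [0, 1]
controls = [0, 1]

# P[s][u] = list of (next_state, probability); g[s][u] = stage cost
P = {0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)], 1: [(1, 0.6), (0, 0.4)]}}
g = {0: {0: 1.0, 1: 0.5}, 1: {0: 2.0, 1: 0.3}}
g_terminal = {0: 0.0, 1: 1.0}

# J[k][s] = optimal expected cost-to-go from state s at stage k;
# mu[k][s] = the control that attains it.
J = {N: dict(g_terminal)}
mu = {}
for k in range(N - 1, -1, -1):
    J[k], mu[k] = {}, {}
    for s in states:
        costs = {u: g[s][u] + sum(p * J[k + 1][t] for t, p in P[s][u])
                 for u in controls}
        mu[k][s] = min(costs, key=costs.get)
        J[k][s] = costs[mu[k][s]]
print(J[0], mu[0])
```

The same recursion extends to imperfectly observed systems by replacing the state with a belief state, which is one reason the course treats the perfectly observed case first.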

14.462 Advanced Macroeconomics II (MIT)

Description

Professor Blanchard will discuss shocks, labor markets and unemployment, and dynamic stochastic general equilibrium models (DSGE models). Professor Lorenzoni will cover demand shocks, macroeconomic effects of news (with or without nominal rigidities), investment with credit constraints, and liquidity with its aggregate effects.

Subjects

macroeconomics | advanced | Shocks | Reallocation | unemployment | Dynamic stochastic general equilibrium models | DSGE | Investment with credit constraints | Liquidity | aggregate effects

License

Content within individual OCW courses is (c) by the individual authors unless otherwise noted. MIT OpenCourseWare materials are licensed by the Massachusetts Institute of Technology under a Creative Commons License (Attribution-NonCommercial-ShareAlike). For further information see http://ocw.mit.edu/terms/index.htm

Site sourced from

http://ocw.mit.edu/rss/all/mit-allcourses.xml

