We have previously shown that the value function \(v\) inherits this concavity property.

Practical dynamic programming. Suppose we want to solve the Bellman equation for the optimal growth model

\[v(k) = \max_{x \in \Gamma(k)} \left[ u(f(k) - x) + \beta v(x) \right] \quad \text{for all } k \in K,\]

where \(x\) denotes the capital stock chosen for next period. Let's look at the infinite-horizon deterministic decision problem more formally this time. For example, suppose \(x_t\) is the current endogenous state. Given a strategy \(\sigma\) and initial state \(x_0\), with \(k_0 = k\) given, the next state is \(x_1 = x_1(\sigma,x_0)\) and the history is \(h^1(\sigma,x_0)\). In most applications, \(U: A \times X \rightarrow \mathbb{R}\) is a bounded, twice continuously differentiable function. Furthermore, it is easy to show that \(T\) satisfies the conditions of the contraction mapping theorem, so the set of maximizers at each \(k\) must be nonempty. What are the dynamic properties of the solution? Later we extend the programming problem to a stochastic case, where the technology is

\[f(k,A(i)) = A(i)k^{\alpha} + (1-\delta)k; \quad \alpha \in (0,1), \ \delta \in (0,1],\]

and the optimal-policy correspondence takes the form

\[G^{\ast} = \left\{ k' \in \Gamma(A,k) : \forall (A,k) \in \mathcal{X} \times S, \ \text{s.t.} \ \ldots \right\}\]

Characterizing the optimal strategy, Step 1: if \(\tilde{u} \neq \pi^{\ast}(x)\), it must be that the discounted total payoff under this strategy also satisfies

\[w^{\ast}(x) = U_0(\pi^{\ast})(x) + \beta U_1(\pi^{\ast})(x) + \beta^2 w^{\ast}[x_2(\pi^{\ast},x)].\]

Unrolling the recursion \(T\) times,

\[w^{\ast}(x) = \sum_{t=0}^{T-1} \beta^t U_t(\pi^{\ast})(x) + \beta^T w^{\ast}[x_T(\pi^{\ast},x)],\]

and letting \(T \rightarrow \infty\),

\[w^{\ast}(x) = \sum_{t=0}^{\infty} \beta^t U_t(\pi^{\ast})(x).\]

The associated Bellman equation is

\[W(\pi^{\ast})(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta W(\pi^{\ast})[f(x,u)] \}.\]

In the accompanying session we shall compute an example of a stochastic growth model in steps.
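Numerically, a Bellman equation of this form can be solved by value function iteration: discretize the state space and apply the max-operator repeatedly until convergence. Below is a minimal sketch; the primitives (log utility \(u(c) = \ln c\), production \(f(k) = k^{\alpha}\), and all parameter values) are illustrative assumptions, not taken from the text.

```python
import math

# Value function iteration for
#   v(k) = max_{x in Gamma(k)} { u(f(k) - x) + beta * v(x) },
# where x is next period's capital. Illustrative primitives (assumed):
# u(c) = ln(c), f(k) = k**alpha.
alpha, beta = 0.3, 0.95
grid = [0.05 + 0.01 * i for i in range(60)]   # capital grid
f = lambda k: k ** alpha                      # production function
u = lambda c: math.log(c)                     # period utility

v = [0.0] * len(grid)                         # initial guess v0 = 0
for _ in range(200):                          # repeatedly apply the Bellman operator
    v = [max(u(f(k) - x) + beta * v[j]
             for j, x in enumerate(grid) if f(k) - x > 0)
         for k in grid]
```

Because the operator is a contraction with modulus \(\beta\), the iterates converge geometrically, so a few hundred sweeps suffice on a coarse grid; the resulting \(v\) is increasing in \(k\), reflecting the monotonicity it inherits from \(u\) and \(f\).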
Doctoral Program in Economics, HUST. Changsheng Xu, Shihui Ma, Ming Yi (yiming@hust.edu.cn), School of Economics, Huazhong University of Science and Technology. This version: November 19, 2020. Ming Yi (Econ@HUST), Doctoral Macroeconomics, Notes on D.P. See also: Behavioral Macroeconomics via Sparse Dynamic Programming, Xavier Gabaix, NBER Working Paper.

Macroeconomic studies emphasize decisions with a time dimension, such as various forms of investment. Stochastic Dynamic Programming I gives an introduction to basic stochastic dynamic programming. First and foremost, we will need to prove one of the most important results of this chapter, the contraction mapping theorem.

Since we have shown that \(w^{\ast}\) is a bounded function, and since for the strategy \(\pi^{\ast}\) with \(u_t := u_t(x,\pi^{\ast})\) the discounted total payoff satisfies

\[w^{\ast}(x) = U(x,\pi^{\ast}(x)) + \beta U(x_1,u_1) + \beta^2 w^{\ast}[f(x_1,u_1)],\]

it follows that

\[v(x) \leq \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta v(f(x,u)) \} = W(x).\]

Define the sup-norm metric

\[d_{\infty}(v,w) = \sup_{x \in X} \mid v(x) - w(x) \mid,\]

and the Bellman operator

\[Tw(x) = \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta w(f(x,u)) \}.\]

By the triangle inequality,

\[d(T^m w, T^n w) \leq d(T^m w, T^{m-1} w) + \cdots + d(T^{n+1} w, T^n w),\]

and likewise \(Mv(x) - Mw(x) \leq \beta \Vert w - v \Vert\). By assumption \(U\) is strictly concave and \(f\) is concave. The last Lemma thus allows us to assign finite numbers when ordering alternative strategies. Then \(v: X \rightarrow \mathbb{R}\) is bounded for any initial state \(x_0\), and is continuous by the Uniform Convergence Theorem below. So the function \(\pi^{\ast}: X \rightarrow A\), built from the saving function \(\pi\), defines a more general optimal strategy.
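The sup-norm distance and the Bellman operator just defined can be illustrated numerically: for any two bounded functions \(v, w\) on a grid, one application of \(T\) shrinks their distance by at least the factor \(\beta\). A small sketch, with assumed illustrative primitives (log utility, \(f(k) = k^{\alpha}\)):

```python
import math

# Numerical illustration of the contraction property of the Bellman operator
#   T w(k) = max_x { u(f(k) - x) + beta * w(x) }
# in the sup-norm metric d_inf(v, w) = sup_k |v(k) - w(k)|.
alpha, beta = 0.3, 0.95
grid = [0.05 + 0.01 * i for i in range(60)]
f = lambda k: k ** alpha
u = lambda c: math.log(c)

def T(w):
    """One application of the Bellman operator on the grid."""
    return [max(u(f(k) - x) + beta * w[j]
                for j, x in enumerate(grid) if f(k) - x > 0)
            for k in grid]

def d_inf(v, w):
    """Sup-norm distance between two grid functions."""
    return max(abs(a - b) for a, b in zip(v, w))

# Two arbitrary bounded candidate value functions
v = [math.sin(k) for k in grid]
w = [k ** 2 for k in grid]
assert d_inf(T(v), T(w)) <= beta * d_inf(v, w) + 1e-12
```

The inequality holds for any pair of bounded candidates, which is exactly the modulus-\(\beta\) contraction property used with the triangle-inequality chain above.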
First we recall the basic ingredients of the model. Now we look at the second part of our three-part recursive approach; we will look at this by way of a familiar optimal-growth model with a Cobb-Douglas production function (see Example), later, as the second part of a three-part problem. For each \(x \in X\), the set of feasible actions \(\Gamma(x)\) is assumed to be nonempty, and a strategy \(\sigma\) delivers the total discounted payoff

\[\sum_{t=0}^{\infty} \beta^t U(u_t, x_t).\]

Let \(\sigma\) be a strategy such that its first component starting at \(x\) is feasible. Define the operator \(T: C_b(X) \rightarrow C_b(X)\); this will ensure that we have a well-defined value function. To see that the fixed point of \(T\) is unique, suppose to the contrary that \(\hat{v}\) is another fixed point of \(T\) with \(v \neq \hat{v}\), and note that the fixed-point property holds for all \(n \in \mathbb{N}\). We often write this controllable Markov process as a stochastic law of motion for the state, with the initial position \(x_0\) given.

Now consider a deviation constructed from an optimal strategy starting from \(\hat{k}\). Note that \((f(k) - \pi(\hat{k})) \in \mathbb{R}_+\), and

\[v(k) = U(\pi(k)) + \beta v[g(k,\pi(k))].\]

The last weak inequality arises from the fact that \(\pi(k)\) is optimal. We now consider a simple extension of the deterministic dynamic programming example model: rather than assume \(U: \mathbb{R}_+ \rightarrow \mathbb{R}\) to be bounded, we need to resort to weaker conditions. Finally, we will go over a recursive method for repeated games that has proven useful in contract theory and macroeconomics.
Finally, using the Euler equation, we can show how the sequence of capital stocks behaves. Define \(c(k) = f(k) - \pi(k)\). By concavity of \(U\),

\[U(f(k) - \pi(\hat{k})) - U(f(k) - \pi(k)) \leq U(f(\hat{k}) - \pi(\hat{k})) - U(f(\hat{k}) - \pi(k)),\]

whereas the contrary hypothesis would require

\[U(f(k) - \pi(\hat{k})) - U(f(k) - \pi(k)) > U(f(\hat{k}) - \pi(\hat{k})) - U(f(\hat{k}) - \pi(k)).\]

The optimal-policy correspondence is

\[G^{\ast}(k) = \bigg\{ k' \ \bigg| \ \max_{k' \in \Gamma(k)} \{ U(f(k)-k') + \beta v(k') \}, k \in X \bigg\}.\]

The Euler equation reads

\[U_c [f(k)-\pi(k)] = \beta U_c [f(k')-\pi(k')] f_k (\pi(k)),\]

or, in sequence notation,

\[U_c [c_t] = \beta U_c [c_{t+1}] f_k (k_{t+1}).\]

A steady state satisfies \(k_{\infty} = f(k_{\infty}) - c_{\infty}\). Substituting the resource constraint into the Euler equation gives

\[U_c [c_t] = \beta U_c [c_{t+1}] f_k (f(k_t) - c_t),\]

and imposing \(c_t = c_{t+1} = c_{\infty}\),

\[U_c [c_{\infty}] = \beta U_c [c_{\infty}] f_k (f(k_{\infty}) - c_{\infty}) \Rightarrow f'(k_{\infty}) = 1/\beta.\]

In the stochastic case the state evolves as

\[x_{t+1} = F(x_t, u_t, \varepsilon_{t+1}),\]

and the Bellman equation becomes

\[V(x,s_{i}) = \sup_{x' \in \Gamma(x,s_{i})} U(x,x',s_{i}) + \beta \sum_{j=1}^{n} P_{ij} V(x',s_{j}),\]

where we stack the value functions into a vector

\[\mathbb{R}^{n} \ni \mathbf{v}(x) = (V(x,s_{1}),...,V(x,s_{n})) \equiv (V_{1}(x),...,V_{n}(x)),\]

with the metric

\[d_{\infty}^{n}(\mathbf{v},\mathbf{v'}) = \sum_{i=1}^{n} d_{\infty}(V_{i},V'_{i}) = \sum_{i=1}^{n} \sup_{x \in X} \mid V_{i}(x) - V'_{i}(x) \mid.\]

Alternatively, we could assume that the product space \(A \times X\) is compact, so that \(U\) will automatically be bounded. If the stationary dynamic programming problem \(\{X,A,\Gamma,f,U,\beta\}\) satisfies all the previous assumptions, then there exists a stationary optimal policy \(\pi^{\ast}\). Moreover, \(T: B(X) \rightarrow B(X)\) is a contraction with modulus \(\beta\). Third, we may also wish to be able to characterize conditions under which an optimal strategy is unique. The objective of this course is to offer an intuitive yet rigorous introduction to recursive tools and their applications in macroeconomics. The chapter covers both deterministic and stochastic dynamic programming.
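For the parametric technology \(f(k) = A k^{\alpha} + (1-\delta)k\) used above, the steady-state condition \(f'(k_{\infty}) = 1/\beta\) can be solved in closed form and verified directly. The parameter values below are illustrative assumptions, not from the text.

```python
# Solving the modified golden rule f'(k_inf) = 1/beta for the steady-state
# capital stock, with f(k) = A*k**alpha + (1 - delta)*k (illustrative parameters).
alpha, beta, delta, A = 0.3, 0.95, 0.1, 1.0

# f'(k) = alpha*A*k**(alpha-1) + 1 - delta = 1/beta  gives the closed form:
k_ss = (alpha * A / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))

# Verify the steady-state Euler condition numerically
f_prime = alpha * A * k_ss ** (alpha - 1.0) + 1.0 - delta
assert abs(f_prime - 1.0 / beta) < 1e-12

# Steady-state consumption from the resource constraint k = f(k) - c
c_ss = A * k_ss ** alpha + (1.0 - delta) * k_ss - k_ss
assert c_ss > 0
```

Note that \(\beta < 1\) makes \(1/\beta - 1 + \delta > 0\), so the closed form is always well defined for \(\alpha \in (0,1)\).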
This distance function gives us the least upper bound of the absolute difference between two functions: the distance is zero exactly when the values of the two functions are "the same" at every \(x \in X\). Recall the definition of a continuous function, which uses the metrics on the underlying spaces, with \(v \in B(X)\). For any \(w \in B(X)\), we show this in two parts. Each period's action is conditioned on the history \(h^t\) only insofar as it affects the current state. The value function \(v\) is taken care of in the section From value function to Bellman functionals. These definitions are used in the following result, and the derivative exists since \(f, U \in C^{1}[(0,\infty)]\). If the sequences \(\{k_{t+1}(k)\}_{t \in \mathbb{N}}\) and \(\{c(k)\}_{t \in \mathbb{N}}\) are unique, then so are their limits.

Note that the theorem not only delivers a stationary optimal strategy \(\pi^{\ast}: X \rightarrow A\); it also ensures the uniqueness of the continuous and bounded value function that satisfies the Bellman equation. Continuous-time methods (Bellman equation, Brownian motion, Ito process, and Ito's …) are treated separately. We now add the following assumption, under which an optimal strategy is unique. So after all that hard work, we can put together the existence of a fixed point of \(T\) by Banach's Fixed Point Theorem. (But again, there are uncountably many such infinite sequences of actions to consider!) Markov processes and dynamic programming are key tools for solving dynamic economic problems and can be applied to stochastic growth models, industrial organization, and structural labor economics. Since each \(i\)-th component of \(T\) is a contraction \(T_{i}: C_{b}(X) \rightarrow C_{b}(X)\), \(i = 1,...,n\), so is the stacked operator. There exists a stationary optimal strategy \(\pi: X \rightarrow A\) for the optimal growth model given by \(\{X,A,\Gamma,U,g,\beta\}\); here we specialize the following objects from the previous general theory: the 6-tuple \(\{X,A,\Gamma,U,g,\beta\}\) fully describes the model.
Modified recursive methods akin to a Bellman operator have also been studied in dynamic games with general history dependence. By definition of the value function \(v\), it must be that \(W(\sigma) = v\); we think of each \(x \in X\) as a "parameter" defining the initial state. This chapter supplies the nuts and bolts behind our heuristic example from the previous chapter. Now we can talk about the transition to the long run. If \(c \in \Gamma(k) = [0,f(k)]\), then \(k' = g(k,c) = f(k) - c\). Macroeconomists use dynamic programming in three different ways, illustrated in these problems and in the Macro-Lab example. We show how one can endogenize the first two factors. Such a strategy \(\sigma^{\ast}\) exists, given \(v\). Define \(w \circ f : X \times A \rightarrow \mathbb{R}\) by \((w \circ f)(x,u) := w(f(x,u))\) for all \((x,u)\).

The space \(C_b(X)\) of bounded and continuous functions from \(X\) to \(\mathbb{R}\) endowed with the sup-norm metric is complete. We first show \(f\) is also bounded. This shows that the problem (P1) can be solved for indirectly in terms of a Bellman equation; if \(U\) is bounded on \(X\) and \(f\) is bounded, then an optimal strategy exists. There exists a unique \(w^{\ast} \in C_b(X)\) such that, given each \(x \in X\), the Bellman equation holds. Finally, by Banach's fixed point theorem, we can show the existence of a unique fixed point of the contraction mapping. That is, for \(k > \tilde{k}\), \(c(k) > c(\tilde{k})\).

Related references: Petre Caraiani, Introduction to Quantitative Macroeconomics Using Julia, 2019 (dynamic programming problems on the computer); Dynamic Optimization and Macroeconomics, Lecture 3: Introduction to dynamic programming; LS, Chapter 3, "Dynamic Programming" (PDF); Behavioral Macroeconomics via Sparse Dynamic Programming, Xavier Gabaix, March 16, 2017. Abstract: This paper proposes a tractable way to model boundedly rational dynamic programming. The agent uses an endogenously simplified, or "sparse," model of the world and the consequences of his actions, and acts according to a behavioral Bellman equation.
The principle of optimality essentially says that one's actions or decisions along the optimal path remain optimal as time unfolds. By definition these points are steady-state solutions. So we know how to check when solutions exist and when they can be unique: on a complete metric space, \(v: X \rightarrow \mathbb{R}\) is the unique fixed point of the Bellman operator.

Dynamic Programming (Martin Ellison). 1 Motivation. Dynamic programming is one of the most fundamental building blocks of modern macroeconomics. Recursive methods have become the cornerstone of dynamic macroeconomics.

First, set \(x_0(\sigma,x_0) = x_0\). For \(i = 1,...,n\), there exists a unique (stationary) strategy, with the stacked operator \(T: [C_{b}(X)]^{n} \rightarrow [C_{b}(X)]^{n}\) defined componentwise. This is an important property for our set of value functions. We heuristically motivate two approaches to solving the infinite-horizon problem, as we shall observe in our accompanying TutoLabo session. Each strategy \(\sigma\) starts from initial state \(x_0\), with each period's action conditioned on the history leading up to the current date; once the time-\(0\) state and action pin down the location of the next state, the process repeats. In the growth model, \(X = A = [0,\overline{k}]\), \(\overline{k} < +\infty\). At the steady state, \(U_c(c_t) = U_c(c_{t+1}) = U_c(c(k_{ss}))\) for all \(t\). With a Markov chain \((P,\lambda_{0})\) for technology shocks, at each period \(t \in \mathbb{N}\) the set of possible locations of the state of the system is \(X \subset \mathbb{R}^n\). Since \(T\) maps \(B(X)\) into itself, we can apply the Banach fixed-point theorem to prove that there is a unique value function solving the Bellman equation,

\[v(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta v(f(x,u)) \},\]

where the CRRA period utility satisfies \(u(c) \rightarrow \ln(c)\) as \(\sigma \rightarrow 1\). Introduction to Dynamic Programming.
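With a finite Markov chain for the technology shock, the stochastic Bellman equation can be solved by iterating jointly on the vector of value functions \((V_1,...,V_n)\), one per shock state. A minimal sketch with an assumed two-state chain and illustrative primitives (log utility, \(f(k,A) = A k^{\alpha}\), full depreciation); none of these numbers come from the text.

```python
import math

# Stochastic value function iteration for
#   V(k, s_i) = max_{k'} { ln(f(k, A_i) - k') + beta * sum_j P[i][j] * V(k', s_j) },
# with f(k, A) = A * k**alpha. Shock values and transition matrix are illustrative.
alpha, beta = 0.3, 0.95
A = [0.9, 1.1]                          # technology level in each Markov state
P = [[0.8, 0.2], [0.2, 0.8]]            # row-stochastic transition matrix
grid = [0.05 + 0.01 * i for i in range(50)]

V = [[0.0] * len(grid) for _ in A]      # V[i][m] = value at state s_i, capital grid[m]
for _ in range(150):
    V_new = [[0.0] * len(grid) for _ in A]
    for i in range(len(A)):
        for m, k in enumerate(grid):
            y = A[i] * k ** alpha       # output available this period
            best = -float("inf")
            for j, kp in enumerate(grid):
                if y - kp > 0:          # feasibility: positive consumption
                    cont = sum(P[i][s] * V[s][j] for s in range(len(A)))
                    best = max(best, math.log(y - kp) + beta * cont)
            V_new[i][m] = best
    V = V_new
```

Each component operator is a contraction, so the stacked iteration converges; as expected, the computed value function is higher in the good shock state at every capital level, and increasing in capital within each state.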
We also learned how to characterize or describe optimal strategies. We can impose a further restriction on the convexity of preferences: \(U: \mathbb{R}_+ \rightarrow \mathbb{R}\) is strictly concave on \(\mathbb{R}_+\). The golden-rule consumption level obtains at \(k_{\infty} = k_{ss}\) and \(c_{\infty} = c_{ss}\). Note that this theorem not only ensures that there is a unique value function consistent with the primitives \(U\) and \(f\). Recall in addition that any Cauchy sequence \(\{v_n\}\) in a complete space converges within that space. Once we know these objects, we can also assign payoffs to them. Furthermore, the value function \(v = W(\pi^{\ast})\) is bounded and continuous on \(X\), and satisfies the Bellman equation for each \(x \in X\), given the preferences \(U\). We take \((Y,\rho) = (\mathbb{R},|\cdot|)\) when using the Uniform Convergence Theorem, in the following sense. These tools can be used for analytical or computational purposes. If the return exceeds \(1/\beta\), the agent can always plan better (i.e., achieve a higher total discounted payoff). We construct all possible strategies from \(\Sigma\) and evaluate the discounted lifetime payoff of each. Since

\[w(x) - v(x) \leq \sup_{x \in X} | w(x) - v(x) | = \Vert w - v \Vert,\]

the theorem tells us that this iterative procedure will eventually converge. Let \((Y,\rho)\) be a metric space. We start by covering deterministic and stochastic dynamic optimization using dynamic programming analysis. We'll break (P1) down into its constituent parts, starting with the notion of histories, moving to the construct of strategies and payoff flows (and thus strategy-dependent total payoffs), and finally to the idea of the value function. Since we cannot solve more general problems by hand, we have to turn to the computer. So we have two problems at hand.
Let \(\varepsilon_{t} \in S = \{s_{1},...,s_{n}\}\); the random variable \(\varepsilon_{t+1}\) takes values in \(S\). Note \(Tw \in B(X)\). This theorem tells us that, with the additional assumptions above, our iterative procedure converges to the unique fixed point. Fix any \(x \in X\) and \(\epsilon > 0\). To verify that \(v\) is a fixed point, note that

\[d(Tv, v) \leq d(Tv, T^n w_0) + d(T^n w_0, v) \leq \beta d(v, T^{n-1} w_0) + d(T^n w_0, v) \rightarrow 0,\]

so the equality \(Tv = v\) (distance equal to zero) holds. Then we can deduce the desired result. By converting the sequence problem into a recursive one, we should be able to solve it. Among the applications are stochastic optimal growth models, matching models, arbitrage pricing theories, and theories of interest rates, stock prices, and options. The recursive paradigm originated in control theory with the invention of dynamic programming by the American mathematician Richard E. Bellman in the 1950s. What this suggests is that we can construct all possible strategies and compare discounted returns across all of them. Coming up next, we'll deal with the theory of dynamic programming. For uniqueness of optimal strategies, we will make the alternative assumption that \(U\) is strictly concave. The next major result we want to get to is the one that says the Bellman equation has a unique solution. The example is tractable because we assumed log utility and 100% capital depreciation per period. Suppose the current state is \(x\). We will continue dealing with bounded value functions: \(Tw\) is clearly bounded on \(X\) since \(w\) and \(U\) are. Now we can reconsider our friendly example again, the Cass-Koopmans optimal growth model, with steady-state consumption \(c(k_{ss})\).
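For the log-utility, 100%-depreciation case just mentioned, the model has the well-known closed-form policy \(k' = \alpha\beta k^{\alpha}\) with \(c = (1-\alpha\beta)k^{\alpha}\), which can be checked directly against the Euler equation. Parameter values below are illustrative assumptions.

```python
# Closed-form check for the log-utility, full-depreciation (delta = 1) growth
# model with f(k) = k**alpha: optimal policy k' = alpha*beta*k**alpha and
# consumption c = (1 - alpha*beta)*k**alpha. Illustrative parameters.
alpha, beta = 0.3, 0.95

def policy(k):
    """Optimal saving rule k' = g(k)."""
    return alpha * beta * k ** alpha

def consumption(k):
    """c = f(k) - k' for the closed-form policy."""
    return (1.0 - alpha * beta) * k ** alpha

# Verify the Euler equation U'(c_t) = beta * U'(c_{t+1}) * f'(k_{t+1}),
# with U(c) = ln(c) so U'(c) = 1/c, and f'(k) = alpha * k**(alpha - 1).
k_t = 0.5
k_next = policy(k_t)
lhs = 1.0 / consumption(k_t)
rhs = beta * (1.0 / consumption(k_next)) * alpha * k_next ** (alpha - 1.0)
assert abs(lhs - rhs) < 1e-9 * lhs
```

The Euler equation holds exactly (up to rounding) at every \(k_t > 0\), which is why this special case is the standard pencil-and-paper benchmark for value function iteration.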
Take \(k, \hat{k} \in X\) such that \(k < \hat{k}\). Since \(d(T^{n+1}w, T^n w) \leq \beta d(T^n w, T^{n-1} w)\), the usual \(\epsilon/3\) argument delivers a bound \(< \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon\). Define the Bellman operator

\[Tw(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w(f(x,u)) \},\]

whose fixed point satisfies

\[w^{\ast}(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w^{\ast}(f(x,u)) \},\]

with the associated optimal-policy correspondence

\[G^{\ast}(x) = \text{arg} \ \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w^{\ast}(f(x,u)) \}.\]

Recall that \(f\) is (weakly) concave on \(\mathbb{R}_+\), so the limits \(k_{\infty}\) and \(c_{\infty}\), respectively, are unique. The next big question is when the Bellman equation, and therefore (P1), has a solution. With these additional assumptions, along with the assumption that \(U\) is bounded on \(X \times A\), we will show the following results. The state recursion is

\[x_1(\sigma,x_0) = f(x_0(\sigma,x_0), u_0(\sigma,x_0)).\]

Since \(\beta < 1\), the sum is finite for \(T \rightarrow \infty\); so we have shown \(w^{\ast} = W(\pi^{\ast})\). Here is how we do this formally. Further, since \(w\) and \(f\) are continuous, the relation holds with equality, and this action has to be in the feasible set. A profitable perturbation works by reducing \(c_t\) and thus raising next period's capital. Now we can develop a way to approximately solve this model generally. (See also Macroeconomics, Dynamics and Growth.)
First we set out the model description. The Euler argument proceeds by shifting a small amount of consumption from period \(t\) to period \(t+1\), with \(u \in \Gamma(x)\) and \(x' = f(x,u)\). The book focuses on the recent and very promising software, Julia, which offers a MATLAB-like language at speeds comparable to C/Fortran, also discussing the modeling challenges that make quantitative macroeconomics dynamic, a key feature that few books on the topic include, for macroeconomists who need the basic tools to build, solve and simulate macroeconomic models. If \(F\) is weakly concave for each \(\varepsilon\), then the existence of such an indirect utility function follows. By applying the principle of dynamic programming, the first-order necessary conditions for this problem are given by the Hamilton-Jacobi-Bellman (HJB) equation,

\[V(x_t) = \max_{u_t} \{ f(u_t,x_t) + \beta V(g(u_t,x_t)) \},\]

which is usually written as

\[V(x) = \max_{u} \{ f(u,x) + \beta V(g(u,x)) \}. \qquad (1.1)\]

If an optimal control \(u^{\ast}\) exists, it has the form \(u^{\ast} = h(x)\), where \(h(x)\) is the policy function.
Our accompanying TutoLabo session aims to solve such problems on the computer; the following result will be used along the way. We will be using the usual von Neumann-Morgenstern notion of expected utility to model decision making in such risky environments. Now that we know the strategies, can we say anything meaningful about them? We shall now see how strategies define unique state-action vectors, and thus a unique total discounted payoff. Once the decision maker gets a solution to (P1), she fixes her plan of action. (There is also a documentary about Richard E. Bellman.) These recursive tools provide an integrated framework for studying applied problems in macroeconomics.
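Once a stationary policy is in hand, the induced state path is a deterministic recursion \(k_{t+1} = g(k_t, \pi(k_t))\) that can be simulated directly. A sketch using the closed-form policy of the log-utility, full-depreciation example (parameters and initial condition are illustrative assumptions):

```python
# Simulating the state path induced by a stationary policy:
#   k_{t+1} = g(k_t, pi(k_t)) = alpha*beta*k_t**alpha   (log utility, delta = 1).
# The path converges to the steady state k_ss = (alpha*beta)**(1/(1-alpha)).
alpha, beta = 0.3, 0.95

def next_k(k):
    """One step of the optimal-policy recursion."""
    return alpha * beta * k ** alpha

k = 0.5                     # arbitrary initial capital stock
path = [k]
for _ in range(100):
    k = next_k(k)
    path.append(k)

k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))
assert abs(path[-1] - k_ss) < 1e-10
```

Because the policy map is a contraction near the fixed point, the path converges geometrically from any positive initial capital stock, illustrating the transition to the long run discussed above.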
Third, we characterize or describe the features of such optimal strategies: they exist for any initial state. In many applications, especially empirical ones, the researcher would like to have a sharper prediction of the behavior of the model, and this has to be well-informed by theory, as we shall observe in our Lecture. We also note when the Fundamental Welfare Theorems (FWT) apply. We use these two facts, with some (weak) inequalities, to impose additional assumptions on the infinite-sequence problem. Today this concept is used widely in academia and policy analysis. The text also discusses the main numerical techniques to solve both deterministic and stochastic dynamic programming models. JEL: E03, E21, E6, G02, G11.
We convert the sequence optimization problem into a dynamic programming problem, and later extend it to a stochastic case. The key complication is that the time horizon is infinite. To avoid measure theory, we focus on economies in which stochastic variables take finitely many values. Policies are decision rules, one for each period, that are time-invariant functions of the current state. Prerequisites: Calculus, Linear Algebra, Intermediate Probability Theory; computer codes will be provided in class. Julia is a language for scientific computing, used widely in macroeconomics; we focus on discrete-time stochastic models and review some properties of time-homogeneous Markov chains. Dynamic programming also arises in a variety of other areas, such as asset pricing, engineering, and artificial intelligence with reinforcement learning. Daron Acemoglu (MIT), Advanced Growth Lecture 21, November 19, 2007.
So far we have studied the theory of dynamic programming: the Bellman operator has at least one fixed point, the production functions are well-behaved, and hence the right-hand side of the Bellman equation is well-defined; we can construct all possible strategies from the primitives. An optimal strategy is dynamically consistent: there is no incentive to deviate from its prescription along any future decision nodes, not only on the path we are currently on, but also on all other paths. Candidate value functions arise in optimal control theory (i.e., dynamic programming). When the Fundamental Welfare Theorems (FWT) apply, the planner's solution can be decentralized (Planning vs. Markets).
The value function of problem (P1) is nondecreasing on \(X\). If \(\pi\) is fixed, then the induced strategy is stationary. We may also wish to confront the model's predictions with the data. The book covers both deterministic and stochastic dynamic programming, is suitable for advanced undergraduate and first-year graduate courses, and can be used by students and researchers in Mathematics as well as in Economics, with applications and examples of computing stochastic dynamic programming problems. Dynamic programming involves breaking a significant problem down into smaller subproblems and combining the solutions of those subproblems.