"If you don't do the best with what you happen to have got, you'll never do the best you might have done with what you should have had." (Aris, R., Discrete Dynamic Programming. Waltham, MA, Blaisdell, 1964)


Tv = v: a fixed point!
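The tagline refers to the Bellman operator T: because T is a contraction when the discount factor is below one, iterating v ← Tv converges to the unique fixed point Tv = v. A minimal Python sketch on a toy finite-state problem (all parameters illustrative, not from the course code):

```python
import numpy as np

# Value function iteration: T is a contraction (beta < 1), so iterating
# v <- Tv converges to the unique fixed point Tv = v.
beta = 0.95
n, m = 5, 3                      # states, actions (toy sizes)
rng = np.random.default_rng(0)
r = rng.uniform(size=(n, m))     # reward r(s, a)
P = rng.uniform(size=(m, n, n))  # transition probabilities P[a][s, s']
P /= P.sum(axis=2, keepdims=True)

def T(v):
    # Bellman operator: (Tv)(s) = max_a { r(s,a) + beta * E[v(s') | s, a] }
    return np.max(r + beta * np.einsum('ast,t->sa', P, v), axis=1)

v = np.zeros(n)
for _ in range(5_000):
    v_new = T(v)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new
# at convergence, v satisfies Tv = v up to the tolerance
```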


Syllabus

syllabus_14_ucsb.pdf

A Simple Routing Problem routing.pdf

Notes on Mathematical Programming mathematicalprogramming.pdf

Notes on Deterministic Dynamic Programming dynamicprogramming_ucsb.pdf

Notes on the Linear-Quadratic Regulator Problem LQ1.pdf

Notes on Stochastic Dynamic Programming dynamicprogramming2_ucsb.pdf

Notes on Markov Chains markov.pdf

Notes on Invariant Distributions invariant.pdf
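The invariant distribution of a Markov chain is itself a fixed-point problem: find pi with pi = pi P. A short Python sketch by power iteration (the transition matrix below is a made-up example):

```python
import numpy as np

# Invariant (stationary) distribution: the row vector pi with pi = pi P.
# Example 3-state chain; irreducible and aperiodic, so iteration converges.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

pi = np.ones(3) / 3              # start from the uniform distribution
for _ in range(100_000):
    pi_new = pi @ P
    if np.max(np.abs(pi_new - pi)) < 1e-14:
        pi = pi_new
        break
    pi = pi_new
# pi now satisfies pi = pi P up to the tolerance
```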

Notes on Recursive Competitive Equilibrium RCE.pdf

Notes on Recursive Competitive Equilibrium -- Solving in the Time Domain timedomain.pdf

Notes on LQ Recursive Equilibrium LQ_RCE.pdf

Sample Code for Linear-Quadratic Regulator Problem LQ1.m

uLQ1.m

Sample Code for Linear-Quadratic-Gaussian Regulator Problem LQ2.m

uLQ2.m
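The LQ sample code above is MATLAB; for orientation, here is a hedged Python sketch of the same kind of calculation, iterating the discounted Riccati equation to its fixed point and recovering the feedback rule u = -Fx. The matrices A, B, Q, R below are assumed illustrative values, not the course's:

```python
import numpy as np

# Discounted LQ regulator: minimize sum_t beta^t (x'Qx + u'Ru)
# subject to x_{t+1} = A x_t + B u_t.  Iterate the Riccati map:
#   P <- Q + beta*A'P(A - BF),  F = beta*(R + beta*B'PB)^{-1} B'PA
beta = 0.95
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])

P = np.zeros((2, 2))
for _ in range(10_000):
    F = beta * np.linalg.solve(R + beta * B.T @ P @ B, B.T @ P @ A)
    P_new = Q + beta * A.T @ P @ (A - B @ F)
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new
# optimal policy: u_t = -F x_t
```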

Sample Code for Dynamic Programming with Shape-Preserving Splines Main Program

Utility Function

Constraint Function

Sample Code for Stochastic Dynamic Programming with Shape-Preserving Splines Main Program

Utility Function

Constraint Function

Rouwenhorst's Method of Approximating AR(1) as Markov Chain
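Rouwenhorst's method discretizes the AR(1) process y' = rho*y + eps, eps ~ N(0, sigma^2), into an N-state Markov chain via a recursive construction of the transition matrix. A Python sketch of the symmetric case p = q = (1+rho)/2 (an illustrative re-implementation; the linked course code is the authoritative version):

```python
import numpy as np

def rouwenhorst(N, rho, sigma):
    """N-state Markov chain approximating y' = rho*y + N(0, sigma^2) shocks."""
    p = (1.0 + rho) / 2.0
    Pi = np.array([[p, 1.0 - p],
                   [1.0 - p, p]])
    # Build Pi_n from Pi_{n-1} by overlapping four shifted copies,
    # then halve the interior rows, which were counted twice.
    for n in range(3, N + 1):
        big = np.zeros((n, n))
        big[:n-1, :n-1] += p * Pi
        big[:n-1, 1:]   += (1.0 - p) * Pi
        big[1:,  :n-1]  += (1.0 - p) * Pi
        big[1:,  1:]    += p * Pi
        big[1:n-1, :]   /= 2.0
        Pi = big
    # evenly spaced grid spanning +/- sqrt(N-1) unconditional std devs
    psi = sigma / np.sqrt(1.0 - rho**2) * np.sqrt(N - 1)
    y = np.linspace(-psi, psi, N)
    return y, Pi
```

A useful check on any implementation: the method matches the conditional mean of the AR(1) exactly, so Pi @ y should equal rho * y.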

Bivariate VAR Extension of Rouwenhorst's Method

Fortran Code for Stochastic Growth Model

Homework #1 hw1_14_problems

Solutions hw1_14

Homework #2 hw2_14

Homework #3 hw3_14_problems

Shell for Part a Part a

Shell for Part b Part b