한국보건사회연구원 전자도서관 (Korea Institute for Health and Social Affairs Electronic Library)


Monograph. Series: Ökonometrie und Unternehmensforschung / Econometrics and Operations Research 9

Dynamic programming of economic decisions

Title / Statement of responsibility: Dynamic programming of economic decisions
Personal author: Beckmann, Martin J.
Publication: Berlin; New York [etc.]: Springer-Verlag, 1968.
Physical description: xii, 143 p.: illus.; 24 cm.
ISBN: 9783642864513
Notes: Includes bibliographies
Holdings

Available (1)
  • Location: Reading room
  • Registration no.: WM020006
  • Call no. (print): -
  • Status: Available for loan
  • Due date: -
Book description
Dynamic Programming is the analysis of multistage decisions in the sequential mode. It is now widely recognized as a tool of great versatility and power, and is applied to an increasing extent in all phases of economic analysis, operations research, technology, and also in mathematical theory itself. In economics and operations research its impact may someday rival that of linear programming. The importance of this field is made apparent through a growing number of publications. Foremost among these is the pioneering work of Bellman. It was he who originated the basic ideas, formulated the principle of optimality, recognized its power, coined the terminology, and developed many of the present applications. Since then mathematicians, statisticians, operations researchers, and economists have come in, laying more rigorous foundations [KARLIN, BLACKWELL] and developing in depth such applications as the control of stochastic processes [HOWARD, JEWELL]. The field of inventory control has almost split off as an independent branch of Dynamic Programming on which a great deal of effort has been expended [ARROW, KARLIN, SCARF], [WHITIN], [WAGNER]. Dynamic Programming is also playing an increasing role in modern mathematical control theory [BELLMAN, Adaptive Control Processes (1961)]. Some of the most exciting work is going on in adaptive programming, which is closely related to sequential statistical analysis, particularly in its Bayesian form. In this monograph the reader is introduced to the basic ideas of Dynamic Programming.
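
The description above is narrative, but the recursion it refers to, Bellman's principle of optimality, is easy to sketch. The following Python fragment is a minimal illustration of backward induction over finite alternatives, loosely in the spirit of the machine and automobile replacement problems treated in the book; the horizon, revenue function, and replacement cost are hypothetical values chosen only to keep the example self-contained, not figures taken from the text.

# A minimal backward-induction sketch of the principle of optimality,
# illustrated on a small machine-replacement problem (state = machine age).
# All numbers and the reward/transition structure are hypothetical, chosen
# only to show the recursion V_t(s) = max_a [ r(s, a) + V_{t+1}(s') ].

T = 5               # planning horizon (number of decision stages)
MAX_AGE = 4         # oldest age tracked; older machines are lumped together
REPLACE_COST = 6.0  # price of a new machine

def revenue(age):
    """Per-period revenue of a machine of the given age (declines with age)."""
    return 10.0 - 2.0 * age

def backward_induction():
    # V[t][age] = best total reward obtainable from stage t onward.
    V = [[0.0] * (MAX_AGE + 1) for _ in range(T + 1)]
    policy = [[None] * (MAX_AGE + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for age in range(MAX_AGE + 1):
            keep = revenue(age) + V[t + 1][min(age + 1, MAX_AGE)]
            replace = revenue(0) - REPLACE_COST + V[t + 1][1]
            V[t][age], policy[t][age] = max((keep, "keep"), (replace, "replace"))
    return V, policy

if __name__ == "__main__":
    V, policy = backward_induction()
    print("optimal value starting with a new machine:", V[0][0])
    for t in range(T):
        print(f"stage {t}:", {age: policy[t][age] for age in range(MAX_AGE + 1)})

Running it prints the optimal value for a new machine and a keep-or-replace decision for every stage and machine age, which is exactly the value function and policy that the recursion produces.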


Table of contents

Part One. Finite Alternatives
1. Introduction
2. Geometric Interpretation
3. Principle of Optimality
4. Value Functions for Infinite Horizons: Value Iteration
5. Policy Iteration
6. Stability Properties
7. Problems without Discount and with Infinite Horizon
8. Automobile Replacement
9. Linear Programming and Dynamic Programming
References and Selected Reading to Part One

Part Two. Risk
10. Basic Concepts
11. The Value Function
12. The Principle of Optimality
13. Policy Iteration
14. Stability Properties
15. Solution by Linear Programming
16. Machine Care
17. Inventory Control
18. Uncertainty: Adaptive Programming
19. Exponential Weighting
References and Selected Reading to Part Two

Part Three. Continuous Decision Variable
20. An Allocation Problem
21. General Theory
22. Linear Inhomogeneous Problems
23. A Turnpike Theorem
24. Sequential Programming
25. Risk
26. Quadratic Criterion Function
References and Selected Reading to Part Three

Part Four. Decision Processes in Continuous Time
27. Discrete Action
28. Variable Level
29. Risk of Termination
30. Discontinuous Processes: Repetitive Decisions
31. Continuous Time Inventory Control
32. Continuous Action: Steady State Problems
33. The Principle of Optimality in Differential Equation Form
34. Dynamic Programming and the Calculus of Variations
35. Variation under Constraints: The Maximum Principle
References and Selected Reading to Part Four

Author Index
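
Chapters 4 and 5 of Part One (and their stochastic counterparts in Part Two) treat value iteration and policy iteration for discounted problems with an infinite horizon. As a companion to the table of contents, the short Python sketch below solves one small hypothetical two-state, two-action decision problem with both methods and checks that they agree; the rewards, transition probabilities, and discount factor are invented for illustration and do not come from the book.

# A small illustration of the "Value Iteration" and "Policy Iteration" chapters:
# both methods solve the same discounted finite-state decision problem and
# should return the same value function and policy. The two-state, two-action
# problem and all numbers here are hypothetical.

import numpy as np

BETA = 0.9                                # discount factor
R = np.array([[5.0, 10.0],                # R[s, a]: immediate reward
              [-1.0, 2.0]])
P = np.array([[[0.8, 0.2],                # P[a, s, s']: transition probabilities, action 0
               [0.3, 0.7]],
              [[0.1, 0.9],                # action 1
               [0.4, 0.6]]])

def value_iteration(tol=1e-10):
    V = np.zeros(2)
    while True:
        Q = R + BETA * np.einsum("ast,t->sa", P, V)   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

def policy_iteration():
    policy = np.zeros(2, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) V = r_pi exactly.
        P_pi = np.array([P[policy[s], s] for s in range(2)])
        r_pi = np.array([R[s, policy[s]] for s in range(2)])
        V = np.linalg.solve(np.eye(2) - BETA * P_pi, r_pi)
        # Policy improvement: stop when the policy no longer changes.
        Q = R + BETA * np.einsum("ast,t->sa", P, V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return V, policy
        policy = new_policy

if __name__ == "__main__":
    print("value iteration :", value_iteration())
    print("policy iteration:", policy_iteration())

Policy iteration typically converges in a handful of improvement steps because each evaluation solves the linear system exactly, whereas value iteration contracts toward the same fixed point at a rate governed by the discount factor.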