
dynamic programming

(algorithmic technique)

Definition: Solve an optimization problem by caching subproblem solutions (memoization) rather than recomputing them.
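A minimal sketch of the top-down form (memoization), not part of the original entry: the Fibonacci recurrence stands in for a generic overlapping-subproblem structure, and the function and table names are illustrative assumptions.

    // Top-down dynamic programming: cache each subproblem's answer so it is
    // computed at most once. Fibonacci is only an illustration; the names
    // are hypothetical, not taken from the dictionary entry.
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    std::uint64_t fib(unsigned n, std::unordered_map<unsigned, std::uint64_t>& memo) {
        if (n < 2) return n;                      // base cases
        auto it = memo.find(n);
        if (it != memo.end()) return it->second;  // reuse a cached subproblem solution
        std::uint64_t result = fib(n - 1, memo) + fib(n - 2, memo);
        memo[n] = result;                         // cache before returning
        return result;
    }

    int main() {
        std::unordered_map<unsigned, std::uint64_t> memo;
        std::cout << fib(50, memo) << '\n';       // O(n) calls instead of O(2^n)
    }

With the cache, each subproblem is solved once, so the exponential recursion collapses to linear time.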

Aggregate parent (I am a part of or used in ...)
Smith-Waterman algorithm.

Solves these problems: matrix-chain multiplication problem, longest common substring, longest common subsequence.
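A short sketch of the bottom-up form applied to one of the problems listed above, longest common subsequence; the function name and test strings are illustrative assumptions, not part of the entry.

    // Bottom-up dynamic programming for longest common subsequence:
    // fill a table of subproblem answers, smallest subproblems first.
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    std::size_t lcs_length(const std::string& a, const std::string& b) {
        // dp[i][j] = length of a longest common subsequence of a[0..i) and b[0..j)
        std::vector<std::vector<std::size_t>> dp(
            a.size() + 1, std::vector<std::size_t>(b.size() + 1, 0));
        for (std::size_t i = 1; i <= a.size(); ++i) {
            for (std::size_t j = 1; j <= b.size(); ++j) {
                if (a[i - 1] == b[j - 1])
                    dp[i][j] = dp[i - 1][j - 1] + 1;                 // extend a match
                else
                    dp[i][j] = std::max(dp[i - 1][j], dp[i][j - 1]); // reuse smaller subproblems
            }
        }
        return dp[a.size()][b.size()];
    }

    int main() {
        std::cout << lcs_length("nematode", "empty") << '\n';  // prints 3 ("emt")
    }

Each table cell depends only on already-filled neighbors, so the whole table is computed in O(|a|*|b|) time without recomputing any subproblem.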

See also greedy algorithm, principle of optimality.

Note: From Algorithms and Theory of Computation Handbook, page 1-26, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

Author: CRC-A

Implementation

Mark Nelson's tutorial, C++ Hash Table Memoization: Simplifying Dynamic Programming (C++).
Oleg Kiselyov's program to optimally lay out a page using dynamic programming (C++).

Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "dynamic programming", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 23 March 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/dynamicprog.html