From b3ffc4fc8cad5bab8a1e243e102b9bc72e082368 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Wed, 29 Nov 2023 05:21:39 -0500 Subject: [PATCH 01/11] Added article and updated README.md and navigation.md Added article and made appropriate changes to README.md and navigation.md --- README.md | 2 +- src/dynamic_programming/intro-to-dp.md | 145 +++++++++++++++++++++++++ src/navigation.md | 1 + 3 files changed, 147 insertions(+), 1 deletion(-) create mode 100644 src/dynamic_programming/intro-to-dp.md diff --git a/README.md b/README.md index d85732013..567672f4d 100644 --- a/README.md +++ b/README.md @@ -25,7 +25,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith - January 16, 2022: Switched to the [MkDocs](https://www.mkdocs.org/) site generator with the [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/) theme, which give the website a more modern look, brings a couple of new features (dark mode, better search, ...), makes the website more stable (in terms of rendering math formulas), and makes it easier to contribute. ### New articles - +- (29 November 2023) [Introduction to Dynamic Programming] (https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) - (10 September 2023) [Tortoise and Hare Algorithm](https://cp-algorithms.com/others/tortoise_and_hare.html) - (12 July 2023) [Finding faces of a planar graph](https://cp-algorithms.com/geometry/planar.html) - (18 April 2023) [Bit manipulation](https://cp-algorithms.com/algebra/bit-manipulation.html) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md new file mode 100644 index 000000000..cd733a244 --- /dev/null +++ b/src/dynamic_programming/intro-to-dp.md @@ -0,0 +1,145 @@ +--- +tags: + - Original +--- + +# Introduction to Dynamic Programming + +The essence of dynamic programming is to avoid repeated calculation. Oftentimes, dynamic programming problems are naturally solvable by recursion. 
In such cases, it's easiest to write the recursive solution, then save repeated states in a lookup table. This process is known as top down dynamic programming with memoization. That's read "memoization" (like we are writing in a memo pad) not memorization . + +One of the most basic, classic examples of this process is the fibonacci sequence. It's recursive formulation is $f(n) = f(n-1) + f(n-2) where n>=2 and f(0)==0 and f(1)==1$. In C++ with would be expressed as: + +```cpp +int f(int n) { + if (n==0) return 0; + if (n==1) return 1; + return f(n-1)+f(n-2); +} +``` + +The runtime of this recursive function is exponential approximately $O(2^n)$. $T(n) = T(n-1)+T(n-2) + O(1)$. This approximately (upper bound) $T(n) = T(n-1) + T(n-1) + O(1) = 2*T(n-1)+O(1)$. By master theorem, we have $O(2^n)$ complexity. + +Speeding up Fibonacci with Dynamic Programming (Memoization) + +Our recursive function currently solves fibonacci in exponential time. This means that we can only handle small input values before the problem becomes intractable. For instance, f(29) results in over 1 million function calls! + +To increase the speed, we recognize that the number of subproblems is only $O(n)$. That is, in order to calculate $f(n)$ we only need to know $f(n-1),f(n-2)...f(0)$. Therefore, instead of recalculating these subproblems, we solve them once and then save the result in a lookup table. Subsequent calls will use this lookup table and immediately return a result, thus eliminating exponential work! + +Each recursive call will check against a lookup table to see if the value has been calculated. This is done in $O(1)$ time. If we have previously calculated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$! This is an enormous improvement over our previous exponential time algorithm! 
+ +```cpp +const int MAXN = 100; +bool found[MAXN]; +int memo[MAXN]; + +int f(int n) { + if (found[n]) return memo[n]; + if (n==0) return 0; + if (n==1) return 1; + + found[n]=true; + return memo[n]=f(n-1)+f(n-2); +} +``` + +With our new memoized recursive function, $f(29)$, which used to result in over 1 million calls, now results in only 57 calls, nearly 20,000 times fewer function calls! Ironically, we are now limited by our data type. $f(46)$ is the last fibonacci number that can fit into a signed 32 bit integer. + +## *** Important Note *** + +Typically, we try to save states in arrays, if possible, since the lookup time is $O(1)$ with minimal overhead. However, more generically, we can save states any way we like. Other examples include maps (binary search trees) or unordered_maps (hash tables). + +An example of this might be: + +```cpp +unordered_mapmemo; +int f(int n) { + if (memo.count(n)) return memo[n]; + if (n==0) return 0; + if (n==1) return 1; + + return memo[n]=f(n-1)+f(n-2); +} +``` + +Or analogously: + +```cpp +mapmemo; +int f(int n) { + if (memo.count(n)) return memo[n]; + if (n==0) return 0; + if (n==1) return 1; + + return memo[n]=f(n-1)+f(n-2); +} +``` + +Both of these will almost always be slower than the array based version for a generic memoized recursive function. +These alternative ways of saving state are primarily useful when saving vectors or strings as part of the state space. + +## *** Very important note*** + +The layman's way of analyzing the runtime of a memoized recursive function is: +** (work per subproblem) * (number of subproblems) ** + + +Using a binary search tree (map in C++) to save states will technically result in $O(n*log(n))$ as each lookup and insertion will take $O(log(n))$ work and with $O(n)$ unique subproblems we have $O(n*log(n))$ time. + +## Bottom up Dynamic Programming + +Until now you've only seen top down dynamic programming with memoization. 
However, we can also solve problems with bottom up dynamic programming. + +To create a bottom up approach for fibonacci numbers, we initilized the base cases in an array. Then, we simply use the recursive definition on array: + + +```cpp +const int MAXN = 100; +int fib[MAXN]; + +int f(int n) { + + fib[0]=0; + fib[1]=1; + for(int i=2;i<=n;i++) fib[i]=fib[i-1]+fib[i-2]; + + return fib[n]; +} +``` + +Of course, as written, this is a bit silly for two reasons: +Firstly,we do repeated work if we call the function more than once. +Secondly, we only need to use the two previous values to calculate the current element. Therefore, we can reduce our memory from $O(n)$ to $O(1)$. + +An example of a bottom up dynamic programming solution for fibonacci which uses $O(1)$ memory might be: + +```cpp +const int MAX_SAVE = 3; +int fib[MAX_SAVE]; + +int f2(int n) { + + fib[0]=0; + fib[1]=1; + for(int i=2;i<=n;i++) + fib[i % MAX_SAVE]=fib[(i-1) % MAX_SAVE]+fib[(i-2) % MAX_SAVE]; + + return fib[n % MAX_SAVE]; +} +``` + +Note that we've changed the constant from $MAXN$ TO $MAX_SAVE$. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. Additionally, we use a common trick (using the modulo operator) only maintain the values we need. + +That's it. That's the basics of dynamic programming: Don't repeat work you've done before. + +One of the tricks to getting better at dynamic programming is to study some of the classic examples: +- 0-1 Knapsack +- Subset sum +- Longest Increasing Subsequence +- Counting all possible paths from top left to bottom right corner of a matrix +- Longest Common Subsequence (though suffix automatons are faster) +- Longest Path in a Directed Acyclic Graph (DAG) +- Coin Change +- Longest Palindromic Subsequence +- Rod Cutting + +Of course, the most important trick is to practice. 
\ No newline at end of file diff --git a/src/navigation.md b/src/navigation.md index 7a1c1961f..0f11c2c64 100644 --- a/src/navigation.md +++ b/src/navigation.md @@ -61,6 +61,7 @@ search: - Advanced - [Deleting from a data structure in O(T(n) log n)](data_structures/deleting_in_log_n.md) - Dynamic Programming + [Introducton to Dynamic Programming](dynamic_programming/intro-to-dp.md) - DP optimizations - [Divide and Conquer DP](dynamic_programming/divide-and-conquer-dp.md) - [Knuth's Optimization](dynamic_programming/knuth-optimization.md) From 408b9dbd309737bcdd83f269e9facb99cdf79543 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Thu, 7 Dec 2023 07:23:31 -0500 Subject: [PATCH 02/11] Simplify descriptions add mathjax --- src/dynamic_programming/intro-to-dp.md | 38 ++++++++++++++------------ 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index cd733a244..3380ee8b1 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -5,9 +5,9 @@ tags: # Introduction to Dynamic Programming -The essence of dynamic programming is to avoid repeated calculation. Oftentimes, dynamic programming problems are naturally solvable by recursion. In such cases, it's easiest to write the recursive solution, then save repeated states in a lookup table. This process is known as top down dynamic programming with memoization. That's read "memoization" (like we are writing in a memo pad) not memorization . +The essence of dynamic programming is to avoid repeated calculation. Often, dynamic programming problems are naturally solvable by recursion. In such cases, it's easiest to write the recursive solution, then save repeated states in a lookup table. This process is known as top-down dynamic programming with memoization. That's read "memoization" (like we are writing in a memo pad) not memorization. 
-One of the most basic, classic examples of this process is the fibonacci sequence. It's recursive formulation is $f(n) = f(n-1) + f(n-2) where n>=2 and f(0)==0 and f(1)==1$. In C++ with would be expressed as: +One of the most basic, classic examples of this process is the fibonacci sequence. Its recursive formulation is $f(n) = f(n-1) + f(n-2)$ where $n \ge 2$ and $f(0)=0$ and $f(1)=1$. In C++, this would be expressed as: ```cpp int f(int n) { @@ -17,13 +17,13 @@ int f(int n) { } ``` -The runtime of this recursive function is exponential approximately $O(2^n)$. $T(n) = T(n-1)+T(n-2) + O(1)$. This approximately (upper bound) $T(n) = T(n-1) + T(n-1) + O(1) = 2*T(n-1)+O(1)$. By master theorem, we have $O(2^n)$ complexity. +The runtime of this recursive function is exponential, approximately $O(2^n)$, since one function call ($f(n)$) results in two similarly sized function calls ($f(n-1)$ and $f(n-2)$). -Speeding up Fibonacci with Dynamic Programming (Memoization) +## Speeding up Fibonacci with Dynamic Programming (Memoization) -Our recursive function currently solves fibonacci in exponential time. This means that we can only handle small input values before the problem becomes intractable. For instance, f(29) results in over 1 million function calls! +Our recursive function currently solves fibonacci in exponential time. This means that we can only handle small input values before the problem becomes too difficult. For instance, $f(29)$ results in *over 1 million function* calls! -To increase the speed, we recognize that the number of subproblems is only $O(n)$. 
That is, in order to calculate $f(n)$ we only need to know $f(n-1),f(n-2), \dots ,f(0)$. Therefore, instead of recalculating these subproblems, we solve them once and then save the result in a lookup table. Subsequent calls will use this lookup table and immediately return a result, thus eliminating exponential work! Each recursive call will check against a lookup table to see if the value has been calculated. This is done is $O(1)$ time. If we have previously calcuated it, return the result, otherwise, we calculate the function normally. The overall runtime is $O(n)$! This is an enormous improvement over our previous exponential time algorithm! @@ -42,9 +42,9 @@ int f(int n) { } ``` -With our new memoized recursive function, $f(29)$, which used to result in over 1 million calls, now results in only 57 calls, nearly 20,000 times fewer function calls! Ironically, we are now limited by our data type. $f(46)$ is the last fibonacci number that can fit into a signed 32 bit integer. +With our new memoized recursive function, $f(29)$, which used to result in *over 1 million calls*, now results in *only 57 calls*, *nearly 20,000 times fewer* function calls! Ironically, we are now limited by our data type. $f(46)$ is the last fibonacci number that can fit into a signed 32-bit integer. -## *** Important Note *** +### *** Important Note *** Typically, we try to save states in arrays,if possible, since the lookup time is $O(1)$ with minimal overhead. However, more generically, we can save states anyway we like. Other examples include maps (binary search trees) or unordered_maps (hash tables). @@ -74,23 +74,22 @@ int f(int n) { } ``` -Both of these will almost always be slower than the array based version for a generic memoized recursive function. +Both of these will almost always be slower than the array-based version for a generic memoized recursive function. These alternative ways of saving state are primarily useful when saving vectors or strings as part of the state space. 
-## *** Very important note*** +### *** Very important note*** The layman's way of analyzing the runtime of a memoized recursive function is: ** (work per subproblem) * (number of subproblems) ** -Using a binary search tree (map in C++) to save states will technically result in $O(n*log(n))$ as each lookup and insertion will take $O(log(n))$ work and with $O(n)$ unique subproblems we have $O(n*log(n))$ time. +Using a binary search tree (map in C++) to save states will technically result in $O(n * log(n))$ as each lookup and insertion will take $O(log(n))$ work and with $O(n)$ unique subproblems we have $O(n*log(n))$ time. -## Bottom up Dynamic Programming +## Bottom-up Dynamic Programming -Until now you've only seen top down dynamic programming with memoization. However, we can also solve problems with bottom up dynamic programming. - -To create a bottom up approach for fibonacci numbers, we initilized the base cases in an array. Then, we simply use the recursive definition on array: +Until now you've only seen top-down dynamic programming with memoization. However, we can also solve problems with bottom-up dynamic programming. +To create a bottom-up approach for fibonacci numbers, we initilize the base cases in an array. Then, we simply use the recursive definition on array: ```cpp const int MAXN = 100; @@ -127,19 +126,22 @@ int f2(int n) { } ``` -Note that we've changed the constant from $MAXN$ TO $MAX_SAVE$. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. Additionally, we use a common trick (using the modulo operator) only maintain the values we need. +Note that we've changed the constant from *MAXN* TO *MAX_SAVE*. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. 
Additionally, we use a common trick (using the modulo operator) only maintaining the values we need. That's it. That's the basics of dynamic programming: Don't repeat work you've done before. -One of the tricks to getting better at dynamic programming is to study some of the classic examples: +One of the tricks to getting better at dynamic programming is to study some of the classic examples. + +## Classic Dynamic Programming Problems - 0-1 Knapsack - Subset sum - Longest Increasing Subsequence - Counting all possible paths from top left to bottom right corner of a matrix -- Longest Common Subsequence (though suffix automatons are faster) +- Longest Common Subsequence - Longest Path in a Directed Acyclic Graph (DAG) - Coin Change - Longest Palindromic Subsequence - Rod Cutting +- Edit Distance Of course, the most important trick is to practice. \ No newline at end of file From d4c2fd34b79f209d4d94b0b76576e9eea71b2a22 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Thu, 7 Dec 2023 20:57:15 -0500 Subject: [PATCH 03/11] put code through formatter --- src/dynamic_programming/intro-to-dp.md | 28 +++++++++++++------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index 3380ee8b1..8d375fd30 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -11,9 +11,9 @@ One of the most basic, classic examples of this process is the fibonacci sequenc ```cpp int f(int n) { - if (n==0) return 0; - if (n==1) return 1; - return f(n-1)+f(n-2); + if (n == 0) return 0; + if (n == 1) return 1; + return f(n - 1) + f(n - 2); } ``` @@ -21,7 +21,7 @@ The runtime of this recursive function is exponential - approximately $O(2^n)$ s ## Speeding up Fibonacci with Dynamic Programming (Memoization) -Our recursive function currently solves fibonacci in exponential time. 
This means that we can only handle small input values before the problem becomes too difficult. For instance, $f(29)$ results in *over 1 million function* calls! +Our recursive function currently solves fibonacci in exponential time. This means that we can only handle small input values before the problem becomes too difficult. For instance, $f(29)$ results in *over 1 million* function calls! To increase the speed, we recognize that the number of subproblems is only $O(n)$. That is, in order to calculate $f(n)$ we only need to know $f(n-1),f(n-2), \dots ,f(0)$. Therefore, instead of recalculating these subproblems, we solve them once and then save the result in a lookup table. Subsequent calls will use this lookup table and immediately return a result, thus eliminating exponential work! @@ -34,11 +34,11 @@ int memo[MAXN]; int f(int n) { if (found[n]) return memo[n]; - if (n==0) return 0; - if (n==1) return 1; - - found[n]=true; - return memo[n]=f(n-1)+f(n-2); + if (n == 0) return 0; + if (n == 1) return 1; + + found[n] = true; + return memo[n] = f(n - 1) + f(n - 2); } ``` @@ -51,13 +51,13 @@ Typically, we try to save states in arrays,if possible, since the lookup time is An example of this might be: ```cpp -unordered_mapmemo; +unordered_map memo; int f(int n) { if (memo.count(n)) return memo[n]; - if (n==0) return 0; - if (n==1) return 1; - - return memo[n]=f(n-1)+f(n-2); + if (n == 0) return 0; + if (n == 1) return 1; + + return memo[n] = f(n - 1) + f(n - 2); } ``` From d242c9036fe2f6b5500e74df25f1c92fcad9cca1 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Sat, 16 Dec 2023 22:45:59 -0500 Subject: [PATCH 04/11] Beautify + Add problems --- src/dynamic_programming/intro-to-dp.md | 54 ++++++++++++++------------ 1 file changed, 30 insertions(+), 24 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index 8d375fd30..932a44409 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ 
b/src/dynamic_programming/intro-to-dp.md @@ -42,7 +42,7 @@ int f(int n) { } ``` -With our new memoized recursive function, $f(29)$, which used to result in *over 1 million calls*, now results in *only 57 calls*, *nearly 20,000 times fewer* function calls! Ironically, we are now limited by our data type. $f(46)$ is the last fibonacci number that can fit into a signed 32-bit integer. +With our new memoized recursive function, $f(29)$, which used to result in *over 1 million calls*, now results in *only 57* calls, nearly *20,000 times* fewer function calls! Ironically, we are now limited by our data type. $f(46)$ is the last fibonacci number that can fit into a signed 32-bit integer. ### *** Important Note *** @@ -64,13 +64,13 @@ int f(int n) { Or analogously: ```cpp -mapmemo; +map memo; int f(int n) { if (memo.count(n)) return memo[n]; - if (n==0) return 0; - if (n==1) return 1; - - return memo[n]=f(n-1)+f(n-2); + if (n == 0) return 0; + if (n == 1) return 1; + + return memo[n] = f(n - 1) + f(n - 2); } ``` @@ -80,10 +80,10 @@ These alternative ways of saving state are primarily useful when saving vectors ### *** Very important note*** The layman's way of analyzing the runtime of a memoized recursive function is: -** (work per subproblem) * (number of subproblems) ** +$$** (work per subproblem) * (number of subproblems) **$$ -Using a binary search tree (map in C++) to save states will technically result in $O(n * log(n))$ as each lookup and insertion will take $O(log(n))$ work and with $O(n)$ unique subproblems we have $O(n*log(n))$ time. +Using a binary search tree (map in C++) to save states will technically result in $O(n \log n)$ as each lookup and insertion will take $O(\log n)$ work and with $O(n)$ unique subproblems we have $O(n \log n)$ time. 
## Bottom-up Dynamic Programming @@ -96,17 +96,16 @@ const int MAXN = 100; int fib[MAXN]; int f(int n) { - - fib[0]=0; - fib[1]=1; - for(int i=2;i<=n;i++) fib[i]=fib[i-1]+fib[i-2]; - + fib[0] = 0; + fib[1] = 1; + for (int i = 2; i <= n; i++) fib[i] = fib[i - 1] + fib[i - 2]; + return fib[n]; } ``` Of course, as written, this is a bit silly for two reasons: -Firstly,we do repeated work if we call the function more than once. +Firstly, we do repeated work if we call the function more than once. Secondly, we only need to use the two previous values to calculate the current element. Therefore, we can reduce our memory from $O(n)$ to $O(1)$. An example of a bottom up dynamic programing solution for fibonacci which uses $O(1)$ might be: @@ -115,18 +114,17 @@ An example of a bottom up dynamic programing solution for fibonacci which uses $ const int MAX_SAVE = 3; int fib[MAX_SAVE]; -int f2(int n) { - - fib[0]=0; - fib[1]=1; - for(int i=2;i<=n;i++) - fib[i % MAX_SAVE]=fib[(i-1) % MAX_SAVE]+fib[(i-2) % MAX_SAVE]; - +int f(int n) { + fib[0] = 0; + fib[1] = 1; + for (int i = 2; i <= n; i++) + fib[i % MAX_SAVE] = fib[(i - 1) % MAX_SAVE] + fib[(i - 2) % MAX_SAVE]; + return fib[n % MAX_SAVE]; } ``` -Note that we've changed the constant from *MAXN* TO *MAX_SAVE*. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. Additionally, we use a common trick (using the modulo operator) only maintaining the values we need. +Note that we've changed the constant from `MAXN` TO `MAX_SAVE`. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. Additionally, we use a common trick (using the modulo operator) only maintaining the values we need. That's it. That's the basics of dynamic programming: Don't repeat work you've done before. 
@@ -134,7 +132,7 @@ One of the tricks to getting better at dynamic programming is to study some of t ## Classic Dynamic Programming Problems - 0-1 Knapsack -- Subset sum +- Subset Sum - Longest Increasing Subsequence - Counting all possible paths from top left to bottom right corner of a matrix - Longest Common Subsequence @@ -143,5 +141,13 @@ One of the tricks to getting better at dynamic programming is to study some of t - Longest Palindromic Subsequence - Rod Cutting - Edit Distance +- Bitmask Dynamic Programming +- Digit Dynamic Programming +- Dynamic Programming on Trees + +Of course, the most important trick is to practice. -Of course, the most important trick is to practice. \ No newline at end of file +## Practice Problems +* [LeetCode - 1137. N-th Tribonacci Number](https://leetcode.com/problems/n-th-tribonacci-number/description/) +* [LeetCode - 118. Pascal's Triangle](https://leetcode.com/problems/pascals-triangle/description/) +* [LeetCode - 1025. Divisor Game](https://leetcode.com/problems/divisor-game/description/) \ No newline at end of file From 148942a47430c0b8c96f48bcf8ef778a3bcde842 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Sat, 16 Dec 2023 22:49:34 -0500 Subject: [PATCH 05/11] Updated Article Date --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 567672f4d..98a33063a 100644 --- a/README.md +++ b/README.md @@ -25,7 +25,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith - January 16, 2022: Switched to the [MkDocs](https://www.mkdocs.org/) site generator with the [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/) theme, which give the website a more modern look, brings a couple of new features (dark mode, better search, ...), makes the website more stable (in terms of rendering math formulas), and makes it easier to contribute. 
### New articles -- (29 November 2023) [Introduction to Dynamic Programming] (https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) +- (16 December 2023) [Introduction to Dynamic Programming] (https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) - (10 September 2023) [Tortoise and Hare Algorithm](https://cp-algorithms.com/others/tortoise_and_hare.html) - (12 July 2023) [Finding faces of a planar graph](https://cp-algorithms.com/geometry/planar.html) - (18 April 2023) [Bit manipulation](https://cp-algorithms.com/algebra/bit-manipulation.html) From f988084b5938e683db8fc9eff4fb52d5b95361bf Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Sun, 17 Dec 2023 00:41:10 -0500 Subject: [PATCH 06/11] Update README.md corrected navigation link --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 60c6b5e4a..c8861b356 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith ### New articles -- (16 December 2023) [Introduction to Dynamic Programming] (https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) +- (16 December 2023) [Introduction to Dynamic Programming](https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) - (8 December 2023) [Hungarian Algorithm](https://cp-algorithms.com/graph/hungarian-algorithm.html) - (10 September 2023) [Tortoise and Hare Algorithm](https://cp-algorithms.com/others/tortoise_and_hare.html) - (12 July 2023) [Finding faces of a planar graph](https://cp-algorithms.com/geometry/planar.html) From 6039af54cd1e98aece17853b194137139d764fd2 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Sun, 17 Dec 2023 01:07:59 -0500 Subject: [PATCH 07/11] Update navigation.md update Intro to dp navigation (list item) --- src/navigation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/navigation.md b/src/navigation.md index fcaa1941b..de682c536 100644 --- 
a/src/navigation.md +++ b/src/navigation.md @@ -61,7 +61,7 @@ search: - Advanced - [Deleting from a data structure in O(T(n) log n)](data_structures/deleting_in_log_n.md) - Dynamic Programming - [Introducton to Dynamic Programming](dynamic_programming/intro-to-dp.md) + - [Introduction to Dynamic Programming](dynamic_programming/intro-to-dp.md) - DP optimizations - [Divide and Conquer DP](dynamic_programming/divide-and-conquer-dp.md) - [Knuth's Optimization](dynamic_programming/knuth-optimization.md) From d7bd15d2e1e7cfc034560522f4cfd0f809cc7871 Mon Sep 17 00:00:00 2001 From: Michael Hayter Date: Sun, 17 Dec 2023 01:52:09 -0500 Subject: [PATCH 08/11] Update intro-to-dp.md Update (mathjax) equation formatting --- src/dynamic_programming/intro-to-dp.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index 932a44409..51f6fabd8 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -80,7 +80,7 @@ These alternative ways of saving state are primarily useful when saving vectors ### *** Very important note*** The layman's way of analyzing the runtime of a memoized recursive function is: -$$** (work per subproblem) * (number of subproblems) **$$ +$${ \text{work per subproblem} * \text{number of subproblems} }$$ Using a binary search tree (map in C++) to save states will technically result in $O(n \log n)$ as each lookup and insertion will take $O(\log n)$ work and with $O(n)$ unique subproblems we have $O(n \log n)$ time. @@ -150,4 +150,4 @@ Of course, the most important trick is to practice. ## Practice Problems * [LeetCode - 1137. N-th Tribonacci Number](https://leetcode.com/problems/n-th-tribonacci-number/description/) * [LeetCode - 118. Pascal's Triangle](https://leetcode.com/problems/pascals-triangle/description/) -* [LeetCode - 1025. 
Divisor Game](https://leetcode.com/problems/divisor-game/description/) \ No newline at end of file +* [LeetCode - 1025. Divisor Game](https://leetcode.com/problems/divisor-game/description/) From 3760b394ef5f4f126c8aa1cedd098cd3534982dc Mon Sep 17 00:00:00 2001 From: Oleksandr Kulkov Date: Thu, 4 Jan 2024 14:37:08 +0100 Subject: [PATCH 09/11] update to redeploy preview --- src/dynamic_programming/intro-to-dp.md | 1 + 1 file changed, 1 insertion(+) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index 51f6fabd8..b55cc81a7 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -151,3 +151,4 @@ Of course, the most important trick is to practice. * [LeetCode - 1137. N-th Tribonacci Number](https://leetcode.com/problems/n-th-tribonacci-number/description/) * [LeetCode - 118. Pascal's Triangle](https://leetcode.com/problems/pascals-triangle/description/) * [LeetCode - 1025. Divisor Game](https://leetcode.com/problems/divisor-game/description/) + From 64743c0bff5d546335c627dd240e28dc41892225 Mon Sep 17 00:00:00 2001 From: Jakob Kogler Date: Sun, 28 Jan 2024 20:44:24 +0100 Subject: [PATCH 10/11] Update intro-to-dp.md fix formula and add explanation --- src/dynamic_programming/intro-to-dp.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/src/dynamic_programming/intro-to-dp.md b/src/dynamic_programming/intro-to-dp.md index b55cc81a7..86e7639fd 100644 --- a/src/dynamic_programming/intro-to-dp.md +++ b/src/dynamic_programming/intro-to-dp.md @@ -77,17 +77,18 @@ int f(int n) { Both of these will almost always be slower than the array-based version for a generic memoized recursive function. These alternative ways of saving state are primarily useful when saving vectors or strings as part of the state space. 
-### *** Very important note*** - The layman's way of analyzing the runtime of a memoized recursive function is: -$${ \text{work per subproblem} * \text{number of subproblems} }$$ +$$\text{work per subproblem} * \text{number of subproblems}$$ Using a binary search tree (map in C++) to save states will technically result in $O(n \log n)$ as each lookup and insertion will take $O(\log n)$ work and with $O(n)$ unique subproblems we have $O(n \log n)$ time. +This approach is called top-down, as we can call the function with a query value and the calculation starts going from the top (queried value) down to the bottom (base cases of the recursion), and makes shortcuts via memoization on the way. + ## Bottom-up Dynamic Programming Until now you've only seen top-down dynamic programming with memoization. However, we can also solve problems with bottom-up dynamic programming. +Bottom-up is exactly the opposite of top-down: you start at the bottom (base cases of the recursion) and extend it to more and more values. To create a bottom-up approach for fibonacci numbers, we initialize the base cases in an array. Then, we simply use the recursive definition on the array: @@ -126,7 +127,7 @@ int f(int n) { Note that we've changed the constant from `MAXN` TO `MAX_SAVE`. This is because the total number of elements we need to have access to is only 3. It no longer scales with the size of input and is, by definition, $O(1)$ memory. 
From 2093f3f7bd0d99fc347aa8fd14609a262e60ad15 Mon Sep 17 00:00:00 2001 From: Jakob Kogler Date: Sun, 28 Jan 2024 20:45:15 +0100 Subject: [PATCH 11/11] Update README.md fix date --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index c8861b356..2aeaa5d77 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ Compiled pages are published at [https://cp-algorithms.com/](https://cp-algorith ### New articles -- (16 December 2023) [Introduction to Dynamic Programming](https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) +- (28 January 2024) [Introduction to Dynamic Programming](https://cp-algorithms.com/dynamic_programming/intro-to-dp.html) - (8 December 2023) [Hungarian Algorithm](https://cp-algorithms.com/graph/hungarian-algorithm.html) - (10 September 2023) [Tortoise and Hare Algorithm](https://cp-algorithms.com/others/tortoise_and_hare.html) - (12 July 2023) [Finding faces of a planar graph](https://cp-algorithms.com/geometry/planar.html) pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy
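The final version of the article in the patches above presents top-down memoization and the $O(1)$-memory bottom-up loop separately. For reference, both techniques can be sketched side by side in one self-contained file (illustrative only, not part of the patch series; it swaps the article's `int` results for `long long` so values beyond $f(46)$ do not overflow):

```cpp
const int MAXN = 100;
bool found[MAXN];        // found[n] marks whether memo[n] has been filled in
long long memo[MAXN];

// Top-down: recurse on the definition, but cache each state the first
// time it is computed, so every subproblem is solved only once.
long long fib_memo(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    if (found[n]) return memo[n];
    found[n] = true;
    return memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
}

// Bottom-up with O(1) memory: keep only the two previous values,
// the same idea as the article's MAX_SAVE modulo trick.
long long fib_bottom_up(int n) {
    if (n == 0) return 0;
    long long prev = 0, cur = 1;
    for (int i = 2; i <= n; i++) {
        long long next = prev + cur;
        prev = cur;
        cur = next;
    }
    return cur;
}
```

Both functions agree on every input; for example $f(46) = 1836311903$, the last value that also fits in a signed 32-bit `int`, as noted in the article.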