src/algebra/extended-euclid-algorithm.md (2 additions, 2 deletions)

@@ -93,7 +93,7 @@ int gcd(int a, int b, int& x, int& y) {
 }
 ```

-If you look closely at the variable`a1` and `b1`, you can notice that they taking exactly the same values as in the iterative version of the normal [Euclidean algorithm](euclid-algorithm.md). So the algorithm will at least compute the correct GCD.
+If you look closely at the variables`a1` and `b1`, you can notice that they take exactly the same values as in the iterative version of the normal [Euclidean algorithm](euclid-algorithm.md). So the algorithm will at least compute the correct GCD.

 To see why the algorithm also computes the correct coefficients, you can check that the following invariants will hold at any time (before the while loop, and at the end of each iteration): $x \cdot a + y \cdot b = a_1$ and $x_1 \cdot a + y_1 \cdot b = b_1$.
 It's trivial to see, that these two equations are satisfied at the beginning.
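For context, the hunk above references the iterative extended Euclidean algorithm whose body is not shown in this diff; the following is a minimal self-contained sketch (treat the exact code as an assumption) that maintains the invariants the changed paragraph talks about:

```cpp
#include <iostream>
#include <tuple>
using namespace std;

// Iterative extended GCD: returns gcd(a, b) and fills x, y so that
// x*a + y*b == gcd(a, b). The invariants x*a + y*b == a1 and
// x1*a + y1*b == b1 hold before the loop and after every iteration,
// while (a1, b1) run through the same values as in the plain Euclidean algorithm.
int gcd(int a, int b, int& x, int& y) {
    x = 1, y = 0;
    int x1 = 0, y1 = 1, a1 = a, b1 = b;
    while (b1) {
        int q = a1 / b1;
        tie(x, x1) = make_tuple(x1, x - q * x1);
        tie(y, y1) = make_tuple(y1, y - q * y1);
        tie(a1, b1) = make_tuple(b1, a1 - q * b1);
    }
    return a1;
}

int main() {
    int x, y;
    int g = gcd(55, 80, x, y);
    cout << g << " = " << x << " * 55 + " << y << " * 80\n";  // prints: 5 = 3 * 55 + -2 * 80
}
```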
@@ -107,6 +107,6 @@ However if you do so, you lose the ability to argue about the invariants.
-There can only be $p$ different remainders modulo $p$, and at most $p^2$ different pairs of remainders, so there are at least two identical pairs among them. This is sufficient to prove the sequence is periodic, as a Fibonacci number is only determined by it's two predecessors. Hence if two pairs of consecutive numbers repeat, that would also mean the numbers after the pair will repeat in the same fashion.
+There can only be $p$ different remainders modulo $p$, and at most $p^2$ different pairs of remainders, so there are at least two identical pairs among them. This is sufficient to prove the sequence is periodic, as a Fibonacci number is only determined by its two predecessors. Hence if two pairs of consecutive numbers repeat, that would also mean the numbers after the pair will repeat in the same fashion.

 We now choose two pairs of identical remainders with the smallest indices in the sequence. Let the pairs be $(F_a,\ F_{a + 1})$ and $(F_b,\ F_{b + 1})$. We will prove that $a = 0$. If this was false, there would be two previous pairs $(F_{a-1},\ F_a)$ and $(F_{b-1},\ F_b)$, which, by the property of Fibonacci numbers, would also be equal. However, this contradicts the fact that we had chosen pairs with the smallest indices, completing our proof that there is no pre-period (i.e the numbers are periodic starting from $F_0$).
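The changed paragraph argues that the Fibonacci sequence taken modulo $p$ is purely periodic. A brute-force sketch of that statement (the function name `pisano_period` and the driver are illustrative, not taken from the article):

```cpp
#include <iostream>
using namespace std;

// Walk the Fibonacci sequence modulo p (p >= 2) until the starting pair (0, 1)
// reappears; by the pigeonhole argument in the diff this happens within p*p steps,
// and the absence of a pre-period means the sequence repeats from F_0 onwards.
long long pisano_period(long long p) {
    long long prev = 0, cur = 1;       // (F_0, F_1) modulo p
    for (long long i = 1; ; i++) {
        long long next = (prev + cur) % p;
        prev = cur;
        cur = next;                    // now (prev, cur) = (F_i, F_{i+1}) modulo p
        if (prev == 0 && cur == 1)
            return i;
    }
}

int main() {
    cout << pisano_period(10) << "\n";  // 60: the last digits of Fibonacci numbers repeat with period 60
}
```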
src/algebra/linear-diophantine-equation.md (3 additions, 3 deletions)

@@ -129,12 +129,12 @@ Let there be two intervals: $[min_x; max_x]$ and $[min_y; max_y]$ and let's say

 Note that if $a$ or $b$ is $0$, then the problem only has one solution. We don't consider this case here.

-First, we can find a solution which have minimum value of $x$, such that $x \ge min_x$. To do this, we first find any solution of the Diophantine equation. Then, we shift this solution to get $x \ge min_x$ (using what we know about the set of all solutions in previous section). This can be done in $O(1)$.
+First, we can find a solution which has minimum value of $x$, such that $x \ge min_x$. To do this, we first find any solution of the Diophantine equation. Then, we shift this solution to get $x \ge min_x$ (using what we know about the set of all solutions in previous section). This can be done in $O(1)$.

 Denote this minimum value of $x$ by $l_{x1}$.

-Similarly, we can find the maximum value of $x$ which satisfy $x \le max_x$. Denote this maximum value of $x$ by $r_{x1}$.
+Similarly, we can find the maximum value of $x$ which satisfies $x \le max_x$. Denote this maximum value of $x$ by $r_{x1}$.

-Similarly, we can find the minimum value of $y$ $(y \ge min_y)$ and maximum values of $y$ $(y \le max_y)$. Denote the corresponding values of $x$ by $l_{x2}$ and $r_{x2}$.
+Similarly, we can find the minimum value of $y$ $(y \ge min_y)$ and maximum value of $y$ $(y \le max_y)$. Denote the corresponding values of $x$ by $l_{x2}$ and $r_{x2}$.

 The final solution is all solutions with x in intersection of $[l_{x1}, r_{x1}]$ and $[l_{x2}, r_{x2}]$. Let denote this intersection by $[l_x, r_x]$.
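A sketch of the shifting step the changed lines describe, under the simplifying assumption $a, b > 0$; the helper names `extgcd` and `shift_solution` are illustrative and may not match the article's code:

```cpp
#include <iostream>
using namespace std;

// Recursive extended GCD: g = a*x + b*y.
long long extgcd(long long a, long long b, long long &x, long long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long d = extgcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - y1 * (a / b);
    return d;
}

// All solutions of a*x + b*y = c (with a, b already divided by g) have the form
// (x + cnt*b, y - cnt*a); shifting by cnt steps is the O(1) operation from the text.
void shift_solution(long long &x, long long &y, long long a, long long b, long long cnt) {
    x += cnt * b;
    y -= cnt * a;
}

int main() {
    // Example: 3x + 5y = 7, find the smallest x with x >= 10.
    long long a = 3, b = 5, c = 7, minx = 10;
    long long x, y;
    long long g = extgcd(a, b, x, y);
    if (c % g != 0) { cout << "no solution\n"; return 0; }
    x *= c / g; y *= c / g;      // one particular solution
    a /= g; b /= g;              // step sizes for shifting

    shift_solution(x, y, a, b, (minx - x) / b);
    if (x < minx) shift_solution(x, y, a, b, 1);
    long long lx1 = x;           // the l_{x1} from the text
    cout << "l_x1 = " << lx1 << ", y = " << y << "\n";  // 3*14 + 5*(-7) = 7
}
```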
src/algebra/prime-sieve-linear.md (1 addition, 1 deletion)

@@ -87,7 +87,7 @@ In practice the linear sieve runs about as fast as a typical implementation of t

 In comparison to optimized versions of the sieve of Erathosthenes, e.g. the segmented sieve, it is much slower.

-Considering the memory requirements of this algorithm - an array $lp []$ of length $n$, and an array of $pr []$ of length $\frac n {\ln n}$, this algorithm seems to worse than the classic sieve in every way.
+Considering the memory requirements of this algorithm - an array $lp []$ of length $n$, and an array of $pr []$ of length $\frac n {\ln n}$, this algorithm seems to be worse than the classic sieve in every way.

 However, its redeeming quality is that this algorithm calculates an array $lp []$, which allows us to find factorization of any number in the segment $[2; n]$ in the time of the size order of this factorization. Moreover, using just one extra array will allow us to avoid divisions when looking for factorization.
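For context, a compact sketch of the linear sieve and of the factorization-via-`lp[]` trick mentioned in the last context line (illustrative, not necessarily the article's exact code):

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    const int N = 100;
    // Linear sieve: lp[i] = smallest prime factor of i, pr = list of primes <= N.
    vector<int> lp(N + 1, 0), pr;
    for (int i = 2; i <= N; i++) {
        if (lp[i] == 0) {              // i was not marked, so it is prime
            lp[i] = i;
            pr.push_back(i);
        }
        for (int p : pr) {
            if (p > lp[i] || (long long)i * p > N)
                break;
            lp[i * p] = p;             // every composite is marked exactly once
        }
    }

    // Factorize a number in time proportional to the number of its prime factors,
    // reading lp[] instead of doing trial division over candidate primes.
    int x = 84;
    while (x > 1) {
        cout << lp[x] << " ";          // prints 2 2 3 7
        x /= lp[x];
    }
    cout << "\n";
}
```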
src/algebra/sieve-of-eratosthenes.md (7 additions, 7 deletions)

@@ -15,15 +15,15 @@ A proper multiple of a number $x$, is a number greater than $x$ and divisible by
 Then we find the next number that hasn't been marked as composite, in this case it is 3.
 Which means 3 is prime, and we mark all proper multiples of 3 as composite.
 The next unmarked number is 5, which is the next prime number, and we mark all proper multiples of it.
-And we continue this procedure until we processed all numbers in the row.
+And we continue this procedure until we have processed all numbers in the row.

 In the following image you can see a visualization of the algorithm for computing all prime numbers in the range $[1; 16]$. It can be seen, that quite often we mark numbers as composite multiple times.

 <center></center>

 The idea behind is this:
 A number is prime, if none of the smaller prime numbers divides it.
-Since we iterate over the prime numbers in order, we already marked all numbers, who are divisible by at least one of the prime numbers, as divisible.
+Since we iterate over the prime numbers in order, we already marked all numbers, which are divisible by at least one of the prime numbers, as divisible.
 Hence if we reach a cell and it is not marked, then it isn't divisible by any smaller prime number and therefore has to be prime.

 ## Implementation
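The implementation section itself is not part of this diff; a minimal sketch of the marking procedure the changed lines describe (the article's own code will differ in details):

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n = 16;
    vector<bool> is_prime(n + 1, true);
    is_prime[0] = is_prime[1] = false;
    for (int i = 2; i <= n; i++) {
        if (!is_prime[i])
            continue;                  // i was already marked as a proper multiple of a smaller prime
        for (int j = 2 * i; j <= n; j += i)
            is_prime[j] = false;       // mark all proper multiples of the prime i
    }
    for (int i = 2; i <= n; i++)
        if (is_prime[i])
            cout << i << " ";          // 2 3 5 7 11 13
    cout << "\n";
}
```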
@@ -53,15 +53,15 @@ Using such implementation the algorithm consumes $O(n)$ of the memory (obviously
 It's simple to prove a running time of $O(n \log n)$ without knowing anything about the distribution of primes - ignoring the `is_prime` check, the inner loop runs (at most) $n/i$ times for $i = 2, 3, 4, \dots$, leading the total number of operations in the inner loop to be a harmonic sum like $n(1/2 + 1/3 + 1/4 + \cdots)$, which is bounded by $O(n \log n)$.

 Let's prove that algorithm's running time is $O(n \log \log n)$.
-The algorithm will perform $\frac{n}{p}$ operations for every prime $p \le n$ the inner loop.
+The algorithm will perform $\frac{n}{p}$ operations for every prime $p \le n$ in the inner loop.
 Hence, we need to evaluate the next expression:

 $$\sum_{\substack{p \le n, \\\ p \text{ prime}}} \frac n p = n \cdot \sum_{\substack{p \le n, \\\ p \text{ prime}}} \frac 1 p.$$

 Let's recall two known facts.

 - The number of prime numbers less than or equal to $n$ is approximately $\frac n {\ln n}$.
-- The $k$-th prime number approximately equals $k \ln k$ (that follows immediately from the previous fact).
+- The $k$-th prime number approximately equals $k \ln k$ (this follows immediately from the previous fact).

 Thus we can write down the sum in the following way:

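For reference, the estimate this hunk leads into is usually completed along these lines (a standard argument; the article's exact derivation is not shown in the diff):

$$\sum_{\substack{p \le n, \\ p \text{ prime}}} \frac 1 p \approx \frac 1 2 + \sum_{k = 2}^{n / \ln n} \frac 1 {k \ln k} \approx \int_2^{n / \ln n} \frac{dk}{k \ln k} = \ln \ln \frac{n}{\ln n} - \ln \ln 2 \approx \ln \ln n,$$

so the total number of operations is on the order of $n \ln \ln n$.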
@@ -115,7 +115,7 @@ Such optimization doesn't affect the complexity (indeed, by repeating the proof

 Since all even numbers (except $2$) are composite, we can stop checking even numbers at all. Instead, we need to operate with odd numbers only.

-First, it will allow us to half the needed memory. Second, it will reduce the number of operations performing by algorithm approximately in half.
+First, it will allow us to halve the needed memory. Second, it will reduce the number of operations performed by algorithm approximately in half.

 ### Memory consumption and speed of operations

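One common way to write the odd-only variant described in this hunk, where `is_prime[i]` stands for the odd number $2i + 1$ (an illustrative sketch, not the article's code):

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    long long n = 100;
    // Only odd numbers are stored: index i represents the number 2*i + 1.
    vector<bool> is_prime((n + 1) / 2, true);
    for (long long i = 3; i * i <= n; i += 2) {
        if (!is_prime[i / 2])
            continue;
        for (long long j = i * i; j <= n; j += 2 * i)   // only odd multiples of i
            is_prime[j / 2] = false;
    }
    cout << 2 << " ";                                   // the single even prime
    for (long long i = 3; i <= n; i += 2)
        if (is_prime[i / 2])
            cout << i << " ";
    cout << "\n";                                       // 2 3 5 7 ... 97
}
```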
@@ -139,13 +139,13 @@ Another drawback from `bitset` is that you need to know the size at compile time

 ### Segmented Sieve

-It follows from the optimization "sieving till root" that there is no need to keep the whole array `is_prime[1...n]` at all time.
+It follows from the optimization "sieving till root" that there is no need to keep the whole array `is_prime[1...n]` at all times.
 For sieving it is enough to just keep the prime numbers until the root of $n$, i.e. `prime[1... sqrt(n)]`, split the complete range into blocks, and sieve each block separately.

 Let $s$ be a constant which determines the size of the block, then we have $\lceil {\frac n s} \rceil$ blocks altogether, and the block $k$ ($k = 0 ... \lfloor {\frac n s} \rfloor$) contains the numbers in a segment $[ks; ks + s - 1]$.
 We can work on blocks by turns, i.e. for every block $k$ we will go through all the prime numbers (from $1$ to $\sqrt n$) and perform sieving using them.
 It is worth noting, that we have to modify the strategy a little bit when handling the first numbers: first, all the prime numbers from $[1; \sqrt n]$ shouldn't remove themselves; and second, the numbers $0$ and $1$ should be marked as non-prime numbers.
-While working on the last block it should not be forgotten that the last needed number $n$ is not necessary located in the end of the block.
+While working on the last block it should not be forgotten that the last needed number $n$ is not necessarily located at the end of the block.

 As discussed previously, the typical implementation of the Sieve of Eratosthenes is limited by the speed how fast you can load data into the CPU caches.
 By splitting the range of potential prime numbers $[1; n]$ into smaller blocks, we never have to keep multiple blocks in memory at the same time, and all operations are much more cache-friendlier.
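A sketch of a segmented sieve that counts the primes up to $n$ in the block-by-block fashion described above (the block size `S` and the overall structure are a common formulation and may differ from the article's implementation):

```cpp
#include <iostream>
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;

// Count primes in [1, n]: first sieve the primes up to sqrt(n), then sieve each
// block [k*S, k*S + S - 1] with them, never keeping the whole range in memory.
int count_primes(int n) {
    const int S = 10000;

    int nsqrt = sqrt(n);
    vector<char> is_prime(nsqrt + 2, true);
    vector<int> primes;
    for (int i = 2; i <= nsqrt; i++) {
        if (is_prime[i]) {
            primes.push_back(i);
            for (int j = i * i; j <= nsqrt; j += i)
                is_prime[j] = false;
        }
    }

    int result = 0;
    vector<char> block(S);
    for (int k = 0; k * S <= n; k++) {
        fill(block.begin(), block.end(), true);
        int start = k * S;
        for (int p : primes) {
            // first multiple of p inside the block, but never p itself
            int start_idx = (start + p - 1) / p;
            for (int j = max(start_idx, p) * p - start; j < S; j += p)
                block[j] = false;
        }
        if (k == 0)
            block[0] = block[1] = false;   // 0 and 1 are not prime
        for (int i = 0; i < S && start + i <= n; i++)
            if (block[i])
                result++;
    }
    return result;
}

int main() {
    cout << count_primes(100) << "\n";  // 25
}
```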
src/data_structures/segment_tree.md (1 addition, 1 deletion)

@@ -240,7 +240,7 @@ The memory consumption is limited by $4n$, even though a Segment Tree of an arra
 However it can be reduced.
 We renumber the vertices of the tree in the order of an Euler tour traversal (pre-order traversal), and we write all these vertices next to each other.

-Lets look at a vertex at index $v$, and let him be responsible for the segment $[l, r]$, and let $mid = \dfrac{l + r}{2}$.
+Let's look at a vertex at index $v$, and let it be responsible for the segment $[l, r]$, and let $mid = \dfrac{l + r}{2}$.
 It is obvious that the left child will have the index $v + 1$.
 The left child is responsible for the segment $[l, mid]$, i.e. in total there will be $2 * (mid - l + 1) - 1$ vertices in the left child's subtree.
 Thus we can compute the index of the right child of $v$. The index will be $v + 2 * (mid - l + 1)$.
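The indexing rule derived in these lines (left child at $v + 1$, right child at $v + 2 \cdot (mid - l + 1)$) gives a tree of $2n - 1$ vertices; a range-sum sketch of it (the article's code may differ in details):

```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

vector<int> t;  // preorder-numbered segment tree, 2n - 1 vertices

void build(const vector<int>& a, int v, int tl, int tr) {
    if (tl == tr) {
        t[v] = a[tl];
    } else {
        int tm = (tl + tr) / 2;
        build(a, v + 1, tl, tm);                        // left subtree fills the next 2*(tm-tl+1)-1 cells
        build(a, v + 2 * (tm - tl + 1), tm + 1, tr);    // right child index from the formula above
        t[v] = t[v + 1] + t[v + 2 * (tm - tl + 1)];
    }
}

int sum(int v, int tl, int tr, int l, int r) {
    if (l > r)
        return 0;
    if (l == tl && r == tr)
        return t[v];
    int tm = (tl + tr) / 2;
    return sum(v + 1, tl, tm, l, min(r, tm))
         + sum(v + 2 * (tm - tl + 1), tm + 1, tr, max(l, tm + 1), r);
}

int main() {
    vector<int> a = {1, 2, 3, 4, 5};
    int n = a.size();
    t.assign(2 * n, 0);
    build(a, 0, 0, n - 1);
    cout << sum(0, 0, n - 1, 1, 3) << "\n";  // 2 + 3 + 4 = 9
}
```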