Commit 1a24305

Merge branch 'master' into mkdocs
2 parents 10df323 + d0924ca commit 1a24305

File tree

9 files changed

+112
-82
lines changed


src/algebra/fft.md

Lines changed: 4 additions & 4 deletions
Original file line number · Diff line number · Diff line change
@@ -299,26 +299,26 @@ Therefore if we reverse the bits of the position of each coefficient, and sort t
299299

300300
For example the desired order for $n = 8$ has the form:
301301

302-
$$a = \left\{ \left[ (a_0, a_4), (a_2, a_6) \right], \left[ (a_1, a_5), (a_3, a_7) \right] \right\}$$
302+
$$a = \bigg\{ \Big[ (a_0, a_4), (a_2, a_6) \Big], \Big[ (a_1, a_5), (a_3, a_7) \Big] \bigg\}$$
303303

304304
Indeed in the first recursion level (surrounded by curly braces), the vector gets divided into two parts $[a_0, a_2, a_4, a_6]$ and $[a_1, a_3, a_5, a_7]$.
305305
As we see, in the bit-reversal permutation this corresponds to simply dividing the vector into two halves: the first $\frac{n}{2}$ elements and the last $\frac{n}{2}$ elements.
306306
Then there is a recursive call for each half.
307307
Let the resulting DFT for each of them be returned in place of the elements themselves (i.e. the first half and the second half of the vector $a$, respectively).
308308

309-
$$a = \left\{[y_0^0, y_1^0, y_2^0, y_3^0], [y_0^1, y_1^1, y_2^1, y_3^1]\right\}$$
309+
$$a = \bigg\{ \Big[y_0^0, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^1, y_1^1, y_2^1, y_3^1 \Big] \bigg\}$$
310310

311311
Now we want to combine the two DFTs into one for the complete vector.
312312
The order of the elements is ideal, and we can also perform the union directly in this vector.
313313
We can take the elements $y_0^0$ and $y_0^1$ and perform the butterfly transform.
314314
The place of the resulting two values is the same as the place of the two initial values, so we get:
315315

316-
$$a = \left\{[y_0^0 + w_n^0 y_0^1, y_1^0, y_2^0, y_3^0], [y_0^0 - w_n^0 y_0^1, y_1^1, y_2^1, y_3^1]\right\}$$
316+
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0, y_2^0, y_3^0\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^1, y_2^1, y_3^1\Big] \bigg\}$$
317317

318318
Similarly we can compute the butterfly transform of $y_1^0$ and $y_1^1$ and put the results in their place, and so on.
319319
As a result we get:
320320

321-
$$a = \left\{[y_0^0 + w_n^0 y_0^1, y_1^0 + w_n^1 y_1^1, y_2^0 + w_n^2 y_2^1, y_3^0 + w_n^3 y_3^1], [y_0^0 - w_n^0 y_0^1, y_1^0 - w_n^1 y_1^1, y_2^0 - w_n^2 y_2^1, y_3^0 - w_n^3 y_3^1]\right\}$$
321+
$$a = \bigg\{ \Big[y_0^0 + w_n^0 y_0^1, y_1^0 + w_n^1 y_1^1, y_2^0 + w_n^2 y_2^1, y_3^0 + w_n^3 y_3^1\Big], \Big[y_0^0 - w_n^0 y_0^1, y_1^0 - w_n^1 y_1^1, y_2^0 - w_n^2 y_2^1, y_3^0 - w_n^3 y_3^1\Big] \bigg\}$$
322322

323323
Thus we computed the required DFT from the vector $a$.
324324
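The whole scheme described in this section — bit-reversal permutation first, then butterfly combines of growing block length — can be sketched as a short iterative in-place FFT. This is an illustrative C++ sketch; the function names and the multiplication wrapper are ours, not part of this diff.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Iterative in-place FFT: apply the bit-reversal permutation, then combine
// blocks of growing length len with butterflies (u, v) -> (u + w v, u - w v).
void fft(std::vector<std::complex<double>>& a, bool invert) {
    int n = a.size();  // assumed to be a power of two
    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;
        if (i < j)
            std::swap(a[i], a[j]);
    }
    const double PI = std::acos(-1.0);
    for (int len = 2; len <= n; len <<= 1) {
        double ang = 2 * PI / len * (invert ? -1 : 1);
        std::complex<double> wlen(std::cos(ang), std::sin(ang));
        for (int i = 0; i < n; i += len) {
            std::complex<double> w(1);
            for (int j = 0; j < len / 2; j++) {
                std::complex<double> u = a[i + j], v = a[i + j + len / 2] * w;
                a[i + j] = u + v;            // y_j^0 + w^j y_j^1
                a[i + j + len / 2] = u - v;  // y_j^0 - w^j y_j^1
                w *= wlen;
            }
        }
    }
    if (invert)
        for (std::complex<double>& x : a)
            x /= n;
}

// Polynomial multiplication through the FFT above.
std::vector<long long> multiply(const std::vector<long long>& a,
                                const std::vector<long long>& b) {
    std::vector<std::complex<double>> fa(a.begin(), a.end()), fb(b.begin(), b.end());
    int n = 1;
    while (n < (int)(a.size() + b.size()))
        n <<= 1;
    fa.resize(n);
    fb.resize(n);
    fft(fa, false);
    fft(fb, false);
    for (int i = 0; i < n; i++)
        fa[i] *= fb[i];
    fft(fa, true);
    std::vector<long long> c(n);
    for (int i = 0; i < n; i++)
        c[i] = std::llround(fa[i].real());
    return c;
}
```

For example, multiplying $1+2x$ by $3+4x$ yields $3+10x+8x^2$.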

src/algebra/phi-function.md

Lines changed: 0 additions & 1 deletion
@@ -33,7 +33,6 @@ $$\phi(p^k) = p^k - p^{k-1}.$$
3333

3434
This relation is not trivial to see. It follows from the [Chinese remainder theorem](chinese-remainder-theorem.md). The Chinese remainder theorem guarantees that for each $0 \le x < a$ and each $0 \le y < b$, there exists a unique $0 \le z < a b$ with $z \equiv x \pmod{a}$ and $z \equiv y \pmod{b}$. It's not hard to show that $z$ is coprime to $a b$ if and only if $x$ is coprime to $a$ and $y$ is coprime to $b$. Therefore the number of integers coprime to $a b$ equals the product of the numbers of integers coprime to $a$ and to $b$.
3535

36-
3736
- In general, for not coprime $a$ and $b$, the equation
3837

3938
\[\phi(ab) = \phi(a) \cdot \phi(b) \cdot \dfrac{d}{\phi(d)}\]
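The identities above are easy to check directly against a brute-force totient (an illustrative helper; the naming is ours).

```cpp
#include <cassert>
#include <numeric>  // std::gcd (C++17)

// Brute-force Euler's totient: count 1 <= x <= n coprime to n.
// Only meant to verify the identities above on small inputs.
int phi_brute(int n) {
    int cnt = 0;
    for (int x = 1; x <= n; x++)
        if (std::gcd(x, n) == 1)
            cnt++;
    return cnt;
}
```

For instance, $\phi(2^3) = 8 - 4 = 4$, $\phi(5 \cdot 9) = \phi(5)\phi(9)$ for the coprime pair $(5, 9)$, and for $a=4$, $b=6$, $d=\gcd(a,b)=2$ the general formula gives $\phi(24) = \phi(4)\,\phi(6)\cdot 2/\phi(2) = 8$.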

src/algebra/polynomial.md

Lines changed: 78 additions & 72 deletions
@@ -10,40 +10,30 @@ In this article we will cover common operations that you will probably have to d
1010

1111
## Basic Notion and Facts
1212

13-
Consider a polynomial $A(x) = a_0 + a_1 x + \dots + a_n x^n$ such that $a_n \neq 0$.
13+
Let $A(x) = a_0 + a_1 x + \dots + a_n x^n$ be a polynomial over some field $\mathbb F$. For simplicity we will write $A$ instead of $A(x)$ wherever possible, as it will be clear from the context. It is assumed that either $a_n \neq 0$ or $A(x)=0$.
1414

15-
- For simplicity we will write $A$ instead of $A(x)$ wherever possible, which will be understandable from the context.
16-
- We will define the degree of polynomial $A$ as $\deg A = n$. It is convenient to say that $\deg A = -\infty$ for $A(x) = 0$.
17-
- For arbitrary polynomials $A$ and $B$ it holds that $\deg AB = \deg A + \deg B$.
18-
- Polynomials form an euclidean ring which means that for any polynomials $A$ and $B \neq 0$ we can uniquely represent $A$ as:
15+
The degree of a polynomial $A$ with $a_n \neq 0$ is defined as $\deg A = n$. For consistency, the degree of $A(x) = 0$ is defined as $\deg A = -\infty$. With this convention, $\deg AB = \deg A + \deg B$ for arbitrary polynomials $A$ and $B$.
1916

20-
\[ A = D \cdot B + R,~ \deg R < \deg B. \]
17+
Polynomials form a Euclidean ring, which means that for any polynomials $A$ and $B \neq 0$ we can uniquely represent $A$ as $$A = D \cdot B + R,~ \deg R < \deg B.$$ Here $R$ is the remainder of $A$ modulo $B$ and $D$ is called the quotient. If $A$ and $B$ have the same remainder modulo $C$, they're said to be equivalent modulo $C$, which is denoted as $A \equiv B \pmod{C}$. Several important properties of polynomial Euclidean division:
2118

22-
Here $R$ is called remainder of $A$ modulo $B$ and $D$ is called quotient.
19+
- $A$ is a multiple of $B$ if and only if $A \equiv 0 \pmod B$.
2320

24-
- If $A$ and $B$ have the same remainder modulo $C$, they're said to be equivalent modulo $C$, which is denoted as:
21+
- Consequently, $A \equiv B \pmod C$ if and only if $A-B$ is a multiple of $C$.
2522

26-
\[A \equiv B \pmod{C}\]
23+
- In particular, $A \equiv B \pmod{C \cdot D}$ implies $A \equiv B \pmod{C}$.
2724

28-
- For any linear polynomial $x-r$ it holds that:
25+
- For any linear polynomial $x-r$ it holds that $A(x) \equiv A(r) \pmod{x-r}$.
2926

30-
\[A(x) \equiv A(r) \pmod{x-r}\]
27+
- Consequently, $A$ is a multiple of $x-r$ if and only if $A(r)=0$.
3128

32-
- In particular:
33-
34-
\[A(r) = 0 \iff A(x) \equiv 0 \pmod {x-r}\]
35-
36-
Which means that $A$ is divisible by $x-r$ $\iff$ $A(r)=0$.
37-
38-
- If $A \equiv B \pmod{C \cdot D}$ then $A \equiv B \pmod{C}$
39-
- $A \equiv a_0 + a_1 x + \dots + a_{k-1} x^{k-1} \pmod{x^k}$
29+
- For the modulus $x^k$, it holds that $A \equiv a_0 + a_1 x + \dots + a_{k-1} x^{k-1} \pmod{x^k}$.
4030

4131
## Basic implementation
4232
[Here](https://github.com/e-maxx-eng/e-maxx-eng-aux/blob/master/src/polynomial.cpp) you can find the basic implementation of polynomial algebra.
4333

44-
It supports all trivial operations and some other useful methods. The main class is `poly<T>` for polynomials with coefficients of class `T`.
34+
It supports all trivial operations and some other useful methods. The main class is `poly<T>` for polynomials with coefficients of type `T`.
4535

46-
All arithmetic operation `+`, `-`, `*`, `%` and `/` are supported, `%` and `/` standing for remainder and quotient in integer division.
36+
All arithmetic operations `+`, `-`, `*`, `%` and `/` are supported, with `%` and `/` standing for remainder and quotient in Euclidean division.
4737

4838
There is also the class `modular<m>` for performing arithmetic operations on remainders modulo a prime number `m`.
4939

@@ -69,97 +59,113 @@ Other useful functions:
6959

7060
### Multiplication
7161

72-
The very core operation is the multiplication of two polynomials, that is, given polynomial $A$ and $B$:
62+
The very core operation is the multiplication of two polynomials. That is, given the polynomials $A$ and $B$:
7363

7464
$$A = a_0 + a_1 x + \dots + a_n x^n$$
7565

7666
$$B = b_0 + b_1 x + \dots + b_m x^m$$
7767

78-
You have to compute polynomial $C = A \cdot B$:
68+
You have to compute polynomial $C = A \cdot B$, which is defined as $$\boxed{C = \sum\limits_{i=0}^n \sum\limits_{j=0}^m a_i b_j x^{i+j}} = c_0 + c_1 x + \dots + c_{n+m} x^{n+m}.$$
69+
It can be computed in $O(n \log n)$ via the [Fast Fourier transform](./algebra/fft.html), and almost all methods here will use it as a subroutine.
7970

80-
$$\boxed{C = \sum\limits_{i=0}^n \sum\limits_{j=0}^m a_i b_j x^{i+j}} = c_0 + c_1 x + \dots + c_{n+m} x^{n+m}$$
71+
### Inverse series
8172

82-
It can be computed in $O(n \log n)$ via the [Fast Fourier transform](fft.md) and almost all methods here will use it as subroutine.
73+
If $A(0) \neq 0$ there always exists an infinite formal power series $A^{-1}(x) = q_0+q_1 x + q_2 x^2 + \dots$ such that $A^{-1} A = 1$. It often proves useful to compute the first $k$ coefficients of $A^{-1}$ (that is, to compute it modulo $x^k$). There are two major ways to calculate it.
8374

84-
### Inverse series
75+
#### Divide and conquer
76+
77+
This algorithm was mentioned in [Schönhage's article](http://algo.inria.fr/seminars/sem00-01/schoenhage.pdf) and is inspired by [Graeffe's method](https://en.wikipedia.org/wiki/Graeffe's_method). It is known that for $B(x)=A(x)A(-x)$ it holds that $B(x)=B(-x)$, that is, $B(x)$ is an even polynomial. This means it only has non-zero coefficients at even positions and can be represented as $B(x)=T(x^2)$. Thus, we can do the following transition: $$A^{-1}(x) \equiv \frac{1}{A(x)} \equiv \frac{A(-x)}{A(x)A(-x)} \equiv \frac{A(-x)}{T(x^2)} \pmod{x^k}$$
78+
79+
Note that $T(x)$ can be computed with a single multiplication, after which we're only interested in the first half of the coefficients of its inverse series. This effectively reduces the initial problem of computing $A^{-1} \pmod{x^k}$ to computing $T^{-1} \pmod{x^{\lceil k / 2 \rceil}}$.
80+
81+
The complexity of this method can be estimated as $$T(n) = T(n/2) + O(n \log n) = O(n \log n).$$
82+
83+
#### Sieveking–Kung algorithm
84+
85+
The generic process described here is known as Hensel lifting, as it follows from Hensel's lemma. We'll cover it in more detail further below, but for now let's focus on an ad hoc solution. The "lifting" part here means that we start with the approximation $B_0=q_0=a_0^{-1}$, which is $A^{-1} \pmod x$, and then iteratively lift from $\bmod x^a$ to $\bmod x^{2a}$.
86+
87+
Let $B_k \equiv A^{-1} \pmod{x^a}$. The next approximation needs to follow the equation $A B_{k+1} \equiv 1 \pmod{x^{2a}}$ and may be represented as $B_{k+1} = B_k + x^a C$. From this follows the equation $$A(B_k + x^{a}C) \equiv 1 \pmod{x^{2a}}.$$
8588

86-
If $A(0) \neq 0$ there always exists an infinite series $A^{-1}(x) = \sum\limits_{i=0}^\infty a_i'x^i$ such that $A^{-1} A = 1$.
89+
Let $A B_k \equiv 1 + x^a D \pmod{x^{2a}}$. Then the equation above implies $$x^a(D+AC) \equiv 0 \pmod{x^{2a}} \implies D \equiv -AC \pmod{x^a} \implies C \equiv -B_k D \pmod{x^a}.$$
8790

88-
It may be reasonable for us to calculate first $k$ coefficients of $A^{-1}$:
91+
From this, one can obtain the final formula, which is
92+
$$x^a C \equiv -B_k x^a D \equiv B_k(1-AB_k) \pmod{x^{2a}} \implies \boxed{B_{k+1} \equiv B_k(2-AB_k) \pmod{x^{2a}}}$$
8993

90-
1. Let's say that $A^{-1} \equiv B_k \pmod{x^{a}}$. That means that $A B_k \equiv 1 \pmod {x^{a}}$.
91-
2. We want to find $B_{k+1} \equiv B_k + x^{a}C \pmod{x^{2a}}$ such that $A B_{k+1} \equiv 1 \pmod{x^{2a}}$:
92-
93-
\[A(B_k + x^{a}C) \equiv 1 \pmod{x^{2a}}\]
94-
95-
3. Note that since $A B_k \equiv 1 \pmod{x^{a}}$ it also holds that $A B_k \equiv 1 + x^a D \pmod{x^{2a}}$. Thus:
96-
97-
\[x^a(D+AC) \equiv 0 \pmod{x^{2a}} \implies D \equiv -AC \pmod{x^a} \implies C \equiv -B_k D \pmod{x^a}\]
98-
99-
4. From this we obtain that:
100-
101-
\[x^a C \equiv -B_k x^a D \equiv B_k(1-AB_k) \pmod{x^{2a}} \implies \boxed{B_{k+1} \equiv B_k(2-AB_k) \pmod{x^{2a}}}\]
94+
Thus starting with $B_0 \equiv a_0^{-1} \pmod x$ we will compute the sequence $B_k$ such that $AB_k \equiv 1 \pmod{x^{2^k}}$ with the complexity $$T(n) = T(n/2) + O(n \log n) = O(n \log n).$$
10295

103-
Thus starting with $B_0 \equiv a_0^{-1} \pmod x$ we will compute the sequence $B_k$ such that $AB_k \equiv 1 \pmod{x^{2^k}}$ with the complexity: \[T(n) = T(n/2) + O(n \log n) = O(n \log n)\]
96+
The algorithm here might seem a bit more complicated than the first one, but it has very solid and practical reasoning behind it, as well as great generalization potential when looked at from a different perspective, as will be explained further below.
10497
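The lifting rule $B_{k+1} \equiv B_k(2-AB_k) \pmod{x^{2a}}$ can be sketched directly. This is an illustration only (our naming, naive $O(n^2)$ truncated multiplication where a real implementation would use FFT, and $a_0 = 1$ assumed for simplicity).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Product of a and b truncated to the first k coefficients, naive O(n^2).
std::vector<long long> mul_mod_xk(const std::vector<long long>& a,
                                  const std::vector<long long>& b,
                                  int k, long long mod) {
    std::vector<long long> c(k, 0);
    for (int i = 0; i < (int)a.size() && i < k; i++)
        for (int j = 0; j < (int)b.size() && i + j < k; j++)
            c[i + j] = (c[i + j] + a[i] % mod * (b[j] % mod)) % mod;
    return c;
}

// First k coefficients of A^{-1} modulo a prime: double the precision
// with B <- B(2 - AB); assumes a[0] = 1 so that B_0 = 1.
std::vector<long long> inverse_series(const std::vector<long long>& a,
                                      int k, long long mod) {
    std::vector<long long> b = {1};
    for (int len = 1; len < k; len *= 2) {
        int nlen = std::min(2 * len, k);
        std::vector<long long> ab = mul_mod_xk(a, b, nlen, mod);
        std::vector<long long> t(nlen, 0);
        for (int i = 0; i < nlen; i++)  // t = 2 - AB (mod x^nlen)
            t[i] = (((i == 0 ? 2 : 0) - ab[i]) % mod + mod) % mod;
        b = mul_mod_xk(b, t, nlen, mod);
    }
    return b;
}
```

For example, the inverse series of $1-x$ modulo $x^4$ is $1+x+x^2+x^3$.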

105-
### Division with remainder
98+
### Euclidean division
10699

107-
Consider two polynomials $A(x)$ and $B(x)$ of degrees $n$ and $m$. As it was said earlier you can rewrite $A(x)$ as:
100+
Consider two polynomials $A(x)$ and $B(x)$ of degrees $n$ and $m$. As it was said earlier you can rewrite $A(x)$ as
108101

109-
$$A(x) = B(x) D(x) + R(x), \deg R < \deg B$$
102+
$$A(x) = B(x) D(x) + R(x), \deg R < \deg B.$$
110103

111-
Let $n \geq m$, then you can immediately find out that $\deg D = n - m$ and that leading $n-m+1$ coefficients of $A$ don't influence $R$.
104+
Let $n \geq m$; this implies that $\deg D = n - m$ and that the leading $n-m+1$ coefficients of $A$ are not influenced by $R$. It means that you can recover $D(x)$ from the largest $n-m+1$ coefficients of $A(x)$ and $B(x)$ if you consider it as a system of equations.
112105

113-
That means that you can recover $D(x)$ from the largest $n-m+1$ coefficients of $A(x)$ and $B(x)$ if you consider it as a system of equations.
106+
The system of linear equations we're talking about can be written in the following form:
114107

115-
The formal way to do it is to consider the reversed polynomials:
108+
$$\begin{bmatrix} a_n \\ \vdots \\ a_{m+1} \\ a_{m} \end{bmatrix} = \begin{bmatrix}
109+
b_m & \dots & 0 & 0 \\
110+
\vdots & \ddots & \vdots & \vdots \\
111+
\dots & \dots & b_m & 0 \\
112+
\dots & \dots & b_{m-1} & b_m
113+
\end{bmatrix} \begin{bmatrix}d_{n-m} \\ \vdots \\ d_1 \\ d_0\end{bmatrix}$$
114+
115+
Looking at this system, we can conclude that with the introduction of the reversed polynomials
116116

117117
$$A^R(x) = x^nA(x^{-1})= a_n + a_{n-1} x + \dots + a_0 x^n$$
118118

119119
$$B^R(x) = x^m B(x^{-1}) = b_m + b_{m-1} x + \dots + b_0 x^m$$
120120

121121
$$D^R(x) = x^{n-m}D(x^{-1}) = d_{n-m} + d_{n-m-1} x + \dots + d_0 x^{n-m}$$
122122

123-
Using these terms you can rewrite that statement about the largest $n-m+1$ coefficients as:
123+
the system may be rewritten as
124124

125-
$$A^R(x) \equiv B^R(x) D^R(x) \pmod{x^{n-m+1}}$$
125+
$$A^R(x) \equiv B^R(x) D^R(x) \pmod{x^{n-m+1}}.$$
126126

127-
From which you can unambiguously recover all coefficients of $D(x)$:
127+
From this you can unambiguously recover all coefficients of $D(x)$:
128128

129129
$$\boxed{D^R(x) \equiv A^R(x) (B^R(x))^{-1} \pmod{x^{n-m+1}}}$$
130130

131-
And from this in turn you can easily recover $R(x)$ as $R(x) = A(x) - B(x)D(x)$.
131+
And from this, in turn, you can recover $R(x)$ as $R(x) = A(x) - B(x)D(x)$.
132+
133+
Note that the matrix above is a so-called triangular [Toeplitz matrix](https://en.wikipedia.org/wiki/Toeplitz_matrix) and, as we see here, solving a system of linear equations with an arbitrary triangular Toeplitz matrix is, in fact, equivalent to polynomial inversion. Moreover, its inverse matrix is also a triangular Toeplitz matrix and its entries, in the terms used above, are the coefficients of $(B^R(x))^{-1} \pmod{x^{n-m+1}}$.
132134
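To see concretely that the top $n-m+1$ coefficients determine the quotient, the triangular system above can be solved by back substitution, which is exactly classical long division. A sketch (our naming; $B$ is assumed monic so all arithmetic stays in integers):

```cpp
#include <cassert>
#include <vector>

// Back substitution on the triangular Toeplitz system above, i.e. long
// division. Assumes b is monic (b.back() == 1) and deg A >= deg B.
// Returns the quotient D; the remainder R is left in a[0..m-1].
std::vector<long long> quotient(std::vector<long long> a,
                                const std::vector<long long>& b) {
    int n = a.size() - 1, m = b.size() - 1;  // deg A and deg B
    std::vector<long long> d(n - m + 1, 0);
    for (int i = n; i >= m; i--) {
        d[i - m] = a[i] / b[m];              // peel off the leading coefficient
        for (int j = 0; j <= m; j++)
            a[i - m + j] -= d[i - m] * b[j];
    }
    return d;
}
```

For example, $x^3 = (x+1)(x^2-x+1) - 1$, so dividing $x^3$ by $x+1$ gives the quotient $x^2-x+1$.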

133135
## Calculating functions of polynomial
134136

135137
### Newton's method
136-
Let's generalize the inverse series approach.
137-
You want to find a polynomial $P(x)$ satisfying $F(P) = 0$ where $F(x)$ is some function represented as:
138138

139-
$$F(x) = \sum\limits_{i=0}^\infty \alpha_i (x-\beta)^k$$
139+
Let's generalize the Sieveking–Kung algorithm. Consider the equation $F(P) = 0$, where $P(x)$ is an unknown polynomial and $F(x)$ is some polynomial-valued function defined as
140+
141+
$$F(x) = \sum\limits_{i=0}^\infty \alpha_i (x-\beta)^i,$$
142+
143+
where $\beta$ is some constant. It can be proven that if we introduce a new formal variable $y$, we can express $F(x)$ as
144+
145+
$$F(x) = F(y) + (x-y)F'(y) + (x-y)^2 G(x,y),$$
146+
147+
where $F'(x)$ is the derivative formal power series defined as
148+
149+
$$F'(x) = \sum\limits_{i=0}^\infty (i+1)\alpha_{i+1}(x-\beta)^i,$$
150+
151+
and $G(x, y)$ is some formal power series of $x$ and $y$. With this result we can find the solution iteratively.
152+
153+
Let $F(Q_k) \equiv 0 \pmod{x^{a}}$. We need to find $Q_{k+1} \equiv Q_k + x^a C \pmod{x^{2a}}$ such that $F(Q_{k+1}) \equiv 0 \pmod{x^{2a}}$.
154+
155+
Substituting $x = Q_{k+1}$ and $y=Q_k$ in the formula above, we get $$F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) + (Q_{k+1} - Q_k)^2 G(x, y) \pmod{x^{2a}}.$$
140156

141-
Where $\beta$ is some constant. It can be proven that if we introduce a new formal variable $y$, we can express $F(x)$ as:
157+
Since $Q_{k+1} - Q_k \equiv 0 \pmod{x^a}$, it also holds that $(Q_{k+1} - Q_k)^2 \equiv 0 \pmod{x^{2a}}$, thus $$0 \equiv F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) \pmod{x^{2a}}.$$
158+
The last formula gives us the value of $Q_{k+1}$: $$\boxed{Q_{k+1} = Q_k - \dfrac{F(Q_k)}{F'(Q_k)} \pmod{x^{2a}}}$$
142159

143-
$$F(x) = F(y) + (x-y)F'(y) + (x-y)^2 G(x,y)$$
160+
Thus, knowing how to invert polynomials and how to compute $F(Q_k)$, we can find $n$ coefficients of $P$ with the complexity $$T(n) = T(n/2) + f(n),$$ where $f(n)$ is the time needed to compute $F(Q_k)$ and $F'(Q_k)^{-1}$, which is usually $O(n \log n)$.
144161

145-
Where $F'(x)$ is the derivative formal power series defined as $F'(x) = \sum\limits_{i=0}^\infty (k+1)\alpha_{i+1}(x-\beta)^k$ and $G(x, y)$ is some formal power series of $x$ and $y$.
162+
The iterative rule above is known in numerical analysis as [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method).
146163
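As a concrete instance of the boxed iteration, one can compute the square root of a formal power series by taking $F(Q) = Q^2 - A$, which gives $Q_{k+1} = Q_k - (Q_k^2 - A)/(2Q_k)$. A self-contained sketch over `double` coefficients (our naming, naive multiplication, $a_0 = 1$ assumed so that $Q_0 = 1$):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

using poly = std::vector<double>;

// Product truncated to the first k coefficients, naive O(n^2).
poly mul(const poly& a, const poly& b, int k) {
    poly c(k, 0.0);
    for (int i = 0; i < (int)a.size() && i < k; i++)
        for (int j = 0; j < (int)b.size() && i + j < k; j++)
            c[i + j] += a[i] * b[j];
    return c;
}

// Inverse series via the Sieveking-Kung iteration B <- B(2 - AB).
poly inverse(const poly& a, int k) {
    poly b = {1.0 / a[0]};
    for (int len = 1; len < k; len *= 2) {
        int nl = std::min(2 * len, k);
        poly ab = mul(a, b, nl);
        poly t(nl, 0.0);
        for (int i = 0; i < nl; i++)
            t[i] = (i == 0 ? 2.0 : 0.0) - ab[i];
        b = mul(b, t, nl);
    }
    return b;
}

// Newton's method for F(Q) = Q^2 - A: Q <- Q - (Q^2 - A) / (2Q),
// doubling the precision each step; assumes a[0] = 1.
poly sqrt_series(const poly& a, int k) {
    poly q = {1.0};
    for (int len = 1; len < k; len *= 2) {
        int nl = std::min(2 * len, k);
        poly q2 = mul(q, q, nl);
        poly num(nl, 0.0);                       // F(Q) = Q^2 - A mod x^nl
        for (int i = 0; i < nl; i++)
            num[i] = q2[i] - (i < (int)a.size() ? a[i] : 0.0);
        poly upd = mul(num, inverse(mul(q, {2.0}, nl), nl), nl);
        q.resize(nl, 0.0);
        for (int i = 0; i < nl; i++)
            q[i] -= upd[i];
    }
    return q;
}
```

For example, the series square root of $1+x$ starts with $1 + \frac{x}{2} - \frac{x^2}{8} + \frac{x^3}{16} - \dots$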

147-
Given this we can find the coefficients of the solution iteratively:
164+
#### Hensel's lemma
148165

149-
1. Assume that $F(Q_k) \equiv 0 \pmod{x^{a}}$, we want to find $Q_{k+1} \equiv Q_k + x^a C \pmod{x^{2a}}$ such that $F(Q_{k+1}) \equiv 0 \pmod{x^{2a}}$.
150-
2. Pasting $x = Q_{k+1}$ and $y=Q_k$ in the formula above we get:
151-
152-
\[F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) + (Q_{k+1} - Q_k)^2 G(x, y) \pmod x^{2a}\]
153-
154-
3. Since $Q_{k+1} - Q_k \equiv 0 \pmod{x^a}$ we can say that $(Q_{k+1} - Q_k)^2 \equiv 0 \pmod{x^{2a}}$, thus:
155-
156-
\[0 \equiv F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) \pmod{x^{2a}}\]
157-
158-
4. From the last formula we derive the value of $Q_{k+1}$:
159-
160-
\[\boxed{Q_{k+1} = Q_k - \dfrac{F(Q_k)}{F'(Q_k)} \pmod{x^{2a}}}\]
166+
As was mentioned earlier, formally and generically this result is known as [Hensel's lemma](https://en.wikipedia.org/wiki/Hensel%27s_lemma), and it may in fact be used in an even broader sense when we work with a series of nested rings. In this particular case we worked with a sequence of polynomial remainders modulo $x$, $x^2$, $x^3$ and so on.
161167

162-
Thus knowing how to invert arbitrary polynomial and how to compute $F(Q_k)$ quickly, we can find $n$ coefficients of $P$ with the complexity: \[T(n) = T(n/2) + f(n)\] Where $f(n)$ is the maximum of $O(n \log n)$ needed to invert series and time needed to compute $F(Q_k)$ which is usually also $O(n \log n)$.
168+
Another example where Hensel's lifting might be helpful is the so-called [p-adic numbers](https://en.wikipedia.org/wiki/P-adic_number), where we, in fact, work with the sequence of integer remainders modulo $p$, $p^2$, $p^3$ and so on. For example, Newton's method can be used to find all possible [automorphic numbers](https://en.wikipedia.org/wiki/Automorphic_number) (numbers whose square ends with the number itself) in a given number base. The problem is left as an exercise to the reader. You might consider [this](https://acm.timus.ru/problem.aspx?space=1&num=1698) problem to check if your solution works for base $10$.
163169
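For instance, over the $7$-adic integers one can lift a square root of $2$: starting from $3^2 \equiv 2 \pmod 7$, each Newton step $r \mapsto r - (r^2-n)/(2r)$ doubles the precision. A small illustrative sketch (our naming; no overflow care beyond small moduli):

```cpp
#include <cassert>

// Extended Euclid, used to invert f'(r) = 2r modulo q^2.
long long egcd(long long a, long long b, long long& x, long long& y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = egcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

long long inv_mod(long long a, long long m) {
    long long x, y;
    egcd(((a % m) + m) % m, m, x, y);
    return ((x % m) + m) % m;
}

// One Hensel / Newton step for f(x) = x^2 - n: a root modulo q
// is lifted to a root modulo q^2 via r <- r - f(r) / f'(r).
long long hensel_step(long long r, long long n, long long q) {
    long long q2 = q * q;
    long long fr = ((r * r - n) % q2 + q2) % q2;            // f(r) mod q^2
    long long step = fr * inv_mod(2 * r % q2, q2) % q2;     // f(r) / f'(r)
    return ((r - step) % q2 + q2) % q2;
}
```

Starting from $r = 3 \pmod 7$, one step gives $r = 10 \pmod{49}$, then $r = 2166 \pmod{2401}$, each satisfying $r^2 \equiv 2$.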

164170

165171
### Logarithm
@@ -267,7 +273,7 @@ You want to know if $A(x)$ and $B(x)$ have any roots in common. There are two in
267273

268274
### Euclidean algorithm
269275

270-
Well, we already have an [article](euclid-algorithm.md) about it. For an arbitrary euclidean domain you can write the Euclidean algorithm as easy as:
276+
Well, we already have an [article](./algebra/euclid-algorithm.html) about it. For an arbitrary Euclidean domain, you can write the Euclidean algorithm as easily as:
271277

272278
```cpp
273279
template<typename T>

src/combinatorics/burnside.md

Lines changed: 1 addition & 1 deletion
@@ -119,7 +119,7 @@ $$|\text{Classes}| \cdot |G| = \sum_{f} J(f)$$
119119
## Pólya enumeration theorem
120120

121121
The Pólya enumeration theorem is a generalization of Burnside's lemma, and it also provides a more convenient tool for finding the number of equivalence classes.
122-
It should noted that this theorem was already discovered before Pólya by Redfield in 1927, but his publication went unnoticed by mathematicians.
122+
It should be noted that this theorem was already discovered before Pólya by Redfield in 1927, but his publication went unnoticed by mathematicians.
123123
Pólya independently came to the same results in 1937, and his publication was more successful.
124124

125125
Here we discuss only a special case of the Pólya enumeration theorem, which will turn out very useful in practice.
